Efficient Online Inference for Bayesian Nonparametric Relational Models
Dae Il Kim¹, Prem Gopalan², David M. Blei², and Erik B. Sudderth¹
¹Department of Computer Science, Brown University, {daeil,sudderth}@cs.brown.edu
²Department of Computer Science, Princeton University, {pgopalan,blei}@cs.princeton.edu
Abstract
Stochastic block models characterize observed network relationships via latent
community memberships. In large social networks, we expect entities to participate in multiple communities, and the number of communities to grow with the
network size. We introduce a new model for these phenomena, the hierarchical
Dirichlet process relational model, which allows nodes to have mixed membership
in an unbounded set of communities. To allow scalable learning, we derive an online stochastic variational inference algorithm. Focusing on assortative models of
undirected networks, we also propose an efficient structured mean field variational
bound, and online methods for automatically pruning unused communities. Compared to state-of-the-art online learning methods for parametric relational models,
we show significantly improved perplexity and link prediction accuracy for sparse
networks with tens of thousands of nodes. We also showcase an analysis of LittleSis, a large network of who-knows-who at the heights of business and government.
1 Introduction
A wide range of statistical models have been proposed for the discovery of hidden communities
within observed networks. The simplest stochastic block models [20] create communities by clustering nodes, aiming to identify demographic similarities in social networks, or proteins with related
functional interactions. The mixed-membership stochastic blockmodel (MMSB) [1] allows nodes
to be members of multiple communities; this generalization substantially improves predictive accuracy in real-world networks. These models are practically limited by the need to externally specify
the number of latent communities. We propose a novel hierarchical Dirichlet process relational
(HDPR) model, which allows mixed membership in an unbounded collection of latent communities.
By adapting the HDP [18], we allow data-driven inference of the number of communities underlying
a given network, and growth in the community structure as additional nodes are observed.
The infinite relational model (IRM) [10] previously adapted the Dirichlet process to define a nonparametric relational model, but restrictively associates each node with only one community. The
more flexible nonparametric latent feature model (NLFM) [14] uses an Indian buffet process
(IBP) [7] to associate nodes with a subset of latent communities. The infinite multiple membership relational model (IMRM) [15] also uses an IBP to allow multiple memberships, but uses a
non-conjugate observation model to allow more scalable inference for sparse networks. The nonparametric metadata dependent relational (NMDR) model [11] employs a logistic stick-breaking
prior on the node-specific community frequencies, and thereby models relationships between communities and metadata. All of these previous nonparametric relational models employed MCMC
learning algorithms. In contrast, the conditionally conjugate structure of our HDPR model allows us
to easily develop a stochastic variational inference algorithm [17, 2, 9]. Its online structure, which
incrementally updates global community parameters based on random subsets of the full graph, is
highly scalable; our experiments consider social networks with tens of thousands of nodes.
While the HDPR is more broadly applicable, our focus in this paper is on assortative models for
undirected networks, which assume that the probability of linking distinct communities is small.
This modeling choice is appropriate for the clustered relationships found in friendship and collaboration networks. Our work builds on stochastic variational inference methods developed for the
assortative MMSB (aMMSB) [6], but makes three key technical innovations. First, adapting work
on HDP topic models [19], we develop a nested family of variational bounds which assign positive probability to dynamically varying subsets of the unbounded collection of global communities.
Second, we use these nested bounds to dynamically prune unused communities, improving computational speed, predictive accuracy, and model interpretability. Finally, we derive a structured mean
field variational bound which models dependence among the pair of community assignments associated with each edge. Crucially, this avoids the expensive and inaccurate local optimizations required
by naive mean field approximations [1, 6], while maintaining computation and storage requirements
that scale linearly (rather than quadratically) with the number of hypothesized communities.
In this paper, we use our assortative HDPR (aHDPR) model to recover latent communities in social networks previously examined with the aMMSB [6], and demonstrate substantially improved
perplexity scores and link prediction accuracy. We also use our learned community structure to
visualize business and governmental relationships extracted from the LittleSis database [13].
2 Assortative Hierarchical Dirichlet Process Relational Models
We introduce the assortative HDP relational (aHDPR) model, a nonparametric generalization of the
aMMSB for discovering shared memberships in an unbounded collection of latent communities.
We focus on undirected binary graphs with N nodes and E = N(N − 1)/2 possible edges, and let y_ij = y_ji = 1 if there is an edge between nodes i and j. For some experiments, we assume the y_ij variables are only partially observed to compare the predictive performance of different models.
As summarized in the graphical models of Fig. 1, we begin by defining a global Dirichlet process to capture the parameters associated with each community. Letting β_k denote the expected frequency of community k, and γ > 0 the concentration, we define a stick-breaking representation of β:

    β_k = v_k ∏_{ℓ=1}^{k−1} (1 − v_ℓ),    v_k ∼ Beta(1, γ),    k = 1, 2, …    (1)

Adapting a two-layer hierarchical DP [18], the mixed community memberships for each node i are then drawn from a DP with base measure β, π_i ∼ DP(αβ). Here E[π_i | α, β] = β, and small precisions α encourage nodes to place most of their mass on a sparse subset of communities.
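The construction in Eq. (1) is easy to simulate. The sketch below is our illustration, not the authors' code: function names are ours, and the node membership π_i ∼ DP(αβ) is drawn via the Dirichlet distribution the DP induces on the truncated partition.

```python
import random

def stick_breaking(gamma, K, rng):
    """Truncated Eq. (1): beta_k = v_k * prod_{l<k} (1 - v_l), v_k ~ Beta(1, gamma)."""
    beta, remaining = [], 1.0
    for _ in range(K):
        v = rng.betavariate(1.0, gamma)
        beta.append(v * remaining)
        remaining *= 1.0 - v
    beta.append(remaining)  # aggregate mass of all communities beyond the truncation
    return beta

def sample_membership(alpha, beta, rng):
    """pi_i ~ DP(alpha * beta), via the Dirichlet induced on the finite partition."""
    draws = [rng.gammavariate(max(alpha * b, 1e-12), 1.0) for b in beta]
    total = sum(draws)
    return [d / total for d in draws]

rng = random.Random(0)
beta = stick_breaking(gamma=1.0, K=20, rng=rng)
pi = sample_membership(alpha=10.0, beta=beta, rng=rng)
```

Smaller values of α concentrate π_i on fewer communities, matching the sparsity argument above.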
To generate a possible edge y_ij between nodes i and j, we first sample a pair of indicator variables from the corresponding community membership distributions, s_ij ∼ Cat(π_i), r_ij ∼ Cat(π_j). We then determine edge presence as follows:

    p(y_ij = 1 | s_ij = r_ij = k) = w_k,    p(y_ij = 1 | s_ij ≠ r_ij) = ε.    (2)

For our assortative aHDPR model, each community has its own self-connection probability w_k ∼ Beta(τ_a, τ_b). To capture the sparsity of real networks, we fix a very small probability of between-community connection, ε = 10^{−30}. Our HDPR model could easily be generalized to more flexible likelihoods in which each pair of communities k, ℓ has its own interaction probability [1], but motivated by work on the aMMSB [6], we do not pursue this generalization here.
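A minimal sketch of this generative step (Eq. 2), with illustrative names of our own; the fixed ε = 10^{−30} follows the text.

```python
import random

EPS = 1e-30  # fixed between-community connection probability from the text

def sample_edge(pi_i, pi_j, w, rng, eps=EPS):
    """Sample s_ij ~ Cat(pi_i), r_ij ~ Cat(pi_j), then y_ij via Eq. (2)."""
    s = rng.choices(range(len(pi_i)), weights=pi_i)[0]
    r = rng.choices(range(len(pi_j)), weights=pi_j)[0]
    p = w[s] if s == r else eps  # self-connection probability w_k only when communities agree
    y = 1 if rng.random() < p else 0
    return s, r, y

rng = random.Random(1)
# Degenerate memberships make the draw deterministic: both nodes pick community 0.
s, r, y = sample_edge([1.0, 0.0], [1.0, 0.0], w=[1.0, 0.5], rng=rng)
```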
3 Scalable Variational Inference
Previous applications of the MMSB associate a pair of community assignments, s_ij and r_ij, with each potential edge y_ij. In assortative models these variables are strongly dependent, since present edges only have non-negligible probability for consistent community assignments. To improve accuracy and reduce local optima, we thus develop a structured variational method based on joint configurations of these assignment pairs, which we denote by e_ij = (s_ij, r_ij). See Figure 1.

Given this alternative representation, we aim to approximate the joint distribution of the observed edges y, local community assignments e, and global community parameters π, w, β given fixed
Figure 1: Alternative graphical representations of the aHDPR model, in which each of N nodes has mixed membership π_i in an unbounded set of latent communities, w_k are the community self-connection probabilities, and y_ij indicates whether an edge is observed between nodes i and j. Left: Conventional representation, in which source s_ij and receiver r_ij community assignments are independently sampled. Right: Blocked representation in which e_ij = (s_ij, r_ij) denotes the pair of community assignments underlying y_ij.
hyperparameters α, γ, τ. Mean field variational methods minimize the KL divergence between a family of approximating distributions q(e, π, w, β) and the true posterior, or equivalently maximize the following evidence lower bound (ELBO) on the marginal likelihood of the observed edges y:

    L(q) ≜ E_q[log p(y, e, π, w, β | α, γ, τ)] − E_q[log q(e, π, w, β)].    (3)
For the nonparametric aHDPR model, the number of latent community parameters w_k, β_k, and the dimensions of the community membership distributions π_i, are both infinite. Care must thus be taken to define a tractably factorized, and finitely parameterized, variational bound.
3.1 Variational Bounds via Nested Truncations
We begin by defining categorical edge assignment distributions q(e_ij | φ_ij) = Cat(e_ij | φ_ij), where φ_ijkℓ = q(e_ij = (k, ℓ)) = q(s_ij = k, r_ij = ℓ). For some truncation level K, which will be dynamically varied by our inference algorithms, we constrain φ_ijkℓ = 0 if k > K or ℓ > K. Given this restriction, all observed interactions are explained by one of the first (and under the stick-breaking prior, most probable) K communities. The resulting variational distribution has K² parameters. This truncation approach extends prior work for HDP topic models [19, 5].
For the global community parameters, we define an untruncated factorized variational distribution:

    q(β, w | v*, λ) = ∏_{k=1}^{∞} δ_{v_k*}(v_k) Beta(w_k | λ_ka, λ_kb),    β_k(v*) = v_k* ∏_{ℓ=1}^{k−1} (1 − v_ℓ*).    (4)
Our later derivations show that for communities k > K above the truncation level, the optimal variational parameters equal the prior: λ_ka = τ_a, λ_kb = τ_b. These distributions thus need not be explicitly represented. Similarly, the objective depends only on v_k* for k ≤ K, defining K + 1 probabilities: the frequencies of the first K communities, and the aggregate frequency of all others. Matched to this, we associate a (K + 1)-dimensional community membership distribution π_i with each node, where the final component contains the sum of all mass not assigned to the first K. Exploiting the fact that the Dirichlet process induces a Dirichlet distribution on any finite partition, we let q(π_i | θ_i) = Dir(π_i | θ_i), θ_i ∈ ℝ^{K+1}. The overall variational objective is then
    L(q) = Σ_k ( E_q[log p(w_k | τ_a, τ_b)] − E_q[log q(w_k | λ_ka, λ_kb)] + E_q[log p(v_k* | γ)] )    (5)
         + Σ_i ( E_q[log p(π_i | α, β(v*))] − E_q[log q(π_i | θ_i)] )
         + Σ_ij ( E_q[log p(y_ij | w, e_ij)] + E_q[log p(e_ij | π_i, π_j)] − E_q[log q(e_ij | φ_ij)] ).
Unlike truncations of the global stick-breaking process [4], our variational bounds are nested, so that
lower-order approximations are special cases of higher-order ones with some zeroed parameters.
3.2 Structured Variational Inference with Linear Time and Storage Complexity
Conventional coordinate ascent variational inference algorithms iteratively optimize each parameter given fixed values for all others. Community membership and interaction parameters are updated as

    λ_ka = τ_a + Σ_{(i,j)∈E} φ_ijkk y_ij,    λ_kb = τ_b + Σ_{(i,j)∈E} φ_ijkk (1 − y_ij),    (6)
    θ_ik = αβ_k + Σ_{(i,j)∈E} Σ_{ℓ=1}^{K} φ_ijkℓ.    (7)
Here, the final summation is over all potential edges (i, j) linked to node i. Updates for assignment
distributions depend on expectations of log community assignment probabilities:
    E_q[log w_k] = ψ(λ_ka) − ψ(λ_ka + λ_kb),    E_q[log(1 − w_k)] = ψ(λ_kb) − ψ(λ_ka + λ_kb),    (8)
    θ̄_ik ≜ exp{E_q[log π_ik]} = exp{ψ(θ_ik) − ψ(Σ_{ℓ=1}^{K+1} θ_iℓ)},    θ̄_i ≜ Σ_{k=1}^{K} θ̄_ik.    (9)
Given these sufficient statistics, the assignment distributions can be updated as follows:
    φ_ijkk ∝ θ̄_ik θ̄_jk f(w_k, y_ij),    (10)
    φ_ijkℓ ∝ θ̄_ik θ̄_jℓ f(ε, y_ij),    ℓ ≠ k.    (11)

Here, f(w_k, y_ij) = exp{y_ij E_q[log w_k] + (1 − y_ij) E_q[log(1 − w_k)]}. More detailed derivations of related updates have been developed for the MMSB [1].
A naive implementation of these updates would require O(K²) computation and storage for each assignment distribution q(e_ij | φ_ij). Note, however, that the updates for q(w_k | λ_k) in Eq. (6) depend only on the K probabilities φ_ijkk that both nodes select the same community. Using the updates for φ_ijkℓ from Eq. (11), the update of q(π_i | θ_i) in Eq. (7) can be expanded as follows:

    θ_ik = αβ_k + Σ_{(i,j)∈E} [ φ_ijkk + (1/Z_ij) Σ_{ℓ≠k} θ̄_ik θ̄_jℓ f(ε, y_ij) ]    (12)
         = αβ_k + Σ_{(i,j)∈E} [ φ_ijkk + (1/Z_ij) θ̄_ik f(ε, y_ij) (θ̄_j − θ̄_jk) ].
Note that θ̄_j need only be computed once, in O(K) operations. The normalization constant Z_ij, which is defined so that φ_ij is a valid categorical distribution, can also be computed in linear time:

    Z_ij = θ̄_i θ̄_j f(ε, y_ij) + Σ_{k=1}^{K} θ̄_ik θ̄_jk ( f(w_k, y_ij) − f(ε, y_ij) ).    (13)
Finally, to evaluate our variational bound and assess algorithm convergence, we still need to calculate the likelihood and entropy terms that depend on φ_ijkℓ. However, part of this bound can be computed in linear time by caching the partition function Z_ij. See §A.2 for details regarding the full derivation of this ELBO and its extensions.
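The O(K) bookkeeping behind Eqs. (9), (10), and (13) can be sketched as follows. This is our illustration with names of our own choosing, and it uses a standard series approximation of the digamma function ψ rather than a library call.

```python
import math

def digamma(x):
    """psi(x) via upward recurrence plus the standard asymptotic series (x > 0)."""
    r = 0.0
    while x < 6.0:
        r -= 1.0 / x
        x += 1.0
    inv = 1.0 / x
    return r + math.log(x) - 0.5 * inv - inv**2 * (1.0/12 - inv**2 * (1.0/120 - inv**2 / 252))

def theta_bar(theta_i):
    """Eq. (9): theta_bar_ik = exp{psi(theta_ik) - psi(sum_l theta_il)}, first K entries."""
    psi_total = digamma(sum(theta_i))
    return [math.exp(digamma(t) - psi_total) for t in theta_i[:-1]]

def zij_and_diag(tb_i, tb_j, fw, feps):
    """Eq. (13): normalizer Z_ij and the K diagonal responsibilities phi_ijkk, in O(K)."""
    z = sum(tb_i) * sum(tb_j) * feps   # theta_bar_i * theta_bar_j * f(eps, y_ij)
    diag = []
    for a, b, f_k in zip(tb_i, tb_j, fw):
        z += a * b * (f_k - feps)      # correct the K diagonal terms
        diag.append(a * b * f_k)
    return z, [d / z for d in diag]

tb_i = theta_bar([2.0, 1.0, 0.5, 0.8])   # last entry holds the leftover mass
tb_j = theta_bar([1.5, 2.5, 0.3, 0.7])
fw, feps = [2.0, 3.0, 1.5], 0.1          # placeholder values of f(w_k, y_ij), f(eps, y_ij)
z, diag = zij_and_diag(tb_i, tb_j, fw, feps)
```

A brute-force O(K²) sum over all (k, ℓ) pairs yields the same Z_ij, which makes a convenient unit test.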
3.3 Stochastic Variational Inference
Standard variational batch updates become computationally intractable when N is very large. Recent advances in applying stochastic optimization within variational inference [8] showed that if our variational mean-field family of distributions is in the exponential family, we can derive a simple stochastic natural gradient update for our global parameters λ, θ, v. These gradients can be calculated from only a subset of the data; they are noisy approximations of the true natural gradient of the variational objective, but represent an unbiased estimate of that gradient.
To accomplish this, we define a new variational objective with respect to our current set of observations. This function, in expectation, is equivalent to our true ELBO. Taking natural gradients of this new objective with respect to our global variables λ, θ, we have

    ∂λ_ka = (1/g(i,j)) φ_ijkk y_ij + τ_a − λ_ka,    (14)
    ∂θ_ik = (1/g(i,j)) Σ_{ℓ=1}^{K} φ_ijkℓ + αβ_k − θ_ik,    (15)

where the natural gradient ∂λ_kb is symmetric to ∂λ_ka, with y_ij in Eq. (14) replaced by (1 − y_ij). Note that Σ_{ℓ=1}^{K} φ_ijkℓ was shown in the previous section to be computable in O(K). The scaling term g(i,j) is needed for an unbiased update to our expectation. If g(i,j) = 2/(N(N − 1)), this corresponds to a uniform distribution over possible edge selections in our undirected graphs. In general, g(i,j) can be an arbitrary distribution over possible edge selections, such as a distribution over sets of edges, as long as the resulting expectation matches the original ELBO [6]. When referring to the scaling constant associated with sets, we use the notation h(T) instead of g(i,j).
We optimize this ELBO with a Robbins-Monro algorithm which iteratively steps along the direction of this noisy gradient. We specify a learning rate ρ_t ≜ (τ_0 + t)^{−κ} at time t, where κ ∈ (0.5, 1] and τ_0 ≥ 0 downweights the influence of earlier updates. Under the requirements Σ_t ρ_t² < ∞ and Σ_t ρ_t = ∞, we provably converge to a local optimum. For our global variational parameters {λ, θ}, the updates at iteration t are now
    λ_ka^t = λ_ka^{t−1} + ρ_t ∂λ_ka = (1 − ρ_t) λ_ka^{t−1} + ρ_t ( (1/g(i,j)) φ_ijkk y_ij + τ_a ),    (16)
    θ_ik^t = θ_ik^{t−1} + ρ_t ∂θ_ik = (1 − ρ_t) θ_ik^{t−1} + ρ_t ( (1/g(i,j)) Σ_{ℓ=1}^{K} φ_ijkℓ + αβ_k ),    (17)
    v_k^t = (1 − ρ_t) v_k^{t−1} + ρ_t v_k*,    (18)
where v_k* is obtained via a constrained optimization using the gradients derived in §A.3. Updating our global parameters from a single edge observation can result in very poor local optima. In practice, we specify a mini-batch T, a set of unique observations, to determine a more informative noisy gradient. This results in a simple summation over the sufficient statistics associated with the set of observations, as well as a change to g(i,j) to reflect the necessary rescaling of our gradients when samples are no longer drawn uniformly from the dataset.
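Each global update is then a one-line convex combination between the old parameter and a noisy natural-gradient target. A hedged sketch with our own names; the τ_0 and κ values are illustrative, not the paper's settings.

```python
def learning_rate(t, tau0=1024.0, kappa=0.6):
    """Robbins-Monro schedule rho_t = (tau0 + t)^(-kappa), kappa in (0.5, 1]."""
    return (tau0 + t) ** (-kappa)

def svi_update(old, noisy_target, rho):
    """Eqs. (16)-(18): new = (1 - rho) * old + rho * noisy natural-gradient target."""
    return (1.0 - rho) * old + rho * noisy_target

def lambda_target(phi_diag, y, g_ij, tau_a):
    """Noisy target inside Eq. (16): phi_ijkk * y_ij / g(i,j) + tau_a."""
    return phi_diag * y / g_ij + tau_a

lam = 1.0
for t in range(3):  # three noisy steps toward an (artificial) fixed target
    lam = svi_update(lam, lambda_target(0.4, 1, 0.01, 0.5), learning_rate(t))
```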
3.4 Restricted Stratified Node Sampling
Stochastic variational inference provides the ability to choose a sampling scheme that better exploits the sparsity of real-world networks. Given the success of stratified node sampling [6], we use this technique in all our experiments. Briefly, stratified node sampling randomly selects a single node i and either chooses its associated links or a set of edges from m equally sized non-link edge sets. For this mini-batch strategy, h(T) = 1/N for link sets and h(T) = 1/(Nm) for a partitioned non-link set. In [6], all node parameters θ were considered global and updated after each mini-batch. We also treat θ similarly, but maintain a separate learning rate ρ_i for each node. This allows us to update only the nodes relevant to the current mini-batch, and limits the computational cost of this global update. To ensure that the Robbins-Monro conditions are still satisfied, we set the learning rate of nodes outside the mini-batch to 0; when a new mini-batch contains such a node, we resume from its most recent learning rate. This modified subsequence of learning rates maintains the convergence criteria Σ_t ρ_it² < ∞ and Σ_t ρ_it = ∞. We show that this simple modification yields significant improvements in both perplexity and link prediction scores.
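The restriction can be implemented as a per-node step counter; the class and default constants below are our own illustrative sketch of this bookkeeping.

```python
class NodeRates:
    """Per-node Robbins-Monro schedule: nodes outside the current mini-batch get
    rate 0, and resume from their own most recent step when next sampled."""
    def __init__(self, tau0=1024.0, kappa=0.6):
        self.tau0, self.kappa = tau0, kappa
        self.steps = {}  # node id -> number of updates applied so far

    def rate(self, node, in_minibatch):
        if not in_minibatch:
            return 0.0  # theta_i is left untouched this iteration
        t = self.steps.get(node, 0)
        self.steps[node] = t + 1
        return (self.tau0 + t) ** (-self.kappa)

rates = NodeRates()
r0 = rates.rate(5, in_minibatch=True)
r_skip = rates.rate(5, in_minibatch=False)  # node absent from this mini-batch
r1 = rates.rate(5, in_minibatch=True)       # resumes from its previous step
```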
3.5 Pruning Moves
Our nested truncation requires setting an initial number of communities K. A large truncation lets
the posterior find the best number of communities, but can be computationally costly. A truncation
set too small may not be expressive enough to capture the best approximate posterior. To remedy
this, we define a set of pruning moves aimed at improving inference by removing communities that
have very small posterior mass. Pruning moves provide the model with a more parsimonious and
interpretable latent structure, and may also significantly reduce the computational cost of subsequent
iterations. Figure 2 provides an example illustrating how pruning occurs in our model.
To determine communities which are good candidates for pruning, for each community k we first compute π̄_k = (Σ_{i=1}^{N} θ_ik) / (Σ_{i=1}^{N} Σ_{ℓ=1}^{K} θ_iℓ). Any community for which π̄_k < (log K)/N for t* = N/2 consecutive iterations is then evaluated in more depth. We scale t* with the number of nodes N to ensure that a broad set of observations is accounted for. To estimate an approximate but still informative ELBO for the pruned model, we must associate a set of relevant observations with each pruning candidate. In particular, we approximate the pruned ELBO L(q^prune) by considering observations y_ij among pairs of nodes with significant mass in the pruned community. We also calculate L(q^old) from these same observations, but with the old model parameters. We then compare these two values to accept or reject the pruning of the low-weight community.
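The candidate test can be sketched as follows (our code, with illustrative names; θ is the N × K matrix of variational counts and history[k] tracks the run length of low-mass iterations).

```python
import math

def prune_candidates(theta, history, t_star):
    """Flag community k when pi_bar_k < log(K)/N for t_star consecutive calls."""
    N, K = len(theta), len(theta[0])
    total = sum(sum(row) for row in theta)
    threshold = math.log(K) / N
    flagged = []
    for k in range(K):
        mass = sum(row[k] for row in theta) / total  # pi_bar_k
        history[k] = history[k] + 1 if mass < threshold else 0
        if history[k] >= t_star:
            flagged.append(k)
    return flagged

theta = [[5.0, 5.0, 0.01] for _ in range(4)]  # community 2 holds negligible mass
history = [0, 0, 0]
flagged = []
for _ in range(2):  # t_star = N/2 = 2 consecutive low-mass iterations
    flagged = prune_candidates(theta, history, t_star=2)
```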
4 Experiments
In this section we perform experiments that compare the performance of the aHDPR model to the
aMMSB. We show significant gains in AUC and perplexity scores by using the restricted form of
Figure 2: Pruning extraneous communities. Suppose that community k = 3 is considered for removal. We specify a new model by redistributing its mass Σ_{i=1}^{N} θ_i3 uniformly across the remaining communities θ_iℓ, ℓ ≠ 3. An analogous operation is used to generate the pruned versions of the remaining variational parameters. To accurately estimate the true change in ELBO for this pruning, we select the n* = 10 nodes with greatest participation θ_i3 in community 3. Let S denote the set of all pairs of these nodes, and y_{ij∈S} their observed relationships. From these observations we can estimate φ̂_{ij∈S} for a model in which community k = 3 is pruned, and a corresponding ELBO L(q^prune). Using the data from the same sub-graph, but the old un-pruned model parameters, we estimate an alternative ELBO L(q^old). We accept if L(q^prune) > L(q^old), and reject otherwise. Because our structured mean-field approach provides simple direct updates for φ̂_{ij∈S}, the calculation of L(q^old) and L(q^prune) is efficient.
stratified node sampling, a quick K-means initialization¹ for θ, and our efficient structured mean-field approach combined with pruning moves. We perform a detailed comparison on a synthetic toy dataset, as well as the real-world relativity collaboration network, using a variety of metrics to show the benefits of each contribution. We then show significant improvements over the baseline aMMSB model in both AUC and perplexity metrics on several real-world datasets previously analyzed by [6]. Finally, we perform a qualitative analysis on the LittleSis network and demonstrate the usefulness of our learned latent community structure for creating visualizations of large networks. For additional details on the parameters used in these experiments, please see §A.1.
4.1 Synthetic and Collaboration Networks
The synthetic network we use for testing is generated from the standards and software outlined
in [12] to produce realistic synthetic networks with overlapping communities and power-law degree
distributions. For these purposes, we set the number of nodes N = 1000, with the minimum degree
per node set to 10 and its maximum to 60. On this network the true number of latent communities
was found to be K = 56. Our real-world networks include 5 undirected networks originally ranging from N = 5,242 to N = 27,770. These raw networks, however, contain several disconnected components. Both the aMMSB and aHDPR achieve highest posterior probability by assigning each connected component distinct, non-overlapping communities; effectively, they analyze each connected sub-graph independently. To focus on the more challenging problem of identifying overlapping community structure, we take the largest connected component of each graph for analysis.
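Extracting the largest connected component is a standard breadth-first search; a minimal sketch (our code, not the authors' preprocessing pipeline):

```python
from collections import deque

def largest_connected_component(adj):
    """Return the node set of the largest component of an undirected adjacency dict."""
    seen, best = set(), set()
    for start in adj:
        if start in seen:
            continue
        seen.add(start)
        component, queue = {start}, deque([start])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in seen:
                    seen.add(v)
                    component.add(v)
                    queue.append(v)
        if len(component) > len(best):
            best = component
    return best

adj = {1: [2], 2: [1, 3], 3: [2], 4: [5], 5: [4], 6: []}
giant = largest_connected_component(adj)
```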
Initialization and Node-Specific Learning Rates. The upper-left panels in Fig. 3 compare different aHDPR inference algorithms and the perplexity scores achieved on various networks. Here we demonstrate the benefits of initializing θ via K-means, and of our restricted stratified node sampling procedure. For our random initializations, we initialized θ in the same fashion as the aMMSB. Using a combination of both modifications, we achieve the best perplexity scores on these datasets. The node-specific learning rates intuitively restrict updates for θ to batches containing relevant observations, while our K-means initialization quickly provides a reasonable single-membership partition as a starting point for inference.
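The single-membership initialization described in footnote 1 reduces to filling each row of θ from the hard assignments; a sketch with our own names (the N − 1 and α values follow the footnote):

```python
def init_theta(assignments, K, alpha):
    """Footnote 1: theta_{i, z_i} = N - 1 and theta_{il} = alpha for l != z_i.
    Rows have K + 1 entries; the last holds the leftover-mass component."""
    N = len(assignments)
    theta = []
    for z in assignments:
        row = [alpha] * (K + 1)
        row[z] = N - 1.0
        theta.append(row)
    return theta

theta = init_theta(assignments=[0, 2, 1], K=3, alpha=0.5)
```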
Naive Mean-Field vs. Structured Mean-Field. The naive mean-field approach is the aHDPR
model where the community indicator assignments are split into sij and rij . This can result in
severe local optima due to their coupling as seen in some experiments in Fig. 4. The aMMSB in some
¹Our K-means initialization views the rows of the adjacency matrix as distinct data points and produces a single community assignment z_i for each node. To initialize community membership distributions based on these assignments, we set θ_{i,z_i} = N − 1 and θ_{iℓ} = α for ℓ ≠ z_i.
Figure 3: The upper left shows benefits of a restricted update and a K-means initialization for stratified node sampling on both synthetic and relativity networks. The upper right shows the sensitivity of the aMMSB as K varies versus the aHDPR. The lower left shows various perplexity scores for the synthetic and relativity networks, with the best performing model (aHDPR-Pruning) scoring an average AUC of 0.9675 ± 0.0017 on the synthetic network and 0.9466 ± 0.0062 on the relativity network. The lower right shows the pruning process for the toy data and the final K communities discovered on our real-world networks.
instances performs better than the naive mean-field approach, but this can be due to differences in our
initialization procedures. However, by changing our inference procedure to an efficient structured
mean-field approach, we see significant improvements across all datasets.
Benefits of Pruning Moves. Pruning moves were applied every N/2 iterations, with a maximum of K/10 communities removed per move. If the number of prune candidates exceeded K/10, the K/10 communities with the lowest mass were chosen. The lower right portion of Fig. 3 shows that our pruning moves can learn close to the true underlying number of clusters (K = 56) on a synthetic network even when significantly altering its initial K. Across several real-world networks, there was low variance between runs with respect to the final K communities discovered, suggesting a degree of robustness. Furthermore, pruning moves improved perplexity and AUC scores on every dataset, while reducing the computational cost of inference.
Figure 4: Analysis of four real-world collaboration networks. The figures above show that the aHDPR with
pruning moves has the best performance, in terms of both perplexity (top) and AUC (bottom) scores.
4.2 The LittleSis Network
The LittleSis network was extracted from the website (http://littlesis.org), an organization that acts as a watchdog network to connect the dots between the world's most powerful people
Figure 5: The LittleSis Network. Near the center in violet we have prominent government figures such as Larry H. Summers (71st US Treasury Secretary) and Robert E. Rubin (70th US Treasury Secretary) with ties to several distinct communities, representative of their high posterior bridgeness. Conversely, within the beige colored community, individuals with small posterior bridgeness such as Wendy Neu can reflect a career that was highly focused in one organization. A quick internet search shows that she is currently the CEO of Hugo Neu, a green-technology firm where she has worked for over 30 years. An analysis of this type of network might provide insights into the structures of power that shape our world and the key individuals that define them.
and organizations. Our final graph contained 18,831 nodes and 626,881 edges, which represents
a relatively sparse graph with an edge density of 0.35% (for details on how this dataset was processed, see §A.3). For this analysis, we ran the aHDPR with pruning on the entire dataset using
the same settings as in our previous experiments. We then took the top 200 degree nodes and generated weighted edges based on a variational distance between their learned expected variational
posteriors, such that $d_{ij} = 1 - \frac{1}{2}\|\mathbb{E}_q[\pi_i] - \mathbb{E}_q[\pi_j]\|_1$. This weighted edge was then included in our visualization software
[3] if $d_{ij} > 0.5$. Node sizes were determined by posterior bridgeness [16], where
$$b_i = 1 - \sqrt{\tfrac{K}{K-1} \sum_{k=1}^{K} \left(\mathbb{E}_q[\pi_{ik}] - \tfrac{1}{K}\right)^2},$$
which measures the extent to which a node is involved
with multiple communities. Larger nodes have greater posterior bridgeness, while node colors represent each node's dominant community membership. Our learned latent communities can drive these types
of visualizations, which otherwise might not have been possible given the raw subgraph (see §A.4).
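The variational distance $d_{ij}$ and posterior bridgeness $b_i$ above can be computed directly from the expected membership vectors. A minimal sketch (function and variable names are our own, not from any released code):

```python
import numpy as np

def variational_distance(pi_i, pi_j):
    """Similarity between two expected community-membership vectors:
    d_ij = 1 - ||pi_i - pi_j||_1 / 2.
    Identical memberships give d_ij = 1; disjoint ones give d_ij = 0."""
    return 1.0 - 0.5 * np.abs(np.asarray(pi_i) - np.asarray(pi_j)).sum()

def bridgeness(pi):
    """Posterior bridgeness of one node with membership vector pi over K
    communities: b = 1 - sqrt(K/(K-1) * sum_k (pi_k - 1/K)^2).
    b = 1 for a uniform membership (a perfect bridge between communities)
    and b = 0 when membership is concentrated on a single community."""
    pi = np.asarray(pi, dtype=float)
    K = pi.size
    return 1.0 - np.sqrt(K / (K - 1.0) * np.sum((pi - 1.0 / K) ** 2))
```

With these helpers, an edge between nodes i and j would be drawn whenever `variational_distance(pi_i, pi_j) > 0.5`, and node i would be scaled by `bridgeness(pi_i)`.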
5 Discussion
Our model represents the first Bayesian nonparametric relational model to use a stochastic variational approach for efficient inference. Our pruning moves allow us to save computation and improve inference in a principled manner, while our efficient structured mean-field inference procedure
helps us escape local optima. Future extensions of interest include advanced split-merge moves
that can grow the number of communities, as well as extending these scalable inference algorithms
to more sophisticated relational models.
References
[1] E. Airoldi, D. Blei, S. Fienberg, and E. Xing. Mixed membership stochastic blockmodels. JMLR, 9, 2008.
[2] S. Amari. Natural gradient works efficiently in learning. Neural Computation, 10(2):251–276, 1998.
[3] M. Bastian, S. Heymann, and M. Jacomy. Gephi: An open source software for exploring and manipulating networks, 2009.
[4] D. M. Blei and M. I. Jordan. Variational methods for the Dirichlet process. In ICML, 2004.
[5] M. Bryant and E. B. Sudderth. Truly nonparametric online variational inference for hierarchical Dirichlet processes. In NIPS, pages 2708–2716, 2012.
[6] P. Gopalan, D. M. Mimno, S. Gerrish, M. J. Freedman, and D. M. Blei. Scalable inference of overlapping communities. In NIPS, pages 2258–2266, 2012.
[7] T. L. Griffiths and Z. Ghahramani. Infinite latent feature models and the Indian buffet process. Technical Report 2005-001, Gatsby Computational Neuroscience Unit, May 2005.
[8] M. Hoffman, D. Blei, C. Wang, and J. Paisley. Stochastic variational inference. arXiv preprint arXiv:1206.7051, 2012.
[9] M. D. Hoffman, D. M. Blei, and F. R. Bach. Online learning for latent Dirichlet allocation. In NIPS, pages 856–864, 2010.
[10] C. Kemp, J. Tenenbaum, T. Griffiths, T. Yamada, and N. Ueda. Learning systems of concepts with an infinite relational model. In AAAI, 2006.
[11] D. Kim, M. C. Hughes, and E. B. Sudderth. The nonparametric metadata dependent relational model. In ICML, 2012.
[12] A. Lancichinetti and S. Fortunato. Benchmarks for testing community detection algorithms on directed and weighted graphs with overlapping communities. Phys. Rev. E, 80(1):016118, July 2009.
[13] littlesis.org. LittleSis is a free database detailing the connections between powerful people and organizations, June 2009.
[14] K. Miller, T. Griffiths, and M. Jordan. Nonparametric latent feature models for link prediction. In NIPS, 2009.
[15] M. Mørup, M. N. Schmidt, and L. K. Hansen. Infinite multiple membership relational modeling for complex networks. In Machine Learning for Signal Processing (MLSP), 2011 IEEE International Workshop on, pages 1–6. IEEE, 2011.
[16] T. Nepusz, A. Petróczi, L. Négyessy, and F. Bazsó. Fuzzy communities and the concept of bridgeness in complex networks. Phys. Rev. E, 77(1 Pt 2):016107, 2008.
[17] M. Sato. Online model selection based on the variational Bayes. Neural Computation, 13(7):1649–1681, 2001.
[18] Y. W. Teh, M. I. Jordan, M. J. Beal, and D. M. Blei. Hierarchical Dirichlet processes. JASA, 101(476):1566–1581, Dec. 2006.
[19] Y. W. Teh, K. Kurihara, and M. Welling. Collapsed variational inference for HDP. In NIPS, 2007.
[20] Y. Wang and G. Wong. Stochastic blockmodels for directed graphs. JASA, 82(397):8–19, 1987.
Learning with Noisy Labels
Nagarajan Natarajan
Inderjit S. Dhillon
Pradeep Ravikumar
Department of Computer Science, University of Texas, Austin.
{naga86,inderjit,pradeepr}@cs.utexas.edu
Ambuj Tewari
Department of Statistics, University of Michigan, Ann Arbor.
[email protected]
Abstract
In this paper, we theoretically study the problem of binary classification in the
presence of random classification noise: the learner, instead of seeing the true labels, sees labels that have independently been flipped with some small probability.
Moreover, random label noise is class-conditional: the flip probability depends
on the class. We provide two approaches to suitably modify any given surrogate
loss function. First, we provide a simple unbiased estimator of any loss, and obtain performance bounds for empirical risk minimization in the presence of iid
data with noisy labels. If the loss function satisfies a simple symmetry condition,
we show that the method leads to an efficient algorithm for empirical minimization. Second, by leveraging a reduction of risk minimization under noisy labels
to classification with weighted 0-1 loss, we suggest the use of a simple weighted
surrogate loss, for which we are able to obtain strong empirical risk bounds. This
approach has a very remarkable consequence: methods used in practice, such
as biased SVM and weighted logistic regression, are provably noise-tolerant. On
a synthetic non-separable dataset, our methods achieve over 88% accuracy even
when 40% of the labels are corrupted, and are competitive with respect to recently
proposed methods for dealing with label noise on several benchmark datasets.
1 Introduction
Designing supervised learning algorithms that can learn from data sets with noisy labels is a problem
of great practical importance. Here, by noisy labels, we refer to the setting where an adversary has
deliberately corrupted the labels [Biggio et al., 2011], which otherwise arise from some ?clean?
distribution; learning from only positive and unlabeled data [Elkan and Noto, 2008] can also be cast
in this setting. Given the importance of learning from such noisy labels, a great deal of practical
work has been done on the problem (see, for instance, the survey article by Nettleton et al. [2010]).
The theoretical machine learning community has also investigated the problem of learning from
noisy labels. Soon after the introduction of the noise-free PAC model, Angluin and Laird [1988]
proposed the random classification noise (RCN) model, where each label is flipped independently
with some probability $\eta \in [0, 1/2)$. It is known [Aslam and Decatur, 1996, Cesa-Bianchi et al.,
1999] that finiteness of the VC dimension characterizes learnability in the RCN model. Similarly, in
the online mistake bound model, the parameter that characterizes learnability without noise, the
Littlestone dimension, continues to characterize learnability even in the presence of random label
noise [Ben-David et al., 2009]. These results are for the so-called "0-1" loss. Learning with convex
losses has been addressed only under limiting assumptions like separability or uniform noise rates
[Manwani and Sastry, 2013].
In this paper, we consider risk minimization in the presence of class-conditional random label noise
(abbreviated CCN). The data consists of iid samples from an underlying "clean" distribution $D$.
The learning algorithm sees samples drawn from a noisy version $D_\rho$ of $D$, where the noise rates
depend on the class label. To the best of our knowledge, general results in this setting have not been
obtained before. To this end, we develop two methods for suitably modifying any given surrogate
loss function $\ell$, and show that minimizing the sample average of the modified proxy loss function
$\tilde{\ell}$ leads to provable risk bounds, where the risk is calculated using the original loss $\ell$ on the clean
distribution.
In our first approach, the modified or proxy loss is an unbiased estimate of the loss function. The
idea of using unbiased estimators is well known in stochastic optimization [Nemirovski et al., 2009],
and regret bounds can be obtained for learning with noisy labels in an online learning setting (see
Appendix B). Nonetheless, we bring out some important aspects of using unbiased estimators of
loss functions for empirical risk minimization under CCN. In particular, we give a simple symmetry
condition on the loss (enjoyed, for instance, by the Huber, logistic, and squared losses) to ensure that
the proxy loss is also convex. The hinge loss does not satisfy the symmetry condition, and thus leads
to a non-convex problem. We nonetheless provide a convex surrogate, leveraging the fact that the
non-convex hinge problem is "close" to a convex problem (Theorem 6).

Our second approach is based on the fundamental observation that the minimizer of the risk (i.e.
probability of misclassification) under the noisy distribution differs from that of the clean distribution only in where it thresholds $\eta(x) = P(Y = 1 \mid x)$ to decide the label. In order to correct for the
threshold, we then propose a simple weighted loss function, where the weights are label-dependent,
as the proxy loss function. Our analysis builds on the notion of consistency of weighted loss functions studied by Scott [2012]. This approach leads to the very remarkable result that appropriately
weighted losses, like the biased SVMs studied by Liu et al. [2003], are robust to CCN.
The main results and the contributions of the paper are summarized below:
1. To the best of our knowledge, we are the first to provide guarantees for risk minimization under
random label noise in the general setting of convex surrogates, without any assumptions on the
true distribution.
2. We provide two different approaches to suitably modifying any given surrogate loss function,
that surprisingly lead to very similar risk bounds (Theorems 3 and 11). These general results
include some existing results for random classification noise as special cases.
3. We resolve an elusive theoretical gap in the understanding of practical methods like biased SVM
and weighted logistic regression: they are provably noise-tolerant (Theorem 11).
4. Our proxy losses are easy to compute: both methods yield efficient algorithms.
5. Experiments on benchmark datasets show that the methods are robust even at high noise rates.
The outline of the paper is as follows. We introduce the problem setting and terminology in Section
2. In Section 3, we give our first main result concerning the method of unbiased estimators. In
Section 4, we give our second and third main results for certain weighted loss functions. We present
experimental results on synthetic and benchmark data sets in Section 5.
1.1 Related Work
Starting from the work of Bylander [1994], many noise tolerant versions of the perceptron algorithm
have been developed. This includes the passive-aggressive family of algorithms [Crammer et al.,
2006], confidence weighted learning [Dredze et al., 2008], AROW [Crammer et al., 2009] and the
NHERD algorithm [Crammer and Lee, 2010]. The survey article by Khardon and Wachman [2007]
provides an overview of some of this literature. A Bayesian approach to the problem of noisy labels
is taken by Graepel and Herbrich [2000] and Lawrence and Schölkopf [2001]. As Adaboost is very
sensitive to label noise, random label noise has also been considered in the context of boosting. Long
and Servedio [2010] prove that any method based on a convex potential is inherently ill-suited to
random label noise. Freund [2009] proposes a boosting algorithm based on a non-convex potential
that is empirically seen to be robust against random label noise.
Stempfel and Ralaivola [2009] proposed the minimization of an unbiased proxy for the case of
the hinge loss. However the hinge loss leads to a non-convex problem. Therefore, they proposed
heuristic minimization approaches for which no theoretical guarantees were provided (we address
the issue in Section 3.1). Cesa-Bianchi et al. [2011] focus on online learning algorithms where
they only need unbiased estimates of the gradient of the loss to provide guarantees for learning with
noisy data. However, they consider a much harder noise model where instances as well as labels
are noisy. Because of the harder noise model, they necessarily require multiple noisy copies per
clean example and the unbiased estimation schemes also become fairly complicated. In particular,
their techniques break down for non-smooth losses such as the hinge loss. In contrast, we show
that unbiased estimation is always possible in the more benign random classification noise setting.
Manwani and Sastry [2013] consider whether empirical risk minimization of the loss itself on the
noisy data is a good idea when the goal is to obtain small risk under the clean distribution. But
it holds promise only for 0-1 and squared losses. Therefore, if empirical risk minimization over
noisy samples has to work, we necessarily have to change the loss used to calculate the empirical
risk. More recently, Scott et al. [2013] study the problem of classification under class-conditional
noise model. However, they approach the problem from a different set of assumptions: the noise
rates are not known, and the true distribution satisfies a certain "mutual irreducibility" property.
Furthermore, they do not give any efficient algorithm for the problem.
2 Problem Setup and Background
Let $D$ be the underlying true distribution generating $(X, Y) \in \mathcal{X} \times \{\pm 1\}$ pairs from which $n$ iid
samples $(X_1, Y_1), \ldots, (X_n, Y_n)$ are drawn. After injecting random classification noise (independently for each $i$) into these samples, corrupted samples $(X_1, \tilde{Y}_1), \ldots, (X_n, \tilde{Y}_n)$ are obtained. The
class-conditional random noise model (CCN, for short) is given by:
$$P(\tilde{Y} = -1 \mid Y = +1) = \rho_{+1}, \quad P(\tilde{Y} = +1 \mid Y = -1) = \rho_{-1}, \quad \text{and} \quad \rho_{+1} + \rho_{-1} < 1.$$
The corrupted samples are what the learning algorithm sees. We will assume that the noise rates
$\rho_{+1}$ and $\rho_{-1}$ are known¹ to the learner. Let the distribution of $(X, \tilde{Y})$ be $D_\rho$. Instances are denoted
by $x \in \mathcal{X} \subseteq \mathbb{R}^d$. Noisy labels are denoted by $\tilde{y}$.
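To make the CCN model concrete, here is a small sketch of how the label corruption could be simulated; the function name and the use of NumPy are our own choices, not part of the paper:

```python
import numpy as np

def corrupt_labels(y, rho_pos, rho_neg, rng):
    """Apply class-conditional noise (CCN) to labels y in {-1, +1}:
    a +1 flips to -1 with probability rho_pos (the paper's rho_{+1}),
    a -1 flips to +1 with probability rho_neg (the paper's rho_{-1})."""
    y = np.asarray(y)
    flip_prob = np.where(y == 1, rho_pos, rho_neg)  # per-example flip rate
    flips = rng.random(y.shape) < flip_prob
    return np.where(flips, -y, y)

# usage: corrupt a small clean sample
rng = np.random.default_rng(0)
y_clean = rng.choice([-1, 1], size=8)
y_noisy = corrupt_labels(y_clean, rho_pos=0.3, rho_neg=0.1, rng=rng)
```

The learner only ever observes `y_noisy`; the clean labels exist solely inside the simulation.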
Let $f : \mathcal{X} \to \mathbb{R}$ be some real-valued decision function. The risk of $f$ w.r.t. the 0-1 loss is given by
$R_D(f) = \mathbb{E}_{(X,Y) \sim D}\big[\mathbf{1}_{\{\mathrm{sign}(f(X)) \neq Y\}}\big]$. The optimal decision function (called Bayes optimal) that
minimizes $R_D$ over all real-valued decision functions is given by $f^\star(x) = \mathrm{sign}(\eta(x) - 1/2)$ where
$\eta(x) = P(Y = 1 \mid x)$. We denote by $R^\star$ the corresponding Bayes risk under the clean distribution
$D$, i.e. $R^\star = R_D(f^\star)$. Let $\ell(t, y)$ denote a loss function where $t \in \mathbb{R}$ is a real-valued prediction and
$y \in \{\pm 1\}$ is a label. Let $\tilde{\ell}(t, \tilde{y})$ denote a suitably modified $\ell$ for use with noisy labels (obtained using
methods in Sections 3 and 4). It is helpful to summarize the three important quantities associated
with a decision function $f$:

1. Empirical $\tilde{\ell}$-risk on the observed sample: $\hat{R}_{\tilde{\ell}}(f) := \frac{1}{n} \sum_{i=1}^{n} \tilde{\ell}(f(X_i), \tilde{Y}_i)$.
2. As $n$ grows, we expect $\hat{R}_{\tilde{\ell}}(f)$ to be close to the $\tilde{\ell}$-risk under the noisy distribution $D_\rho$:
$R_{\tilde{\ell}, D_\rho}(f) := \mathbb{E}_{(X, \tilde{Y}) \sim D_\rho}\big[\tilde{\ell}(f(X), \tilde{Y})\big]$.
3. $\ell$-risk under the "clean" distribution $D$: $R_{\ell, D}(f) := \mathbb{E}_{(X, Y) \sim D}\big[\ell(f(X), Y)\big]$.

Typically, $\ell$ is a convex function that is calibrated with respect to an underlying loss function such as
the 0-1 loss. $\ell$ is said to be classification-calibrated [Bartlett et al., 2006] if and only if there exists a
convex, invertible, nondecreasing transformation $\psi_\ell$ (with $\psi_\ell(0) = 0$) such that $\psi_\ell(R_D(f) - R^\star) \leq
R_{\ell, D}(f) - \min_f R_{\ell, D}(f)$. The interpretation is that we can control the excess 0-1 risk by controlling
the excess $\ell$-risk.

If $f$ is not quantified in a minimization, then it is implicit that the minimization is over all measurable
functions. Though most of our results apply to a general function class $\mathcal{F}$, we instantiate $\mathcal{F}$ to be the
set of hyperplanes of bounded $L_2$ norm, $\mathcal{W} = \{w \in \mathbb{R}^d : \|w\|_2 \leq W_2\}$, for certain specific results.
Proofs are provided in Appendix A.
3 Method of Unbiased Estimators
Let $\mathcal{F}$ be a fixed class of real-valued decision functions $f : \mathcal{X} \to \mathbb{R}$, over which the empirical risk is
minimized. The method of unbiased estimators uses the noise rates to construct an unbiased estimator $\tilde{\ell}(t, \tilde{y})$ for the loss $\ell(t, y)$. However, in the experiments we will tune the noise rate parameters
through cross-validation. The following key lemma tells us how to construct unbiased estimators of
the loss from noisy labels.

Lemma 1. Let $\ell(t, y)$ be any bounded loss function. Then, if we define,
$$\tilde{\ell}(t, y) := \frac{(1 - \rho_{-y})\, \ell(t, y) - \rho_{y}\, \ell(t, -y)}{1 - \rho_{+1} - \rho_{-1}},$$
we have, for any $t, y$,
$$\mathbb{E}_{\tilde{y}}\big[\tilde{\ell}(t, \tilde{y})\big] = \ell(t, y).$$

¹This is not necessary in practice. See Section 5.
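Lemma 1 is straightforward to implement. The sketch below (helper names are ours, used for illustration) numerically verifies the unbiasedness property for the logistic loss:

```python
import numpy as np

def logistic_loss(t, y):
    # a standard classification-calibrated surrogate: log(1 + exp(-y t))
    return np.log1p(np.exp(-y * t))

def unbiased_loss(loss, t, y, rho_pos, rho_neg):
    """Lemma 1 estimator:
    ((1 - rho_{-y}) * loss(t, y) - rho_y * loss(t, -y)) / (1 - rho_{+1} - rho_{-1}),
    where rho_{+1} = rho_pos and rho_{-1} = rho_neg."""
    rho_y = rho_pos if y == 1 else rho_neg      # noise rate of the observed sign
    rho_flip = rho_neg if y == 1 else rho_pos   # noise rate of the flipped sign
    return ((1 - rho_flip) * loss(t, y) - rho_y * loss(t, -y)) / (1 - rho_pos - rho_neg)
```

Taking the expectation over a noisy label drawn from the CCN model recovers the clean loss exactly, which is the content of the lemma.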
We can try to learn a good predictor in the presence of label noise by minimizing the sample average
$$\hat{f} \in \operatorname*{argmin}_{f \in \mathcal{F}} \hat{R}_{\tilde{\ell}}(f).$$
By unbiasedness of $\tilde{\ell}$ (Lemma 1), we know that, for any fixed $f \in \mathcal{F}$, the above sample average
converges to $R_{\ell, D}(f)$ even though the former is computed using noisy labels whereas the latter
depends on the true labels. The following result gives a performance guarantee for this procedure in
terms of the Rademacher complexity of the function class $\mathcal{F}$. The main idea in the proof is to use
the contraction principle for Rademacher complexity to get rid of the dependence on the proxy loss
$\tilde{\ell}$. The price to pay for this is $L_\rho$, the Lipschitz constant of $\tilde{\ell}$.

Lemma 2. Let $\ell(t, y)$ be $L$-Lipschitz in $t$ (for every $y$). Then, with probability at least $1 - \delta$,
$$\max_{f \in \mathcal{F}} \big| \hat{R}_{\tilde{\ell}}(f) - R_{\tilde{\ell}, D_\rho}(f) \big| \leq 2 L_\rho \mathfrak{R}(\mathcal{F}) + \sqrt{\frac{\log(1/\delta)}{2n}},$$
where $\mathfrak{R}(\mathcal{F}) := \mathbb{E}_{X_i, \epsilon_i}\big[\sup_{f \in \mathcal{F}} \frac{1}{n} \sum_{i=1}^{n} \epsilon_i f(X_i)\big]$ is the Rademacher complexity of the function class $\mathcal{F}$
and $L_\rho \leq 2L/(1 - \rho_{+1} - \rho_{-1})$ is the Lipschitz constant of $\tilde{\ell}$. Note that the $\epsilon_i$'s are iid Rademacher
(symmetric Bernoulli) random variables.
The above lemma immediately leads to a performance bound for $\hat{f}$ with respect to the clean distribution $D$. Our first main result is stated in the theorem below.

Theorem 3 (Main Result 1). With probability at least $1 - \delta$,
$$R_{\ell, D}(\hat{f}) \leq \min_{f \in \mathcal{F}} R_{\ell, D}(f) + 4 L_\rho \mathfrak{R}(\mathcal{F}) + 2\sqrt{\frac{\log(1/\delta)}{2n}}.$$
Furthermore, if $\ell$ is classification-calibrated, there exists a nondecreasing function $\zeta_\ell$ with $\zeta_\ell(0) = 0$
such that,
$$R_D(\hat{f}) - R^\star \leq \zeta_\ell\left( \min_{f \in \mathcal{F}} R_{\ell, D}(f) - \min_f R_{\ell, D}(f) + 4 L_\rho \mathfrak{R}(\mathcal{F}) + 2\sqrt{\frac{\log(1/\delta)}{2n}} \right).$$
The term on the right hand side involves both approximation error (that is small if $\mathcal{F}$ is large) and
estimation error (that is small if $\mathcal{F}$ is small). However, by appropriately increasing the richness of
the class $\mathcal{F}$ with sample size, we can ensure that the misclassification probability of $\hat{f}$ approaches
the Bayes risk of the true distribution. This is despite the fact that the method of unbiased estimators
computes the empirical minimizer $\hat{f}$ on a sample from the noisy distribution. Obtaining the optimal
empirical minimizer $\hat{f}$ is efficient if $\tilde{\ell}$ is convex. Next, we address the issue of convexity of $\tilde{\ell}$.
3.1 Convex losses and their estimators

Note that the loss $\tilde{\ell}$ may not be convex even if we start with a convex $\ell$. An example is provided
by the familiar hinge loss $\ell_{\mathrm{hin}}(t, y) = [1 - yt]_+$. Stempfel and Ralaivola [2009] showed that $\tilde{\ell}_{\mathrm{hin}}$ is
not convex in general (of course, when $\rho_{+1} = \rho_{-1} = 0$, it is convex). Below we provide a simple
condition to ensure convexity of $\tilde{\ell}$.

Lemma 4. Suppose $\ell(t, y)$ is convex and twice differentiable almost everywhere in $t$ (for every $y$)
and also satisfies the symmetry property
$$\forall t \in \mathbb{R}, \quad \ell''(t, y) = \ell''(t, -y).$$
Then $\tilde{\ell}(t, y)$ is also convex in $t$.

Examples satisfying the conditions of the lemma above are the squared loss $\ell_{\mathrm{sq}}(t, y) = (t - y)^2$, the
logistic loss $\ell_{\mathrm{log}}(t, y) = \log(1 + \exp(-ty))$ and the Huber loss:
$$\ell_{\mathrm{Hub}}(t, y) = \begin{cases} -4yt & \text{if } yt < -1 \\ (t - y)^2 & \text{if } -1 \leq yt \leq 1 \\ 0 & \text{if } yt > 1 \end{cases}$$
Consider the case where $\tilde{\ell}$ turns out to be non-convex when $\ell$ is convex, as in $\tilde{\ell}_{\mathrm{hin}}$. In the online
learning setting (where the adversary chooses a sequence of examples, and the prediction of a learner
at round $i$ is based on the history of $i - 1$ examples with independently flipped labels), we could
use a stochastic mirror descent type algorithm [Nemirovski et al., 2009] to arrive at risk bounds (see
Appendix B) similar to Theorem 3. Then, we only need the expected loss to be convex and therefore
$\ell_{\mathrm{hin}}$ does not present a problem. At first blush, it may appear that we do not have much hope of
obtaining $\hat{f}$ in the iid setting efficiently. However, Lemma 2 provides a clue.
We will now focus on the function class $\mathcal{W}$ of hyperplanes. Even though $\hat{R}_{\tilde{\ell}}(w)$ is non-convex, it
is uniformly close to $R_{\tilde{\ell}, D_\rho}(w)$. Since $R_{\tilde{\ell}, D_\rho}(w) = R_{\ell, D}(w)$, this shows that $\hat{R}_{\tilde{\ell}}(w)$ is uniformly
close to a convex function over $w \in \mathcal{W}$. The following result shows that we can therefore approximately minimize $F(w) = \hat{R}_{\tilde{\ell}}(w)$ by minimizing the biconjugate $F^{\star\star}$. Recall that the (Fenchel)
biconjugate $F^{\star\star}$ is the largest convex function that minorizes $F$.

Lemma 5. Let $F : \mathcal{W} \to \mathbb{R}$ be a non-convex function defined on a function class $\mathcal{W}$ such that it is $\varepsilon$-close
to a convex function $G : \mathcal{W} \to \mathbb{R}$:
$$\forall w \in \mathcal{W}, \quad |F(w) - G(w)| \leq \varepsilon.$$
Then any minimizer of $F^{\star\star}$ is a $2\varepsilon$-approximate (global) minimizer of $F$.

Now, the following theorem establishes bounds for the case when $\tilde{\ell}$ is non-convex, via the solution
obtained by minimizing the convex function $F^{\star\star}$.

Theorem 6. Let $\ell$ be a loss, such as the hinge loss, for which $\tilde{\ell}$ is non-convex. Let $\mathcal{W} = \{w :
\|w\|_2 \leq W_2\}$, let $\|X_i\|_2 \leq X_2$ almost surely, and let $\hat{w}_{\mathrm{approx}}$ be any (exact) minimizer of the
convex problem
$$\min_{w \in \mathcal{W}} F^{\star\star}(w),$$
where $F^{\star\star}(w)$ is the (Fenchel) biconjugate of the function $F(w) = \hat{R}_{\tilde{\ell}}(w)$. Then, with probability
at least $1 - \delta$, $\hat{w}_{\mathrm{approx}}$ is a $2\varepsilon$-minimizer of $\hat{R}_{\tilde{\ell}}(\cdot)$ where
$$\varepsilon = \frac{2 L_\rho X_2 W_2}{\sqrt{n}} + \sqrt{\frac{\log(1/\delta)}{2n}}.$$
Therefore, with probability at least $1 - \delta$,
$$R_{\ell, D}(\hat{w}_{\mathrm{approx}}) \leq \min_{w \in \mathcal{W}} R_{\ell, D}(w) + 4\varepsilon.$$
Numerical or symbolic computation of the biconjugate of a multidimensional function is difficult,
in general, but can be done in special cases. It will be interesting to see if techniques from Computational Convex Analysis [Lucet, 2010] can be used to efficiently compute the biconjugate above.
4 Method of label-dependent costs

We develop the method of label-dependent costs from two key observations. First, the Bayes classifier for the noisy distribution, denoted $\tilde{f}^\star$, for the case $\rho_{+1} \neq \rho_{-1}$, simply uses a threshold different
from $1/2$. Second, $\tilde{f}^\star$ is the minimizer of a "label-dependent 0-1 loss" on the noisy distribution. The
framework we develop here generalizes known results for the uniform noise rate setting $\rho_{+1} = \rho_{-1}$
and offers a more fundamental insight into the problem. The first observation is formalized in the
lemma below.

Lemma 7. Denote $P(Y = 1 \mid X)$ by $\eta(X)$ and $P(\tilde{Y} = 1 \mid X)$ by $\tilde{\eta}(X)$. The Bayes classifier under
the noisy distribution, $\tilde{f}^\star = \operatorname*{argmin}_f \mathbb{E}_{(X, \tilde{Y}) \sim D_\rho}\big[\mathbf{1}_{\{\mathrm{sign}(f(X)) \neq \tilde{Y}\}}\big]$, is given by,
$$\tilde{f}^\star(x) = \mathrm{sign}(\tilde{\eta}(x) - 1/2) = \mathrm{sign}\left(\eta(x) - \frac{1/2 - \rho_{-1}}{1 - \rho_{+1} - \rho_{-1}}\right).$$
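Lemma 7 can be checked numerically. The sketch below (helper names are ours) verifies that thresholding the noisy class-probability at 1/2 is equivalent to thresholding the clean class-probability at the corrected value:

```python
import numpy as np

def eta_noisy(eta, rho_pos, rho_neg):
    """P(noisy label = 1 | x) under CCN: a true +1 survives with
    probability 1 - rho_pos, and a true -1 flips to +1 with probability rho_neg."""
    return (1 - rho_pos) * eta + rho_neg * (1 - eta)

def corrected_threshold(rho_pos, rho_neg):
    """Threshold on the clean eta(x) used by the noisy Bayes classifier."""
    return (0.5 - rho_neg) / (1 - rho_pos - rho_neg)

rho_pos, rho_neg = 0.3, 0.1
etas = np.linspace(0.0, 1.0, 101)
# the two forms of the classifier in Lemma 7 induce the same labeling
pred_a = np.sign(eta_noisy(etas, rho_pos, rho_neg) - 0.5)
pred_b = np.sign(etas - corrected_threshold(rho_pos, rho_neg))
```

The equivalence holds because the noisy probability is an affine, increasing function of the clean one, so the two sign conditions differ only by a positive rescaling.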
Interestingly, this "noisy" Bayes classifier can also be obtained as the minimizer of a weighted 0-1
loss, which, as we will show, allows us to "correct" for the threshold under the noisy distribution.
Let us first introduce the notion of "label-dependent" costs for binary classification. We can write
the 0-1 loss as a label-dependent loss as follows:
$$\mathbf{1}_{\{\mathrm{sign}(f(X)) \neq Y\}} = \mathbf{1}_{\{Y = 1\}} \mathbf{1}_{\{f(X) \leq 0\}} + \mathbf{1}_{\{Y = -1\}} \mathbf{1}_{\{f(X) > 0\}}.$$
We realize that the classical 0-1 loss is unweighted. Now, we could consider an $\alpha$-weighted version
of the 0-1 loss as:
$$U_\alpha(t, y) = (1 - \alpha)\, \mathbf{1}_{\{y = 1\}} \mathbf{1}_{\{t \leq 0\}} + \alpha\, \mathbf{1}_{\{y = -1\}} \mathbf{1}_{\{t > 0\}}, \quad (1)$$
where $\alpha \in (0, 1)$. In fact we see that minimization w.r.t. the 0-1 loss is equivalent to that w.r.t.
$U_{1/2}(f(X), Y)$. It is not a coincidence that the Bayes optimal $f^\star$ has a threshold $1/2$. The following
lemma [Scott, 2012] shows that in fact for any $\alpha$-weighted 0-1 loss, the minimizer thresholds $\eta(x)$
at $\alpha$.
Lemma 8 (α-weighted Bayes optimal [Scott, 2012]). Define the U_α-risk under distribution D as

    R_{α,D}(f) = E_{(X,Y)∼D}[U_α(f(X), Y)].

Then f*_α(x) = sign(η(x) − α) minimizes the U_α-risk.
Now consider the risk of f w.r.t. the α-weighted 0-1 loss under the noisy distribution D_ρ:

    R_{α,D_ρ}(f) = E_{(X,Ỹ)∼D_ρ}[U_α(f(X), Ỹ)].

At this juncture, we are interested in the following question: does there exist an α ∈ (0, 1) such that the minimizer of the U_α-risk under the noisy distribution D_ρ has the same sign as that of the Bayes optimal f*? We now present our second main result in the following theorem, which makes a stronger statement: the U_α-risk under the noisy distribution D_ρ is linearly related to the 0-1 risk under the clean distribution D. The corollary of the theorem answers the question in the affirmative.
Theorem 9 (Main Result 2). For the choices

    α* = (1 − ρ₊₁ + ρ₋₁)/2   and   A_ρ = (1 − ρ₊₁ − ρ₋₁)/2,

there exists a constant B_X that is independent of f such that, for all functions f,

    R_{α*,D_ρ}(f) = A_ρ R_D(f) + B_X.
Corollary 10. The α*-weighted Bayes optimal classifier under the noisy distribution coincides with that of the 0-1 loss under the clean distribution:

    argmin_f R_{α*,D_ρ}(f) = argmin_f R_D(f) = sign(η(x) − 1/2).
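Theorem 9's linear relation can be verified exactly on a small discrete distribution (a sketch of ours; the four-point distribution and the noise rates are arbitrary choices): the offset R_{α*,D_ρ}(f) − A_ρ R_D(f) is the same constant B_X for every classifier f.

```python
import numpy as np

rho_p, rho_m = 0.3, 0.1                      # noise rates rho_{+1}, rho_{-1}
alpha_star = (1 - rho_p + rho_m) / 2         # alpha* from Theorem 9
A_rho = (1 - rho_p - rho_m) / 2

px  = np.array([0.2, 0.3, 0.1, 0.4])         # P(X = x) on four points
eta = np.array([0.9, 0.4, 0.7, 0.2])         # eta(x) = P(Y = 1 | x)
eta_noisy = (1 - rho_p) * eta + rho_m * (1 - eta)   # P(Ytilde = 1 | x)

def risk_clean(pred):
    # 0-1 risk R_D(f) of a sign vector over the four points
    pred = np.asarray(pred)
    return float(np.sum(px * np.where(pred > 0, 1 - eta, eta)))

def risk_alpha_noisy(pred, alpha):
    # alpha-weighted 0-1 risk under the noisy distribution D_rho
    pred = np.asarray(pred)
    return float(np.sum(px * np.where(pred > 0,
                                      alpha * (1 - eta_noisy),
                                      (1 - alpha) * eta_noisy)))

# R_{alpha*, D_rho}(f) - A_rho R_D(f) equals the same constant B_X for every f
offsets = [risk_alpha_noisy(f, alpha_star) - A_rho * risk_clean(f)
           for f in ([1, 1, 1, 1], [-1, 1, -1, 1], [-1, -1, -1, -1], [1, -1, 1, -1])]
assert np.allclose(offsets, [offsets[0]] * 4)
```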
4.1 Proposed Proxy Surrogate Losses

Consider any surrogate loss function ℓ, and the following decomposition:

    ℓ(t, y) = 1{y=1} ℓ₁(t) + 1{y=−1} ℓ₋₁(t),

where ℓ₁ and ℓ₋₁ are the partial losses of ℓ. Analogous to the 0-1 loss case, we can define an α-weighted loss function (Eqn. (1)) and the corresponding α-weighted ℓ-risk. Can we hope to minimize an α-weighted ℓ-risk with respect to the noisy distribution D_ρ and yet bound the excess 0-1 risk with respect to the clean distribution D? Indeed, the α* specified in Theorem 9 is precisely what we need. We are ready to state our third main result, which relies on a generalized notion of classification calibration for α-weighted losses [Scott, 2012]:
Theorem 11 (Main Result 3). Consider the empirical risk minimization problem with noisy labels:

    f̂_α = argmin_{f ∈ F} (1/n) Σ_{i=1}^n ℓ_α(f(X_i), Ỹ_i).

Define ℓ_α as an α-weighted margin loss function of the form:

    ℓ_α(t, y) = (1 − α) 1{y=1} ℓ(t) + α 1{y=−1} ℓ(−t),      (1)

where ℓ : R → [0, ∞) is a convex loss function with Lipschitz constant L such that it is classification-calibrated (i.e. ℓ′(0) < 0). Then, for the choices α* and A_ρ in Theorem 9, there exists a nondecreasing function ξ_{ℓ_{α*}} with ξ_{ℓ_{α*}}(0) = 0, such that the following bound holds with probability at least 1 − δ:

    R_D(f̂_{α*}) − R* ≤ A_ρ⁻¹ ξ_{ℓ_{α*}}( min_{f ∈ F} R_{α*,D_ρ}(f) − min_f R_{α*,D_ρ}(f) + 4L R(F) + 2√(log(1/δ)/(2n)) ),

where R(F) denotes the Rademacher complexity of the function class F.
Aside from bounding the excess 0-1 risk under the clean distribution, the importance of the above theorem lies in the fact that it prescribes an efficient algorithm for empirical risk minimization with noisy labels: ℓ_α is convex if ℓ is convex. Thus, for any surrogate loss function including ℓ_hin, f̂_{α*} can be efficiently computed using the method of label-dependent costs. Note that the choice of α* above is quite intuitive. For instance, when ρ₋₁ ≪ ρ₊₁ (this occurs in settings such as Liu et al. [2003], where there are only positive and unlabeled examples), α* < 1 − α* and therefore mistakes on positives are penalized more than those on negatives. This makes intuitive sense, since an observed negative may well have been a positive, but the other way around is unlikely. In practice we do not need to know α*, i.e. the noise rates ρ₊₁ and ρ₋₁: the optimization problem involves just one parameter, which can be tuned by cross-validation (see Section 5).
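As an illustration of the resulting algorithm (our own minimal sketch, not the paper's implementation), the following minimizes the α*-weighted logistic loss on noisily labeled data by plain gradient descent; the synthetic data, noise rates, step size, and iteration count are all arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(0)

# two Gaussian classes in 2D, then class-conditional label flips
n = 2000
y = rng.choice([-1, 1], size=n)
X = rng.normal(loc=np.outer(y, [2.0, 2.0]), scale=1.0)
rho_p, rho_m = 0.3, 0.1
flip = np.where(y == 1, rng.random(n) < rho_p, rng.random(n) < rho_m)
y_noisy = np.where(flip, -y, y)

alpha_star = (1 - rho_p + rho_m) / 2
# label-dependent costs: weight (1 - alpha*) on observed positives, alpha* on negatives
w_sample = np.where(y_noisy == 1, 1 - alpha_star, alpha_star)

# gradient descent on (1/n) sum_i w_i log(1 + exp(-y~_i w.x_i))
Xb = np.hstack([X, np.ones((n, 1))])            # append a bias term
w = np.zeros(3)
for _ in range(500):
    margins = y_noisy * (Xb @ w)
    e = np.exp(np.clip(margins, -30.0, 30.0))   # clip to avoid overflow
    grad = -(Xb * (w_sample * y_noisy / (1 + e))[:, None]).mean(axis=0)
    w -= 0.5 * grad

acc_clean = np.mean(np.sign(Xb @ w) == y)       # accuracy w.r.t. the clean labels
assert acc_clean > 0.9
```

Since the weighted logistic loss is convex, any off-the-shelf convex solver (or a cost-sensitive SVM, as in the C-SVM baseline) can replace the hand-rolled gradient loop.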
5 Experiments

We show the robustness of the proposed algorithms to increasing rates of label noise on synthetic and real-world datasets. We compare the performance of the two proposed methods with state-of-the-art methods for dealing with random classification noise. We divide each dataset (randomly) into 3 training and test sets. We use a cross-validation set to tune the parameters specific to the algorithms. Accuracy of a classification algorithm is defined as the fraction of examples in the test set classified correctly with respect to the clean distribution. For given noise rates ρ₊₁ and ρ₋₁, labels of the training data are flipped accordingly and average accuracy over 3 train-test splits is computed². For evaluation, we choose a representative algorithm based on each of the two proposed methods: ℓ̃_log for the method of unbiased estimators, and the widely used C-SVM [Liu et al., 2003] method (which applies different costs on positives and negatives) for the method of label-dependent costs.
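The label-flipping step used to corrupt the training data can be sketched as follows (our own helper, not the authors' code; the sample size and rates are illustrative):

```python
import numpy as np

def flip_labels(y, rho_p, rho_m, rng):
    """Flip each +1 label with prob. rho_p and each -1 label with prob. rho_m."""
    y = np.asarray(y)
    flip = np.where(y == 1, rng.random(y.shape) < rho_p, rng.random(y.shape) < rho_m)
    return np.where(flip, -y, y)

rng = np.random.default_rng(42)
y = rng.choice([-1, 1], size=100_000)
y_noisy = flip_labels(y, rho_p=0.3, rho_m=0.1, rng=rng)
frac_p = np.mean(y_noisy[y == 1] == -1)   # empirical flip rate on positives
frac_m = np.mean(y_noisy[y == -1] == 1)   # empirical flip rate on negatives
assert abs(frac_p - 0.3) < 0.01 and abs(frac_m - 0.1) < 0.01
```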
5.1 Synthetic data

First, we use the synthetic 2D linearly separable dataset shown in Figure 1(a). We observe from experiments that our methods achieve over 90% accuracy even when ρ₊₁ = ρ₋₁ = 0.4. Figure 1 shows the performance of ℓ̃_log on the dataset for different noise rates. Next, we use a 2D UCI benchmark non-separable dataset ("banana"). The dataset and classification results using C-SVM (in fact, for uniform noise rates, α* = 1/2, so it is just the regular SVM) are shown in Figure 2. The results for higher noise rates are impressive, as observed from Figures 2(d) and 2(e). The "banana" dataset has been used in previous research on classification with noisy labels. In particular, the Random Projection classifier [Stempfel and Ralaivola, 2007], which learns a kernel perceptron in the presence of noisy labels, achieves about 84% accuracy at ρ₊₁ = ρ₋₁ = 0.3 as observed from our experiments (as well as shown by Stempfel and Ralaivola [2007]), and the random hyperplane sampling method [Stempfel et al., 2007] gets about the same accuracy at (ρ₊₁, ρ₋₁) = (0.2, 0.4) (as reported by Stempfel et al. [2007]). Contrast these with C-SVM, which achieves about 90% accuracy at ρ₊₁ = ρ₋₁ = 0.2 and over 88% accuracy at ρ₊₁ = ρ₋₁ = 0.4.
Figure 1: Classification of the linearly separable synthetic data set using ℓ̃_log. The noise-free data is shown in the leftmost panel. Plots (b) and (c) show training data corrupted with noise rates (ρ₊₁ = ρ₋₁ = ρ) 0.2 and 0.4, respectively. Plots (d) and (e) show the corresponding classification results. The algorithm achieves 98.5% accuracy even at 0.4 noise rate per class. (Best viewed in color.)
Figure 2: Classification of the "banana" data set using C-SVM. The noise-free data is shown in (a). Plots (b) and (c) show training data corrupted with noise rates (ρ₊₁ = ρ₋₁ = ρ) 0.2 and 0.4, respectively. Note that for ρ₊₁ = ρ₋₁, α* = 1/2 (i.e., C-SVM reduces to the regular SVM). Plots (d) and (e) show the corresponding classification results (accuracies are 90.6% and 88.5%, respectively). Even when 40% of the labels are corrupted (ρ₊₁ = ρ₋₁ = 0.4), the algorithm recovers the class structures, as observed from plot (e). Note that the accuracy of the method at ρ = 0 is 90.8%.
5.2 Comparison with state-of-the-art methods on UCI benchmark

We compare our methods with three state-of-the-art methods for dealing with random classification noise: the Random Projection (RP) classifier [Stempfel and Ralaivola, 2007], NHERD

² Note that training and cross-validation are done on the noisy training data in our setting. To account for randomness in the flips used to simulate a given noise rate, we repeat each experiment 3 times, with independent corruptions of the data set for the same setting of ρ₊₁ and ρ₋₁, and present the mean accuracy over the trials.
Dataset (d, n₊, n₋)    Noise rates              ℓ̃_log   C-SVM   PAM     NHERD   RP
Breast cancer          ρ₊₁ = ρ₋₁ = 0.2          70.12   67.85   69.34   64.90   69.38
(9, 77, 186)           ρ₊₁ = 0.3, ρ₋₁ = 0.1     70.07   67.81   67.79   65.68   66.28
                       ρ₊₁ = ρ₋₁ = 0.4          67.79   67.79   67.05   56.50   54.19
Diabetes               ρ₊₁ = ρ₋₁ = 0.2          76.04   66.41   69.53   73.18   75.00
(8, 268, 500)          ρ₊₁ = 0.3, ρ₋₁ = 0.1     75.52   66.41   65.89   74.74   67.71
                       ρ₊₁ = ρ₋₁ = 0.4          65.89   65.89   65.36   71.09   62.76
Thyroid                ρ₊₁ = ρ₋₁ = 0.2          87.80   94.31   96.22   78.49   84.02
(5, 65, 150)           ρ₊₁ = 0.3, ρ₋₁ = 0.1     80.34   92.46   86.85   87.78   83.12
                       ρ₊₁ = ρ₋₁ = 0.4          83.10   66.32   70.98   85.95   57.96
German                 ρ₊₁ = ρ₋₁ = 0.2          71.80   68.40   63.80   67.80   62.80
(20, 300, 700)         ρ₊₁ = 0.3, ρ₋₁ = 0.1     71.40   68.40   67.80   67.80   67.40
                       ρ₊₁ = ρ₋₁ = 0.4          67.19   68.40   67.80   54.80   59.79
Heart                  ρ₊₁ = ρ₋₁ = 0.2          82.96   61.48   69.63   82.96   72.84
(13, 120, 150)         ρ₊₁ = 0.3, ρ₋₁ = 0.1     84.44   57.04   62.22   81.48   79.26
                       ρ₊₁ = ρ₋₁ = 0.4          57.04   54.81   53.33   52.59   68.15
Image                  ρ₊₁ = ρ₋₁ = 0.2          82.45   91.95   92.90   77.76   65.29
(18, 1188, 898)        ρ₊₁ = 0.3, ρ₋₁ = 0.1     82.55   89.26   89.55   79.39   70.66
                       ρ₊₁ = ρ₋₁ = 0.4          63.47   63.47   73.15   69.61   64.72

Table 1: Comparative study of classification algorithms on UCI benchmark datasets. Entries within 1% of the best in each row are in bold. All the methods except the NHERD variants (which are not kernelizable) use a Gaussian kernel with width 1. All method-specific parameters are estimated through cross-validation. The proposed methods (ℓ̃_log and C-SVM) are competitive across all the datasets. We show the best performing NHERD variant ("project" or "exact") in each case.
[Crammer and Lee, 2010] (project and exact variants³), and the perceptron algorithm with margin (PAM), which was shown to be robust to label noise by Khardon and Wachman [2007]. We use the standard UCI classification datasets, preprocessed and made available by Gunnar Rätsch (http://theoval.cmp.uea.ac.uk/matlab). For kernelized algorithms, we use a Gaussian kernel with width set to the best width obtained by tuning it for a traditional SVM on the noise-free data. For ℓ̃_log, we use the ρ₊₁ and ρ₋₁ that give the best accuracy in cross-validation. For C-SVM, we fix one of the weights to 1 and tune the other. Table 1 shows the performance of the methods for different settings of noise rates. C-SVM is competitive in 4 out of 6 datasets (Breast cancer, Thyroid, German and Image), while relatively poorer in the other two. On the other hand, ℓ̃_log is competitive in all the data sets, and performs the best more often. When about 20% of the labels are corrupted, the uniform (ρ₊₁ = ρ₋₁ = 0.2) and non-uniform (ρ₊₁ = 0.3, ρ₋₁ = 0.1) cases have similar accuracies in all the data sets, for both C-SVM and ℓ̃_log. Overall, we observe that the proposed methods are competitive and are able to tolerate moderate to high amounts of label noise in the data. Finally, in domains where noise rates are approximately known, our methods can benefit from the knowledge of noise rates. Our analysis shows that the methods are fairly robust to misspecification of noise rates (see Appendix C for results).
6 Conclusions and Future Work
We addressed the problem of risk minimization in the presence of random classification noise, and obtained general results in this setting using the methods of unbiased estimators and weighted loss functions. We have given efficient algorithms for both methods, with provable guarantees for learning under label noise. The proposed algorithms are easy to implement, and their classification performance is impressive even at high noise rates and competitive with state-of-the-art methods on benchmark data. The algorithms already give a new family of methods that can be applied to the positive-unlabeled learning problem [Elkan and Noto, 2008], but the implications of the methods for this setting should be carefully analysed. We could also consider harder noise models, such as label noise depending on the example, and "nasty label noise", where the labels to flip are chosen adversarially.
7 Acknowledgments
This research was supported by DOD Army grant W911NF-10-1-0529 to ID; PR acknowledges the
support of ARO via W911NF-12-1-0390 and NSF via IIS-1149803, IIS-1320894.
³ A family of methods proposed by Crammer and coworkers [Crammer et al., 2006, 2009, Dredze et al., 2008] could be compared to, but Crammer and Lee [2010] show that the 2 NHERD variants perform the best.
References
D. Angluin and P. Laird. Learning from noisy examples. Mach. Learn., 2(4):343–370, 1988.
Javed A. Aslam and Scott E. Decatur. On the sample complexity of noise-tolerant learning. Inf. Process. Lett., 57(4):189–195, 1996.
Peter L. Bartlett, Michael I. Jordan, and Jon D. McAuliffe. Convexity, classification, and risk bounds. Journal of the American Statistical Association, 101(473):138–156, 2006.
Shai Ben-David, Dávid Pál, and Shai Shalev-Shwartz. Agnostic online learning. In Proceedings of the 22nd Conference on Learning Theory, 2009.
Battista Biggio, Blaine Nelson, and Pavel Laskov. Support vector machines under adversarial label noise. Journal of Machine Learning Research - Proceedings Track, 20:97–112, 2011.
Tom Bylander. Learning linear threshold functions in the presence of classification noise. In Proc. of the 7th COLT, pages 340–347, NY, USA, 1994. ACM.
Nicolò Cesa-Bianchi, Eli Dichterman, Paul Fischer, Eli Shamir, and Hans Ulrich Simon. Sample-efficient strategies for learning in the presence of noise. J. ACM, 46(5):684–719, 1999.
Nicolò Cesa-Bianchi, Shai Shalev-Shwartz, and Ohad Shamir. Online learning of noisy data. IEEE Transactions on Information Theory, 57(12):7907–7931, 2011.
K. Crammer and D. Lee. Learning via gaussian herding. In Advances in NIPS 23, pages 451–459, 2010.
Koby Crammer, Ofer Dekel, Joseph Keshet, Shai Shalev-Shwartz, and Yoram Singer. Online passive-aggressive algorithms. J. Mach. Learn. Res., 7:551–585, 2006.
Koby Crammer, Alex Kulesza, and Mark Dredze. Adaptive regularization of weight vectors. In Advances in NIPS 22, pages 414–422, 2009.
Mark Dredze, Koby Crammer, and Fernando Pereira. Confidence-weighted linear classification. In Proceedings of the Twenty-Fifth ICML, pages 264–271, 2008.
C. Elkan and K. Noto. Learning classifiers from only positive and unlabeled data. In Proc. of the 14th ACM SIGKDD intl. conf. on Knowledge discovery and data mining, pages 213–220, 2008.
Yoav Freund. A more robust boosting algorithm, 2009. Preprint arXiv:0905.2138 [stat.ML], available at http://arxiv.org/abs/0905.2138.
T. Graepel and R. Herbrich. The kernel Gibbs sampler. In Advances in NIPS 13, pages 514–520, 2000.
Roni Khardon and Gabriel Wachman. Noise tolerant variants of the perceptron algorithm. J. Mach. Learn. Res., 8:227–248, 2007.
Neil D. Lawrence and Bernhard Schölkopf. Estimating a kernel Fisher discriminant in the presence of label noise. In Proceedings of the Eighteenth ICML, pages 306–313, 2001.
Bing Liu, Yang Dai, Xiaoli Li, Wee Sun Lee, and Philip S. Yu. Building text classifiers using positive and unlabeled examples. In ICDM 2003, pages 179–186. IEEE, 2003.
Philip M. Long and Rocco A. Servedio. Random classification noise defeats all convex potential boosters. Mach. Learn., 78(3):287–304, 2010.
Yves Lucet. What shape is your conjugate? A survey of computational convex analysis and its applications. SIAM Rev., 52(3):505–542, August 2010. ISSN 0036-1445.
Naresh Manwani and P. S. Sastry. Noise tolerance under risk minimization. To appear in IEEE Trans. Syst. Man and Cybern. Part B, 2013. URL: http://arxiv.org/abs/1109.5231.
A. Nemirovski, A. Juditsky, G. Lan, and A. Shapiro. Robust stochastic approximation approach to stochastic programming. SIAM J. on Opt., 19(4):1574–1609, 2009.
David F. Nettleton, A. Orriols-Puig, and A. Fornells. A study of the effect of different types of noise on the precision of supervised learning techniques. Artif. Intell. Rev., 33(4):275–306, 2010.
Clayton Scott. Calibrated asymmetric surrogate losses. Electronic J. of Stat., 6:958–992, 2012.
Clayton Scott, Gilles Blanchard, and Gregory Handy. Classification with asymmetric label noise: Consistency and maximal denoising. To appear in COLT, 2013.
G. Stempfel and L. Ralaivola. Learning kernel perceptrons on noisy data using random projections. In Algorithmic Learning Theory, pages 328–342. Springer, 2007.
G. Stempfel, L. Ralaivola, and F. Denis. Learning from noisy data using hyperplane sampling and sample averages. 2007.
Guillaume Stempfel and Liva Ralaivola. Learning SVMs from sloppily labeled data. In Proc. of the 19th Intl. Conf. on Artificial Neural Networks: Part I, pages 884–893. Springer-Verlag, 2009.
Martin Zinkevich. Online convex programming and generalized infinitesimal gradient ascent. In Proceedings of the Twentieth ICML, pages 928–936, 2003.
approximate message passing
Ryosuke Matsushita
NTT DATA Mathematical Systems Inc.
1F Shinanomachi Rengakan, 35,
Shinanomachi, Shinjuku-ku, Tokyo,
160-0016, Japan
[email protected]
Toshiyuki Tanaka
Department of Systems Science,
Graduate School of Informatics, Kyoto University
Yoshida Hon-machi, Sakyo-ku, Kyoto-shi,
606-8501 Japan
[email protected]
Abstract
We study the problem of reconstructing low-rank matrices from their noisy observations. We formulate the problem in the Bayesian framework, which allows
us to exploit structural properties of matrices in addition to low-rankedness, such
as sparsity. We propose an efficient approximate message passing algorithm, derived from the belief propagation algorithm, to perform the Bayesian inference for
matrix reconstruction. We have also successfully applied the proposed algorithm
to a clustering problem, by reformulating it as a low-rank matrix reconstruction
problem with an additional structural property. Numerical experiments show that
the proposed algorithm outperforms Lloyd's K-means algorithm.
1 Introduction
Low-rankedness of matrices has frequently been exploited when one reconstructs a matrix from its noisy observations. In such problems, there are often demands to incorporate additional structural properties of matrices in addition to the low-rankedness. In this paper, we consider the case where a matrix A_0 ∈ R^{m×N} to be reconstructed is factored as A_0 = U_0 V_0^T, U_0 ∈ R^{m×r}, V_0 ∈ R^{N×r} (r ≪ m, N), and where one knows structural properties of the factors U_0 and V_0 a priori. Sparseness and non-negativity of the factors are popular examples of such structural properties [1, 2].
Since the properties of the factors to be exploited vary according to the problem, it is desirable
that a reconstruction method has enough flexibility to incorporate a wide variety of properties. The
Bayesian approach achieves such flexibility by allowing us to select prior distributions of U0 and V0
reflecting a priori knowledge on the structural properties. The Bayesian approach, however, often
involves computationally expensive processes such as high-dimensional integrations, thereby requiring approximate inference methods in practical implementations. Monte Carlo sampling methods
and variational Bayes methods have been proposed for low-rank matrix reconstruction to meet this
requirement [3–5].
We present in this paper an approximate message passing (AMP) based algorithm for Bayesian low-rank matrix reconstruction. Developed in the context of compressed sensing, the AMP algorithm reconstructs sparse vectors from their linear measurements with low computational cost, and achieves
a certain theoretical limit [6]. AMP algorithms can also be used for approximating Bayesian inference with a large class of prior distributions of signal vectors and noise distributions [7]. These
successes of AMP algorithms motivate the use of the same idea for low-rank matrix reconstruction.
The IterFac algorithm for the rank-one case [8] has been derived as an AMP algorithm. An AMP
algorithm for the general-rank case is proposed in [9], which, however, can only treat estimation of
posterior means. We extend their algorithm so that one can deal with other estimations such as the
maximum a posteriori (MAP) estimation. It is the first contribution of this paper.
As the second contribution, we apply the derived AMP algorithm to K-means type clustering to
obtain a novel efficient clustering algorithm. It is based on the observation that our formulation
of the low-rank matrix reconstruction problem includes the clustering problem as a special case.
Although the idea of applying low-rank matrix reconstruction to clustering is not new [10, 11], our
proposed algorithm is, to our knowledge, the first that directly deals with the constraint that each
datum should be assigned to exactly one cluster in the framework of low-rank matrix reconstruction.
We present results of numerical experiments, which show that the proposed algorithm outperforms
Lloyd's K-means algorithm [12] when data are high-dimensional.
Recently, AMP algorithms for dictionary learning and blind calibration [13] and for matrix reconstruction with a generalized observation model [14] were proposed. Although our work has some
similarities to these studies, it differs in that we fix the rank r rather than the ratio r/m when taking
the limit m, N ? ? in the derivation of the algorithm. Another difference is that our formulation,
explained in the next section, does not assume statistical independence among the components of
each row of U0 and V0 . A detailed comparison among these algorithms remains to be made.
2 Problem setting

2.1 Low-rank matrix reconstruction
We consider the following problem setting. A matrix A_0 ∈ R^{m×N} to be estimated is defined by two matrices U_0 := (u_{0,1}, ..., u_{0,m})^T ∈ R^{m×r} and V_0 := (v_{0,1}, ..., v_{0,N})^T ∈ R^{N×r} as A_0 := U_0 V_0^T, where u_{0,i}, v_{0,j} ∈ R^r. We consider the case where r ≪ m, N. Observations of A_0 are corrupted by additive noise W ∈ R^{m×N}, whose components W_{i,j} are i.i.d. Gaussian random variables following N(0, mτ). Here τ > 0 is a noise variance parameter and N(a, σ²) denotes the Gaussian distribution with mean a and variance σ². The factor m in the noise variance is introduced to allow a proper scaling in the limit where m and N go to infinity in the same order, which is employed in deriving the algorithm. An observed matrix A ∈ R^{m×N} is given by A := A_0 + W. Reconstructing A_0 and (U_0, V_0) from A is the problem considered in this paper.
We take the Bayesian approach to address this problem, in which one requires prior distributions
of variables to be estimated, as well as conditional distributions relating observations with variables
to be estimated. These distributions need not be the true ones because in some cases they are not
available so that one has to assume them arbitrarily, and in some other cases one expects advantages
by assuming them in some specific manner in view of computational efficiencies. In this paper, we
suppose that one uses the true conditional distribution

    p(A | U_0, V_0) = (2πmτ)^{−mN/2} exp( −‖A − U_0 V_0^T‖_F² / (2mτ) ),      (1)

where ‖·‖_F denotes the Frobenius norm. Meanwhile, we suppose that the assumed prior distributions of U_0 and V_0, denoted by p̂_U and p̂_V, respectively, may be different from the true distributions p_U and p_V, respectively. We restrict p̂_U and p̂_V to distributions of the form p̂_U(U_0) = ∏_i p̂_u(u_{0,i}) and p̂_V(V_0) = ∏_j p̂_v(v_{0,j}), respectively, which allows us to construct computationally efficient algorithms. When U ∼ p̂_U(U) and V ∼ p̂_V(V), the posterior distribution of (U, V) given A is

    p̂(U, V | A) ∝ exp( −‖A − UV^T‖_F² / (2mτ) ) p̂_U(U) p̂_V(V).      (2)

Prior probability density functions (p.d.f.s) p̂_u and p̂_v can be improper, that is, they can integrate to infinity, as long as the posterior p.d.f. (2) is proper. We also consider cases where the assumed rank r̂ may be different from the true rank r. We thus suppose that the estimates U and V are of size m × r̂ and N × r̂, respectively.
We consider two problems appearing in the Bayesian approach. The first problem, which we call the marginalization problem, is to calculate the marginal posterior distributions given A,

    p̂_{i,j}(u_i, v_j | A) := ∫ p̂(U, V | A) ∏_{k≠i} du_k ∏_{l≠j} dv_l.      (3)

These are used to calculate the posterior mean E[UV^T | A] and the marginal MAP estimates

    u_i^MMAP := argmax_u ∫ p̂_{i,j}(u, v | A) dv   and   v_j^MMAP := argmax_v ∫ p̂_{i,j}(u, v | A) du.

Because calculation of p̂_{i,j}(u_i, v_j | A) typically involves high computational cost, approximation methods are needed.
The second problem, which we call the MAP problem, is to calculate the MAP estimate argmax_{U,V} p̂(U, V | A). It is formulated as the following optimization problem:

    min_{U,V} C^MAP(U, V),      (4)

where C^MAP(U, V) is the negative logarithm of (2):

    C^MAP(U, V) := (1/(2mτ)) ‖A − UV^T‖_F² − Σ_{i=1}^m log p̂_u(u_i) − Σ_{j=1}^N log p̂_v(v_j).      (5)

Because ‖A − UV^T‖_F² is a non-convex function of (U, V), it is generally hard to find the global optimal solutions of (4), and therefore approximation methods are needed in this problem as well.
2.2 Clustering as low-rank matrix reconstruction
A clustering problem can be formulated as a problem of low-rank matrix reconstruction [11]. Suppose
that v_{0,j} ∈ {e_1, ..., e_r}, j = 1, ..., N, where e_l ∈ {0, 1}^r is the vector whose l-th component
is 1 and whose other components are 0. When V_0 and U_0 are fixed, a_j follows one of the r Gaussian
distributions N(ū_{0,l}, mτI), l = 1, ..., r, where ū_{0,l} is the l-th column of U_0. We regard each
Gaussian distribution as defining a cluster, with ū_{0,l} being the center of cluster l and v_{0,j}
representing the cluster assignment of the datum a_j. One can then perform clustering on the dataset
{a_1, ..., a_N} by reconstructing U_0 and V_0 from A = (a_1, ..., a_N) under the structural constraint
that every row of V_0 should belong to {e_1, ..., e_{r̂}}, where r̂ is an assumed number of clusters.
Let us consider maximum likelihood estimation \arg\max_{U,V} p(A | U, V) or, equivalently, MAP
estimation with the (improper) uniform prior distributions p̃_u(u) = 1 and p̃_v(v) = r̂^{-1} \sum_{l=1}^{r̂} \delta(v - e_l).
The corresponding MAP problem is

    \min_{U \in R^{m \times r̂},\, V \in \{0,1\}^{N \times r̂}} \|A - U V^\top\|_F^2 \quad \text{subject to } v_j \in \{e_1, ..., e_{r̂}\}.    (6)

When V satisfies the constraints, the objective function \|A - U V^\top\|_F^2 = \sum_{j=1}^{N} \sum_{l=1}^{r̂} \|a_j - ū_l\|_2^2\, I(v_j = e_l)
is the sum of squared distances, each of which is between a datum and the center of
the cluster that the datum is assigned to. The optimization problem (6), its objective function, and
clustering based on it are called in this paper the K-means problem, the K-means loss function, and
the K-means clustering, respectively.
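This equivalence is easy to verify numerically. The following sketch (our own illustrative code with
synthetic data, not from the paper) checks that the Frobenius loss with one-hot rows of V equals the
sum of squared distances between each datum and its assigned center:

```python
import numpy as np

rng = np.random.default_rng(0)
m, N, r = 3, 8, 2                      # small sizes for illustration

U = rng.normal(size=(m, r))            # columns of U are cluster centers
labels = rng.integers(0, r, size=N)    # cluster assignment of each datum
V = np.eye(r)[labels]                  # rows of V are the one-hot vectors e_l
A = rng.normal(size=(m, N))            # data matrix; columns are the data a_j

frob_loss = np.linalg.norm(A - U @ V.T, "fro") ** 2
sum_sq = sum(np.linalg.norm(A[:, j] - U[:, labels[j]]) ** 2 for j in range(N))
assert np.isclose(frob_loss, sum_sq)   # the two losses coincide
```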
One can also use the marginal MAP estimation for clustering. If U_0 and V_0 follow p̃_U and p̃_V,
respectively, the marginal MAP estimation is optimal in the sense that it maximizes the expectation of
accuracy with respect to p̃(V_0 | A). Here, accuracy is defined as the fraction of correctly assigned data
among all data. We call clustering based on approximate marginal MAP estimation maximum
accuracy clustering, even when incorrect prior distributions are used.
3 Previous work
Existing methods for approximately solving the marginalization problem and the MAP problem
are divided into stochastic methods, such as Markov-chain Monte-Carlo methods, and deterministic
ones. A popular deterministic method is the variational Bayesian formalism. The variational
Bayes matrix factorization [4, 5] approximates the posterior distribution p(U, V | A) as the product
of two functions p_U^{VB}(U) and p_V^{VB}(V), which are determined so that the Kullback–Leibler (KL)
divergence from p_U^{VB}(U) p_V^{VB}(V) to p(U, V | A) is minimized. Global minimization of the KL
divergence is difficult except in some special cases [15], so an iterative method that finds a local
minimum is usually adopted. Applying the variational Bayes matrix factorization to the MAP
problem, one obtains the iterated conditional modes (ICM) algorithm, which alternates minimization
of C^{MAP}(U, V) over U for fixed V and minimization over V for fixed U.

The representative algorithm for approximately solving the K-means problem is Lloyd's K-means
algorithm [12]. Lloyd's K-means algorithm can be regarded as the ICM algorithm: it alternates
minimization of the K-means loss function over U for fixed V and minimization over V for fixed U.
Algorithm 1 (Lloyd's K-means algorithm).

    n_l^t = \sum_{j=1}^{N} I(v_j^t = e_l), \qquad ū_l^t = \frac{1}{n_l^t} \sum_{j=1}^{N} a_j I(v_j^t = e_l),    (7a)

    l_j^{t+1} = \arg\min_{l \in \{1, ..., r̂\}} \|a_j - ū_l^t\|_2^2, \qquad v_j^{t+1} = e_{l_j^{t+1}}.    (7b)
Throughout this paper, we represent an algorithm by a set of equations as above. This representation
means that the algorithm begins with a set of initial values and repeats the updates of the
variables using the equations presented until some stopping criterion is satisfied. Lloyd's K-means
algorithm begins with a set of initial assignments V^0 ∈ {e_1, ..., e_{r̂}}^N. This algorithm easily gets
stuck in local minima, and its performance depends heavily on the initial values of the algorithm.
Some initialization methods for obtaining a better local minimum have been proposed [16].
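To make the iteration (7a)–(7b) concrete, here is a minimal NumPy sketch on toy data (illustrative
code of ours, not from the paper; it keeps the previous center when a cluster becomes empty, a
detail the equations leave unspecified):

```python
import numpy as np

def lloyd_kmeans(A, labels, r, n_iter=100):
    """Lloyd's K-means on the columns a_j of A (m x N), from initial labels."""
    centers = A[:, :r].astype(float).copy()   # fallback centers for empty clusters
    for _ in range(n_iter):
        for l in range(r):                    # (7a): mean of each non-empty cluster
            mask = labels == l
            if mask.any():
                centers[:, l] = A[:, mask].mean(axis=1)
        d2 = ((A[:, :, None] - centers[:, None, :]) ** 2).sum(axis=0)  # (N, r)
        new_labels = d2.argmin(axis=1)        # (7b): nearest-center assignment
        if np.array_equal(new_labels, labels):
            break                             # fixed point: assignments unchanged
        labels = new_labels
    return centers, labels

# Toy data: two well-separated blobs; columns of A are the data a_j.
rng = np.random.default_rng(0)
A = np.hstack([rng.normal(0.0, 0.1, (2, 20)), rng.normal(5.0, 0.1, (2, 20))])
centers, labels = lloyd_kmeans(A, np.arange(40) % 2, r=2)
```

On this easy problem the iteration separates the two blobs; on harder problems it exhibits exactly the sensitivity to initialization discussed above.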
Maximum accuracy clustering can be solved approximately by using the variational Bayes matrix
factorization, since it gives an approximation to the marginal posterior distribution of v_j given A.
4 Proposed algorithm

4.1 Approximate message passing algorithm for low-rank matrix reconstruction
We first discuss the general idea of the AMP algorithm and its advantages over the variational
Bayes matrix factorization. The AMP algorithm is derived by approximating the belief propagation
message passing algorithm in a way thought to be asymptotically exact for large-scale problems
with appropriate randomness. Fixed points of the belief propagation message passing algorithm
correspond to local minima of the KL divergence between a kind of trial function and the posterior
distribution [17]. Therefore, the belief propagation message passing algorithm can be regarded as
an iterative algorithm based on an approximation of the posterior distribution, called the Bethe
approximation. The Bethe approximation can reflect the dependence of random variables (the
dependence between U and V in p̃(U, V | A) in our problem) to some extent. One can therefore
intuitively expect the performance of the AMP algorithm to be better than that of the variational
Bayes matrix factorization, which treats U and V as if they were independent in p̃(U, V | A).

An important property of the AMP algorithm, aside from its efficiency and effectiveness, is that
one can predict the performance of the algorithm accurately for large-scale problems by using a set
of equations called the state evolution [6]. Analysis with the state evolution also shows that the
required number of iterations is O(1) even when the problem size is large. Although we could present
the state evolution for the algorithm proposed in this paper and prove its validity as in [8, 18], we do
not discuss the state evolution here due to the limited space available.
We introduce a one-parameter extension of the posterior distribution p̃(U, V | A) to treat the
marginalization problem and the MAP problem in a unified manner. It is defined as follows:

    \tilde p(U, V | A; \beta) \propto \exp\Bigl( -\frac{\beta}{2m\tau} \|A - U V^\top\|_F^2 \Bigr) \bigl( \tilde p_U(U)\, \tilde p_V(V) \bigr)^{\beta},    (8)

which is proportional to p̃(U, V | A)^β, where β > 0 is the parameter. When β = 1, p̃(U, V | A; β)
reduces to p̃(U, V | A). In the limit β → ∞, the distribution p̃(U, V | A; β) concentrates on the
maxima of p̃(U, V | A). An algorithm for the marginalization problem on p̃(U, V | A; β) is
particularized to the algorithms for the marginalization problem and for the MAP problem on the original
posterior distribution p̃(U, V | A) by letting β = 1 and β → ∞, respectively. The AMP algorithm
for the marginalization problem on p̃(U, V | A; β) is derived in a way similar to that described in [9],
as detailed in the Supplementary Material.
In the derived algorithm, the values of the variables B_u^t = (b_{u,1}^t, ..., b_{u,m}^t)^\top ∈ R^{m×r̂},
B_v^t = (b_{v,1}^t, ..., b_{v,N}^t)^\top ∈ R^{N×r̂}, Ω_u^t ∈ R^{r̂×r̂}, Ω_v^t ∈ R^{r̂×r̂},
U^t = (u_1^t, ..., u_m^t)^\top ∈ R^{m×r̂}, V^t = (v_1^t, ..., v_N^t)^\top ∈ R^{N×r̂},
S_1^t, ..., S_m^t ∈ R^{r̂×r̂}, and T_1^t, ..., T_N^t ∈ R^{r̂×r̂} are calculated iteratively, where the
superscript t ∈ N ∪ {0} represents the iteration number. Variables with a negative
iteration number are defined as 0. The algorithm is as follows:
Algorithm 2.

    B_u^t = \frac{1}{m\tau} A V^t - \frac{1}{m\tau} U^{t-1} \sum_{j=1}^{N} T_j^t, \qquad
    \Omega_u^t = \frac{1}{m\tau} (V^t)^\top V^t + \frac{1}{\beta m\tau} \sum_{j=1}^{N} T_j^t - \frac{1}{m\tau} \sum_{j=1}^{N} T_j^t,    (9a)

    u_i^t = f(b_{u,i}^t, \Omega_u^t; \tilde p_u), \qquad S_i^t = G(b_{u,i}^t, \Omega_u^t; \tilde p_u),    (9b)

    B_v^t = \frac{1}{m\tau} A^\top U^t - \frac{1}{m\tau} V^t \sum_{i=1}^{m} S_i^t, \qquad
    \Omega_v^t = \frac{1}{m\tau} (U^t)^\top U^t + \frac{1}{\beta m\tau} \sum_{i=1}^{m} S_i^t - \frac{1}{m\tau} \sum_{i=1}^{m} S_i^t,    (9c)

    v_j^{t+1} = f(b_{v,j}^t, \Omega_v^t; \tilde p_v), \qquad T_j^{t+1} = G(b_{v,j}^t, \Omega_v^t; \tilde p_v).    (9d)
Algorithm 2 is almost symmetric in U and V. Equations (9a)–(9b) and (9c)–(9d) update quantities
related to the estimates of U_0 and V_0, respectively. The algorithm requires an initial value V^0 and
begins with T_j^0 = O. The functions f(·, ·; p̃): R^{r̂} × R^{r̂×r̂} → R^{r̂} and
G(·, ·; p̃): R^{r̂} × R^{r̂×r̂} → R^{r̂×r̂}, which have a p.d.f. p̃: R^{r̂} → R as a parameter, are defined by
    f(b, \Omega; \tilde p) := \int u\, \tilde q(u; b, \Omega, \tilde p)\, du, \qquad G(b, \Omega; \tilde p) := \frac{\partial f(b, \Omega; \tilde p)}{\partial b},    (10)

where \tilde q(u; b, \Omega, \tilde p) is the normalized p.d.f. of u defined by

    \tilde q(u; b, \Omega, \tilde p) \propto \exp\Bigl( -\beta \Bigl( \frac{1}{2} u^\top \Omega u - b^\top u - \log \tilde p(u) \Bigr) \Bigr).    (11)

One can see that f(b, Ω; p̃) is the mean of the distribution q̃(u; b, Ω, p̃) and that G(b, Ω; p̃) is its
covariance matrix scaled by β. The function f(b, Ω; p̃) need not be differentiable everywhere;
Algorithm 2 works as long as f(b, Ω; p̃) is differentiable at each b for which one needs to calculate
G(b, Ω; p̃) in running the algorithm.
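As a concrete special case (our illustration; the choice of a standard Gaussian prior is ours, the paper
treats general priors), when p̃(u) is the standard Gaussian N(0, I) and β = 1, the density q̃ in (11)
is Gaussian with precision Ω + I, so f and G in (10) have closed forms f(b, Ω) = (Ω + I)^{-1} b and
G(b, Ω) = (Ω + I)^{-1}. A short numerical check of G against the Jacobian ∂f/∂b:

```python
import numpy as np

def f_gauss(b, Omega):
    """Posterior mean f(b, Omega; N(0, I)) for beta = 1: (Omega + I)^{-1} b."""
    return np.linalg.solve(Omega + np.eye(len(b)), b)

def G_gauss(b, Omega):
    """G = df/db, which here equals the covariance (Omega + I)^{-1} (beta = 1)."""
    return np.linalg.inv(Omega + np.eye(Omega.shape[0]))

rng = np.random.default_rng(0)
r = 3
M = rng.normal(size=(r, r))
Omega = M @ M.T                      # a positive semidefinite Omega
b = rng.normal(size=r)

# Finite-difference check that G equals the Jacobian of f with respect to b.
eps = 1e-6
J = np.column_stack([(f_gauss(b + eps * e, Omega) - f_gauss(b, Omega)) / eps
                     for e in np.eye(r)])
assert np.allclose(J, G_gauss(b, Omega), atol=1e-5)
```

Since f is linear in b here, the finite-difference Jacobian matches (Ω + I)^{-1} to numerical precision; for non-Gaussian priors, f and G generally require numerical integration.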
We assume in the rest of this section that Algorithm 2 converges, although convergence is not
guaranteed in general. Let B_u^∞, B_v^∞, Ω_u^∞, Ω_v^∞, S_i^∞, T_j^∞, U^∞, and V^∞ be the converged
values of the respective variables. First, consider running Algorithm 2 with β = 1. The marginal
posterior distribution is then approximated as

    \tilde p_{i,j}(u_i, v_j | A) \approx \tilde q(u_i; b_{u,i}^{\infty}, \Omega_u^{\infty}, \tilde p_u)\, \tilde q(v_j; b_{v,j}^{\infty}, \Omega_v^{\infty}, \tilde p_v).    (12)

Since u_i^∞ and v_j^∞ are the means of q̃(u; b_{u,i}^∞, Ω_u^∞, p̃_u) and q̃(v; b_{v,j}^∞, Ω_v^∞, p̃_v),
respectively, the posterior mean E[U V^\top | A] = \int U V^\top \tilde p(U, V | A)\, dU\, dV is approximated as

    E[U V^\top | A] \approx U^{\infty} (V^{\infty})^\top.    (13)

The marginal MAP estimates u_i^{MMAP} and v_j^{MMAP} are approximated as

    u_i^{MMAP} \approx \arg\max_u \tilde q(u; b_{u,i}^{\infty}, \Omega_u^{\infty}, \tilde p_u), \qquad v_j^{MMAP} \approx \arg\max_v \tilde q(v; b_{v,j}^{\infty}, \Omega_v^{\infty}, \tilde p_v).    (14)
Taking the limit β → ∞ in Algorithm 2 yields an algorithm for the MAP problem (4). In this case,
the functions f and G are replaced with

    f_{\infty}(b, \Omega; \tilde p) := \arg\min_u \Bigl[ \frac{1}{2} u^\top \Omega u - b^\top u - \log \tilde p(u) \Bigr], \qquad G_{\infty}(b, \Omega; \tilde p) := \frac{\partial f_{\infty}(b, \Omega; \tilde p)}{\partial b}.    (15)

One may calculate G_∞(b, Ω; p̃) from the Hessian of log p̃(u) at u = f_∞(b, Ω; p̃), denoted by H,
via the identity G_∞(b, Ω; p̃) = (Ω − H)^{-1}. This identity follows from the implicit function theorem
under some additional assumptions and helps in the case where the explicit form of f_∞(b, Ω; p̃) is
not available. The MAP estimate is approximated by (U^∞, V^∞).
4.2 Properties of the algorithm
Algorithm 2 has several desirable properties. First, it has a low computational cost. The
computational cost per iteration is O(mN), which is linear in the number of components of the matrix
A. Calculation of f(·, ·; p̃) and G(·, ·; p̃) is performed O(N + m) times per iteration. The constant
factor depends on p̃ and β. Calculation of f for β < ∞ generally involves an r̂-dimensional
numerical integration, although this is not needed when an analytic expression of the integral is available
or when the variables take only discrete values. Calculation of f_∞ involves minimization over an
r̂-dimensional vector. When −log p̃ is a convex function and Ω is positive semidefinite, this
minimization problem is convex and can be solved at relatively low cost.

Second, Algorithm 2 has a form similar to that of an algorithm based on the variational Bayesian
matrix factorization. In fact, if the last terms on the right-hand sides of the four equations in (9a)
and (9c) are removed, the resulting algorithm is the same as an algorithm based on the variational
Bayesian matrix factorization proposed in [4] and, in particular, the same as the ICM algorithm when
β → ∞. (Note, however, that [4] only treats the case where the priors p̃_u and p̃_v are multivariate
Gaussian distributions.) The additional computational cost of these extra terms is O(m + N),
which is insignificant compared with the cost of the whole algorithm, O(mN).

Third, when one deals with the MAP problem, the value of C^{MAP}(U, V) may increase across
iterations of Algorithm 2. The following proposition, however, guarantees optimality of the output of
Algorithm 2 in a certain sense, if it has converged.
Proposition 1. Let (U^∞, V^∞, S_1^∞, ..., S_m^∞, T_1^∞, ..., T_N^∞) be a fixed point of the AMP
algorithm for the MAP problem and suppose that \sum_{i=1}^{m} S_i^{\infty} and \sum_{j=1}^{N} T_j^{\infty} are
positive semidefinite. Then U^∞ is a global minimum of C^{MAP}(U, V^∞) and V^∞ is a global minimum
of C^{MAP}(U^∞, V).
The proof is in the Supplementary Material. The key to the proof is the following reformulation:

    U^t = \arg\min_U \Bigl[ C^{MAP}(U, V^t) - \mathrm{tr}\Bigl( (U - U^{t-1}) \Bigl( \frac{1}{2m\tau} \sum_{j=1}^{N} T_j^t \Bigr) (U - U^{t-1})^\top \Bigr) \Bigr].    (16)

If \sum_{j=1}^{N} T_j^t is positive semidefinite, the second term of the minimand is the negative of a squared
pseudometric between U and U^{t-1}, which is interpreted as a penalty on nearness to the temporary
estimate. Positive semidefiniteness of \sum_{i=1}^{m} S_i^t and \sum_{j=1}^{N} T_j^t holds in almost all cases. In
fact, we only have to assume \lim_{\beta \to \infty} G(b, \Omega; \tilde p) = G_{\infty}(b, \Omega; \tilde p), since G(b, Ω; p̃) is a scaled
covariance matrix of q̃(u; b, Ω, p̃), which is positive semidefinite. It follows from Proposition 1 that any
fixed point of the AMP algorithm is also a fixed point of the ICM algorithm. This has two implications:
(i) execution of the ICM algorithm initialized with the converged values of the AMP algorithm does not
improve C^{MAP}(U^t, V^t); (ii) the AMP algorithm has no more fixed points than the ICM algorithm.
The second implication may help the AMP algorithm avoid getting stuck in bad local minima.
4.3 Clustering via AMP algorithm
One can use the AMP algorithm for the MAP problem to perform the K-means clustering by letting
p̃_u(u) = 1 and p̃_v(v) = r̂^{-1} \sum_{l=1}^{r̂} \delta(v - e_l). Noting that f_∞(b, Ω; p̃_v) is piecewise constant
with respect to b and hence G_∞(b, Ω; p̃_v) is O almost everywhere, we obtain the following algorithm:

Algorithm 3 (AMP algorithm for the K-means clustering).

    B_u^t = \frac{1}{m\tau} A V^t, \qquad \Omega_u^t = \frac{1}{m\tau} (V^t)^\top V^t, \qquad U^t = B_u^t (\Omega_u^t)^{-1}, \qquad S^t = (\Omega_u^t)^{-1},    (17a)

    B_v^t = \frac{1}{m\tau} A^\top U^t - \frac{1}{\tau} V^t S^t, \qquad \Omega_v^t = \frac{1}{m\tau} (U^t)^\top U^t - \frac{1}{\tau} S^t,    (17b)

    v_j^{t+1} = \arg\min_{v \in \{e_1, ..., e_{r̂}\}} \Bigl[ \frac{1}{2} v^\top \Omega_v^t v - v^\top b_{v,j}^t \Bigr].    (17c)

It is initialized with an assignment V^0 ∈ {e_1, ..., e_{r̂}}^N. Algorithm 3 can be rewritten as follows:
    n_l^t = \sum_{j=1}^{N} I(v_j^t = e_l), \qquad ū_l^t = \frac{1}{n_l^t} \sum_{j=1}^{N} a_j I(v_j^t = e_l),    (18a)

    l_j^{t+1} = \arg\min_{l \in \{1, ..., r̂\}} \Bigl[ \frac{1}{m\tau} \|a_j - ū_l^t\|_2^2 + \frac{2m}{n_l^t} I(v_j^t = e_l) - \frac{m}{n_l^t} \Bigr], \qquad v_j^{t+1} = e_{l_j^{t+1}}.    (18b)
The parameter τ appearing in the algorithm does not exist in the K-means clustering problem. In
fact, τ appears because (m\tau^2)^{-1} \sum_{i=1}^{m} A_{ij}^2 S_i^t was estimated by (m\tau)^{-1} \sum_{i=1}^{m} S_i^t
in deriving Algorithm 2, which can be justified for large-sized problems. In practice, we propose using
m^{-2} N^{-1} \|A - U^t (V^t)^\top\|_F^2 as a temporary estimate of τ at the t-th iteration. While the AMP
algorithm for the K-means clustering updates the value of U in the same way as Lloyd's K-means
algorithm, it performs the assignment of data to clusters in a different way. In the AMP algorithm,
in addition to the distances from data to cluster centers, the current assignment is taken into
consideration in two ways: (i) a datum is less likely to be assigned to the cluster that it is assigned
to at present; (ii) data are more likely to be assigned to a cluster whose current size is smaller. The
former can be understood intuitively by observing that if v_j^t = e_l, one should take account of the
fact that the cluster center ū_l^t is biased toward a_j. The term (2m/n_l^t) I(v_j^t = e_l) in (18b)
corrects this bias and, as it should be, is inversely proportional to the cluster size.
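The corrected assignment rule (18b) is easy to state in code. The sketch below (our illustration, with
our own function and variable names, not the authors' implementation) applies one sweep of the rule;
in practice τ would be estimated as proposed above:

```python
import numpy as np

def amp_km_assign(A, centers, labels, counts, tau):
    """One sweep of the corrected assignment rule (18b).

    A: (m, N) data; centers: (m, r); labels: (N,) current assignments;
    counts: (r,) current cluster sizes n_l; tau: noise-scale parameter.
    """
    m, N = A.shape
    d2 = ((A[:, :, None] - centers[:, None, :]) ** 2).sum(axis=0)   # (N, r)
    cost = d2 / (m * tau) - m / counts[None, :]                     # -m/n_l term
    cost[np.arange(N), labels] += 2 * m / counts[labels]            # +2m/n_l penalty
    return cost.argmin(axis=1)

# Tiny check: a datum exactly midway between the two centers, currently in the
# larger cluster 0, is pushed toward the smaller cluster 1 by the corrections.
A = np.array([[0.0, 4.0, 2.0]])          # m = 1, N = 3
centers = np.array([[0.0, 4.0]])         # r = 2
new = amp_km_assign(A, centers, np.array([0, 1, 0]), np.array([2.0, 1.0]), tau=1.0)
```

The plain nearest-center rule (7b) would leave the midpoint datum where it is; the extra terms break the tie away from its current, larger cluster.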
The AMP algorithm for maximum accuracy clustering is obtained by letting β = 1 and taking p̃_v(v)
to be a discrete distribution on {e_1, ..., e_{r̂}}. After the algorithm converges,
\arg\max_v \tilde q(v; b_{v,j}^{\infty}, \Omega_v^{\infty}, \tilde p_v) gives the final cluster assignment of the j-th datum,
and U^∞ gives the estimate of the cluster centers.
5 Numerical experiments
We conducted numerical experiments on both artificial and real data sets to evaluate the performance
of the proposed algorithms for clustering. In the experiment on artificial data sets, we set m = 800
and N = 1600 and let r̂ = r. Cluster centers ū_{0,l}, l = 1, ..., r, were generated according to the
multivariate Gaussian distribution N(0, I). Cluster assignments v_{0,j}, j = 1, ..., N, were generated
according to the uniform distribution on {e_1, ..., e_r}. For fixed τ = 0.1 and r, we generated 500
problem instances and solved them with five algorithms: Lloyd's K-means algorithm (K-means),
the AMP algorithm for the K-means clustering (AMP-KM), the variational Bayes matrix
factorization [4] for maximum accuracy clustering (VBMF-MA), the AMP algorithm for maximum accuracy
clustering (AMP-MA), and the K-means++ [16]. The K-means++ updates the variables in the same
way as Lloyd's K-means algorithm, with an initial value chosen in a sophisticated manner. For the
other algorithms, the initial values v_j^0, j = 1, ..., N, were randomly generated from the same
distribution as v_{0,j}. We used the true prior distributions of U and V for maximum accuracy clustering.
We ran Lloyd's K-means algorithm and the K-means++ until no change was observed. We ran the
AMP algorithm for the K-means clustering until either V^t = V^{t-1} or V^t = V^{t-2} was satisfied;
this is because we observed oscillations of the assignments of a small number of data. For the other
two algorithms, we terminated the iteration when \|U^t - U^{t-1}\|_F^2 < 10^{-15} \|U^{t-1}\|_F^2 and
\|V^t - V^{t-1}\|_F^2 < 10^{-15} \|V^{t-1}\|_F^2 were met or the number of iterations exceeded 3000. We then
evaluated the following performance measures for the obtained solution (U^∞, V^∞):
- Normalized K-means loss: \|A - U^{\infty} (V^{\infty})^\top\|_F^2 / \sum_{j=1}^{N} \|a_j - \bar a\|_2^2, where \bar a := N^{-1} \sum_{j=1}^{N} a_j.
- Accuracy: \max_P N^{-1} \sum_{j=1}^{N} I(P v_j^{\infty} = v_{0,j}), where the maximization is taken over all
  r-by-r permutation matrices. We used the Hungarian algorithm [19] to solve this maximization
  problem efficiently.
- Number of iterations needed to converge.

We calculated the averages and the standard deviations of these performance measures over 500
instances. We conducted the above experiments for various values of r.
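The permutation maximization in the accuracy measure is an assignment problem over the r-by-r
confusion matrix and can be solved with an off-the-shelf Hungarian-method routine; for instance
(our sketch using SciPy, not the authors' code):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def clustering_accuracy(pred, true, r):
    """max_P (1/N) sum_j I(P maps pred_j to true_j) over r x r permutations P."""
    # C[k, l] = number of data with predicted label k and true label l.
    C = np.zeros((r, r), dtype=int)
    np.add.at(C, (pred, true), 1)
    rows, cols = linear_sum_assignment(-C)   # negate to maximize matched count
    return C[rows, cols].sum() / len(pred)

pred = np.array([1, 1, 0, 0, 2, 2])          # labels 0 and 1 are swapped
true = np.array([0, 0, 1, 1, 2, 2])
acc = clustering_accuracy(pred, true, r=3)   # a relabeling makes this perfect
```

Because the measure optimizes over relabelings, a clustering that merely permutes the true labels scores 1.0.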
Figure 1 shows the results. The AMP algorithm for the K-means clustering achieves the smallest
K-means loss among the five algorithms, while Lloyd's K-means algorithm and the K-means++ show
large K-means losses for r ≥ 5. We emphasize that all three of these algorithms aim to minimize
the same K-means loss; the differences lie only in the minimization algorithms. The AMP algorithm
for maximum accuracy clustering achieves the highest accuracy among the five algorithms. It also
shows fast convergence. In particular, the convergence speed of the AMP algorithm for maximum
accuracy clustering is comparable to that of the AMP algorithm for the K-means clustering when
the two algorithms show similar accuracy (r < 9). This is in contrast to the common observation
that the variational Bayes method often shows slower convergence than the ICM algorithm.
Figure 1: (a)–(c) Performance for different r: (a) Normalized K-means loss. (b) Accuracy. (c)
Number of iterations needed to converge. (d) Dynamics for r = 5. Average accuracy at each
iteration is shown. Error bars represent standard deviations.
Figure 2: Performance measures in real-data experiments. (a) Normalized K-means loss. (b) Accuracy. The results for the 50 trials are shown in the descending order of performance for AMP-KM.
The worst two results for AMP-KM are out of the range.
In the experiment on real data, we used the ORL Database of Faces [20], which contains 400 images
of human faces: ten different images of each of 40 distinct subjects. Each image consists of
112 × 92 = 10304 pixels whose values range from 0 to 255. We divided the N = 400 images into r̂ = 40
clusters with the K-means++ and the AMP algorithm for the K-means clustering. We adopted the
initialization method of the K-means++ also for the AMP algorithm, because random initialization
often yielded empty clusters, with almost all data assigned to only one cluster. The parameter τ
was estimated in the way proposed in Subsection 4.3. We ran 50 trials with different initial values,
and Figure 2 summarizes the results.

The AMP algorithm for the K-means clustering outperformed the standard K-means++ algorithm
in 48 out of the 50 trials in terms of the K-means loss and in 47 trials in terms of the accuracy.
The AMP algorithm yielded just one cluster with all data assigned to it in two trials. The attained
minimum value of the K-means loss is 0.412 with the K-means++ and 0.400 with the AMP algorithm.
The accuracies at these trials are 0.635 with the K-means++ and 0.690 with the AMP algorithm. The
average number of iterations was 6.6 with the K-means++ and 8.8 with the AMP algorithm. These
results demonstrate the efficiency of the proposed algorithm on real data.
References

[1] P. Paatero, "Least squares formulation of robust non-negative factor analysis," Chemometrics and Intelligent Laboratory Systems, vol. 37, no. 1, pp. 23–35, May 1997.
[2] P. O. Hoyer, "Non-negative matrix factorization with sparseness constraints," The Journal of Machine Learning Research, vol. 5, pp. 1457–1469, Dec. 2004.
[3] R. Salakhutdinov and A. Mnih, "Bayesian probabilistic matrix factorization using Markov chain Monte Carlo," in Proceedings of the 25th International Conference on Machine Learning, New York, NY, Jul. 5–Aug. 9, 2008, pp. 880–887.
[4] Y. J. Lim and Y. W. Teh, "Variational Bayesian approach to movie rating prediction," in Proceedings of KDD Cup and Workshop, San Jose, CA, Aug. 12, 2007.
[5] T. Raiko, A. Ilin, and J. Karhunen, "Principal component analysis for large scale problems with lots of missing values," in Machine Learning: ECML 2007, ser. Lecture Notes in Computer Science, J. N. Kok, J. Koronacki, R. L. de Mantaras, S. Matwin, D. Mladenić, and A. Skowron, Eds. Springer Berlin Heidelberg, 2007, vol. 4701, pp. 691–698.
[6] D. L. Donoho, A. Maleki, and A. Montanari, "Message-passing algorithms for compressed sensing," Proceedings of the National Academy of Sciences USA, vol. 106, no. 45, pp. 18914–18919, Nov. 2009.
[7] S. Rangan, "Generalized approximate message passing for estimation with random linear mixing," in Proceedings of 2011 IEEE International Symposium on Information Theory, St. Petersburg, Russia, Jul. 31–Aug. 5, 2011, pp. 2168–2172.
[8] S. Rangan and A. K. Fletcher, "Iterative estimation of constrained rank-one matrices in noise," in Proceedings of 2012 IEEE International Symposium on Information Theory, Cambridge, MA, Jul. 1–6, 2012, pp. 1246–1250.
[9] R. Matsushita and T. Tanaka, "Approximate message passing algorithm for low-rank matrix reconstruction," in Proceedings of the 35th Symposium on Information Theory and its Applications, Oita, Japan, Dec. 11–14, 2012, pp. 314–319.
[10] W. Xu, X. Liu, and Y. Gong, "Document clustering based on non-negative matrix factorization," in Proceedings of the 26th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, Toronto, Canada, Jul. 28–Aug. 1, 2003, pp. 267–273.
[11] C. Ding, T. Li, and M. Jordan, "Convex and semi-nonnegative matrix factorizations," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 32, no. 1, pp. 45–55, Jan. 2010.
[12] S. P. Lloyd, "Least squares quantization in PCM," IEEE Transactions on Information Theory, vol. IT-28, no. 2, pp. 129–137, Mar. 1982.
[13] F. Krzakala, M. Mézard, and L. Zdeborová, "Phase diagram and approximate message passing for blind calibration and dictionary learning," preprint, Jan. 2013, arXiv:1301.5898v1 [cs.IT].
[14] J. T. Parker, P. Schniter, and V. Cevher, "Bilinear generalized approximate message passing," preprint, Oct. 2013, arXiv:1310.2632v1 [cs.IT].
[15] S. Nakajima and M. Sugiyama, "Theoretical analysis of Bayesian matrix factorization," Journal of Machine Learning Research, vol. 12, pp. 2583–2648, Sep. 2011.
[16] D. Arthur and S. Vassilvitskii, "k-means++: the advantages of careful seeding," in SODA '07: Proceedings of the 18th Annual ACM-SIAM Symposium on Discrete Algorithms, New Orleans, Louisiana, Jan. 7–9, 2007, pp. 1027–1035.
[17] J. S. Yedidia, W. T. Freeman, and Y. Weiss, "Constructing free-energy approximations and generalized belief propagation algorithms," IEEE Transactions on Information Theory, vol. 51, no. 7, pp. 2282–2312, Jul. 2005.
[18] M. Bayati and A. Montanari, "The dynamics of message passing on dense graphs, with applications to compressed sensing," IEEE Transactions on Information Theory, vol. 57, no. 2, pp. 764–785, Feb. 2011.
[19] H. W. Kuhn, "The Hungarian method for the assignment problem," Naval Research Logistics Quarterly, vol. 2, no. 1–2, pp. 83–97, Mar. 1955.
[20] F. S. Samaria and A. C. Harter, "Parameterisation of a stochastic model for human face identification," in Proceedings of 2nd IEEE Workshop on Applications of Computer Vision, Sarasota, FL, Dec. 1994, pp. 138–142. [Online]. Available: http://www.cl.cam.ac.uk/research/dtg/attarchive/facedatabase.html
4,503 | 5,075 | Sign Cauchy Projections and Chi-Square Kernel
Ping Li
Dept of Statistics & Biostat.
Dept of Computer Science
Rutgers University
[email protected]
Gennady Samorodnitsky
ORIE and Dept of Stat. Science
Cornell University
Ithaca, NY 14853
[email protected]
John Hopcroft
Dept of Computer Science
Cornell University
Ithaca, NY 14853
[email protected]
Abstract
The method of stable random projections is useful for efficiently approximating
the l_α distance (0 < α ≤ 2) in high dimension and it is naturally suitable for data
streams. In this paper, we propose to use only the signs of the projected data and
we analyze the probability of collision (i.e., when the two signs differ). Interestingly, when α = 1 (i.e., Cauchy random projections), we show that the probability
of collision can be accurately approximated as functions of the chi-square (χ²)
similarity. In text and vision applications, the χ² similarity is a popular measure
when the features are generated from histograms (which are a typical example of
data streams). Experiments confirm that the proposed method is promising for
large-scale learning applications. The full paper is available at arXiv:1308.1009.
There are many future research problems. For example, when α → 0, the collision
probability is a function of the resemblance (of the binary-quantized data). This
provides an effective mechanism for resemblance estimation in data streams.
1 Introduction
High-dimensional representations have become very popular in modern applications of machine
learning, computer vision, and information retrieval. For example, the winner of the 2009 PASCAL
image classification challenge used millions of features [29]. [1, 30] described applications with
billion or trillion features. The use of high-dimensional data often achieves good accuracies at the
cost of a significant increase in computations, storage, and energy consumption.
Consider two data vectors (e.g., two images) u, v ∈ ℝ^D. A basic task is to compute their distance
or similarity. For example, the correlation (ρ₂) and l_α distance (d_α) are commonly used:

$$\rho_2(u, v) = \frac{\sum_{i=1}^D u_i v_i}{\sqrt{\sum_{i=1}^D u_i^2}\,\sqrt{\sum_{i=1}^D v_i^2}}, \qquad d_\alpha(u, v) = \sum_{i=1}^D |u_i - v_i|^\alpha \quad (1)$$
In this study, we are particularly interested in the χ² similarity, denoted by ρ_χ²:

$$\rho_{\chi^2} = \sum_{i=1}^D \frac{2 u_i v_i}{u_i + v_i}, \quad \text{where } u_i \ge 0,\; v_i \ge 0,\; \sum_{i=1}^D u_i = \sum_{i=1}^D v_i = 1 \quad (2)$$
The chi-square similarity is closely related to the chi-square distance d_χ²:

$$d_{\chi^2} = \sum_{i=1}^D \frac{(u_i - v_i)^2}{u_i + v_i} = \sum_{i=1}^D (u_i + v_i) - \sum_{i=1}^D \frac{4 u_i v_i}{u_i + v_i} = 2 - 2\rho_{\chi^2} \quad (3)$$
The chi-square similarity is an instance of the Hilbertian metrics, which are defined over probability
space [10] and suitable for data generated from histograms. Histogram-based features (e.g., bag-of-word or bag-of-visual-word models) are extremely popular in computer vision, natural language
processing (NLP), and information retrieval. Empirical studies have demonstrated the superiority of
the χ² distance over l₂ or l₁ distances for image and text classification tasks [4, 10, 13, 2, 28, 27, 26].
The method of normal random projections (i.e., α-stable projections with α = 2) has become
popular in machine learning (e.g., [7]) for reducing the data dimensions and data sizes, to facilitate
efficient computations of the l₂ distances and correlations. More generally, the method of stable
random projections [11, 17] provides an efficient algorithm to compute the l_α distances (0 < α ≤ 2).
In this paper, we propose to use only the signs of the projected data after applying stable projections.
1.1 Stable Random Projections and Sign (1-Bit) Stable Random Projections
Consider two high-dimensional data vectors u, v ∈ ℝ^D. The basic idea of stable random projections
is to multiply u and v by a random matrix R ∈ ℝ^{D×k}: x = uR ∈ ℝ^k, y = vR ∈ ℝ^k, where entries
of R are i.i.d. samples from a symmetric α-stable distribution with unit scale. By properties of
stable distributions, x_j − y_j follows a symmetric α-stable distribution with scale d_α. Hence, the
task of computing d_α boils down to estimating the scale d_α from k i.i.d. samples. In this paper, we
propose to store only the signs of projected data and we study the probability of collision:

$$P_\alpha = \Pr\left(\mathrm{sign}(x_j) \neq \mathrm{sign}(y_j)\right) \quad (4)$$
Using only the signs (i.e., 1 bit) has significant advantages for applications in search and learning.
When α = 2, this probability can be analytically evaluated [9] (or via a simple geometric argument):

$$P_2 = \Pr\left(\mathrm{sign}(x_j) \neq \mathrm{sign}(y_j)\right) = \frac{1}{\pi}\cos^{-1}\rho_2 \quad (5)$$

which is an important result known as sim-hash [5]. For α < 2, the collision probability is an
open problem. When the data are nonnegative, this paper (Theorem 1) will prove a bound of P_α
for general 0 < α ≤ 2. The bound is exact at α = 2 and becomes less sharp as α moves away
from 2. Furthermore, for α = 1 and nonnegative data, we have the interesting observation that the
probability P₁ can be well approximated as functions of the χ² similarity ρ_χ².
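The α = 2 case (5) is easy to verify by simulation; the sketch below is our own setup, not code from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)
D, k = 50, 200_000
u = rng.random(D)
v = rng.random(D)

# Correlation rho_2 as in eq. (1).
rho2 = u @ v / np.sqrt((u @ u) * (v @ v))

# Normal (alpha = 2) random projections; keep only the signs.
R = rng.standard_normal((D, k))
x, y = u @ R, v @ R
collision = np.mean(np.sign(x) != np.sign(y))

# Eq. (5): P_2 = (1/pi) * arccos(rho_2).
assert abs(collision - np.arccos(rho2) / np.pi) < 0.01
```

With k = 200,000 projections, the empirical collision fraction matches (1/π) cos⁻¹ ρ₂ to within sampling noise.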
1.2 The Advantages of Sign Stable Random Projections
1. There is a significant saving in storage space by using only 1 bit instead of (e.g.,) 64 bits.
2. This scheme leads to an efficient linear algorithm (e.g., linear SVM). For example, a negative sign can be coded as "01" and a positive sign as "10" (i.e., a vector of length 2). With
k projections, we concatenate k short vectors to form a vector of length 2k. This idea is
inspired by b-bit minwise hashing [20], which was designed for binary sparse data.
3. This scheme also leads to an efficient near neighbor search algorithm [8, 12]. We can code
a negative sign by "0" and a positive sign by "1" and concatenate k such bits to form a hash
table of 2^k buckets. In the query phase, one only searches for similar vectors in one bucket.
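The hash-table idea in item 3 can be sketched as follows (our own illustration; the bucket construction details are assumptions, not from the paper, and α = 2 is used for simplicity):

```python
import numpy as np
from collections import defaultdict

rng = np.random.default_rng(2)
D, k, n = 30, 12, 1000

data = rng.random((n, D))
R = rng.standard_normal((D, k))   # projection matrix (alpha = 2 here)

# k sign bits per vector -> an integer bucket id in [0, 2^k).
bits = (data @ R > 0).astype(np.int64)
powers = 2 ** np.arange(k)
codes = bits @ powers

table = defaultdict(list)
for idx, code in enumerate(codes):
    table[int(code)].append(idx)

# Query: only candidates falling in the same bucket are examined.
q = data[0]
q_code = int(((q @ R > 0).astype(np.int64) @ powers))
candidates = table[q_code]
assert 0 in candidates
```

Vectors with similar sign patterns land in the same bucket, so the query only scans a small candidate set instead of all n points.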
1.3 Data Stream Computations
Stable random projections are naturally suitable for data streams. In modern applications, massive
datasets are often generated in a streaming fashion, which are difficult to transmit and store [22], as
the processing is done on the fly in one pass of the data. In the standard turnstile model [22], a data
stream can be viewed as a high-dimensional vector with the entry values changing over time.

Here, we denote a stream at time t by u_i^{(t)}, i = 1 to D. At time t, a stream element (i_t, I_t)
arrives and updates the i_t-th coordinate as u_{i_t}^{(t)} = u_{i_t}^{(t-1)} + I_t. Clearly, the turnstile data stream
model is particularly suitable for describing histograms and it is also a standard model for network
traffic summarization and monitoring [31]. Because this stream model is linear, methods based on
linear projections (i.e., matrix-vector multiplications) can naturally handle streaming data of this
sort. Basically, entries of the projection matrix R ∈ ℝ^{D×k} are (re)generated as needed using
pseudo-random number techniques [23]. As (i_t, I_t) arrives, only the entries in the i_t-th row, i.e.,
r_{i_t,j}, j = 1 to k, are (re)generated and the projected data are updated as x_j^{(t)} = x_j^{(t-1)} + I_t · r_{i_t,j}.
Recall that, in the definition of the χ² similarity, the data are assumed to be normalized (summing to
1). For nonnegative streams, the sum can be computed error-free by using merely one counter:
Σ_{i=1}^D u_i^{(t)} = Σ_{s=1}^t I_s. Thus we can still use, without loss of generality, the sum-to-one assumption, even in the streaming environment. This fact was recently exploited by another data stream
algorithm named Compressed Counting (CC) [18] for estimating the Shannon entropy of streams.
Because the use of the χ² similarity is popular in (e.g.,) computer vision, recently there are other
proposals for estimating the χ² similarity. For example, [15] proposed a nice technique to approximate ρ_χ² by first expanding the data from D dimensions to (e.g.,) 5 ∼ 10 × D dimensions through
a nonlinear transformation and then applying normal random projections on the expanded data. The
nonlinear transformation makes their method not applicable to data streams, unlike our proposal.

For notational simplicity, we will drop the superscript (t) for the rest of the paper.
2 An Experimental Study of Chi-Square Kernels
We provide an experimental study to validate the use of the χ² similarity. Here, the "χ²-kernel" is
defined as K(u, v) = ρ_χ² and the "acos-χ²-kernel" as K(u, v) = 1 − (1/π) cos⁻¹ ρ_χ². With a slight
abuse of terminology, we call both "χ² kernel" when it is clear from the context.
We use the "precomputed kernel" functionality in LIBSVM on two datasets: (i) UCI-PEMS, with
267 training examples and 173 testing examples in 138,672 dimensions; (ii) MNIST-Small, a subset
of the popular MNIST dataset, with 10,000 training examples and 10,000 testing examples.
The results are shown in Figure 1. To compare these two types of χ² kernels with the "linear" kernel,
we also test the same data using LIBLINEAR [6] after normalizing the data to have unit Euclidean
norm, i.e., we basically use ρ₂. For both LIBSVM and LIBLINEAR, we use l₂-regularization with
a regularization parameter C and we report the test errors for a wide range of C values.
[Figure 1 here: classification accuracy versus C, two panels (PEMS left, MNIST-Small right), comparing the linear, χ², and acos-χ² kernels.]
Figure 1: Classification accuracies. C is the l₂-regularization parameter. We use LIBLINEAR
for the "linear" (i.e., ρ₂) kernel and LIBSVM "precomputed kernel" for the two types of χ² kernels ("χ²-kernel" and "acos-χ²-kernel"). For UCI-PEMS, the χ²-kernel has better performance than the linear
kernel and the acos-χ²-kernel. For MNIST-Small, both χ² kernels noticeably outperform the linear kernel.
Note that MNIST-Small used the original MNIST test set and merely 1/6 of the original training set.
Here, we should state that it is not the intention of this paper to use these two small examples
to conclude the advantage of χ² kernels over the linear kernel. We simply use them to validate our
proposed method, which is general-purpose and is not limited to data generated from histograms.
3 Sign Stable Random Projections and the Collision Probability Bound
We apply stable random projections on two vectors u, v ∈ ℝ^D: x = Σ_{i=1}^D u_i r_i, y = Σ_{i=1}^D v_i r_i,
with r_i ∼ S(α, 1), i.i.d. Here Z ∼ S(α, γ) denotes a symmetric α-stable distribution with scale γ,
whose characteristic function [24] is

$$E\left(e^{\sqrt{-1}\,Z t}\right) = e^{-\gamma |t|^\alpha}.$$

By properties of stable distributions,
we know x − y ∼ S(α, Σ_{i=1}^D |u_i − v_i|^α). Applications including linear learning and near neighbor
search will benefit from sign α-stable random projections. When α = 2 (i.e., normal), the collision
probability Pr(sign(x) ≠ sign(y)) is known [5, 9]. For α < 2, it is a difficult probability problem.
This section provides a bound of Pr(sign(x) ≠ sign(y)), which is fairly accurate for α close to 2.
3.1 Collision Probability Bound
In this paper, we focus on nonnegative data (as common in practice). We present our first theorem.

Theorem 1. When the data are nonnegative, i.e., u_i ≥ 0, v_i ≥ 0, we have

$$\Pr\left(\mathrm{sign}(x) \neq \mathrm{sign}(y)\right) \le \frac{1}{\pi}\cos^{-1}\rho_\alpha, \quad \text{where } \rho_\alpha = \left[\frac{\sum_{i=1}^D u_i^{\alpha/2} v_i^{\alpha/2}}{\sqrt{\sum_{i=1}^D u_i^\alpha}\,\sqrt{\sum_{i=1}^D v_i^\alpha}}\right]^{2/\alpha} \quad (6)$$

For α = 2, this bound is exact [5, 9]. In fact the result for α = 2 leads to the following lemma:

Lemma 1. The kernel defined as K(u, v) = 1 − (1/π) cos⁻¹ ρ₂ is positive definite (PD).

Proof: The indicator function 1{sign(x) = sign(y)} can be written as an inner product (hence PD)
and Pr(sign(x) = sign(y)) = E(1{sign(x) = sign(y)}) = 1 − (1/π) cos⁻¹ ρ₂.
3.2 A Simulation Study to Verify the Bound of the Collision Probability
We generate the original data u and v by sampling from a bivariate t-distribution, which has two
parameters: the correlation and the number of degrees of freedom (which is taken to be 1 in our
experiments). We use a full range of the correlation parameter from 0 to 1 (spaced at 0.01). To
generate positive data, we simply take the absolute values of the generated data. Then we fix the
data as our original data (like u and v), apply sign stable random projections, and report the empirical
collision probabilities (after 10^5 repetitions).
Figure 2 presents the simulated collision probability Pr(sign(x) ≠ sign(y)) for D = 100 and α ∈
{1.5, 1.2, 1.0, 0.5}. In each panel, the dashed curve is the theoretical upper bound (1/π) cos⁻¹ ρ_α, and
the solid curve is the simulated collision probability. Note that it is expected that the simulated data
can not cover the entire range of ρ_α values, especially as α → 0.
[Figure 2 here: four panels (α = 1.5, 1.2, 1.0, 0.5; D = 100), empirical collision probability versus ρ_α.]
Figure 2: Dense Data and D = 100. Simulated collision probability Pr(sign(x) ≠ sign(y)) for
sign stable random projections. In each panel, the dashed curve is the upper bound (1/π) cos⁻¹ ρ_α.
Figure 2 verifies the theoretical upper bound (1/π) cos⁻¹ ρ_α. When α ≥ 1.5, this upper bound is fairly
sharp. However, when α ≤ 1, the bound is not tight, especially for small ρ_α. Also, the curves of the
empirical collision probabilities are not smooth (in terms of ρ_α).
Real-world high-dimensional datasets are often sparse. To verify the theoretical upper bound of
the collision probability on sparse data, we also simulate sparse data by randomly making 50% of
the generated data as used in Figure 2 be zero. With sparse data, it is even more obvious that the
theoretical upper bound (1/π) cos⁻¹ ρ_α is not sharp when α ≤ 1, as shown in Figure 3.
[Figure 3 here: four panels (α = 1.5, 1.2, 1.0, 0.5; D = 100, sparse data), empirical collision probability versus ρ_α.]
Figure 3: Sparse Data and D = 100. Simulated collision probability Pr(sign(x) ≠ sign(y)) for
sign stable random projections. The upper bound is not tight especially when α ≤ 1.
In summary, the collision probability bound Pr(sign(x) ≠ sign(y)) ≤ (1/π) cos⁻¹ ρ_α is fairly sharp
when α is close to 2 (e.g., α ≥ 1.5). However, for α ≤ 1, a better approximation is needed.
4 α = 1 and Chi-Square (χ²) Similarity

In this section, we focus on nonnegative data (u_i ≥ 0, v_i ≥ 0) and α = 1. This case is important in
practice. For example, we can view the data (u_i, v_i) as empirical probabilities, which are common
when data are generated from histograms (as popular in NLP and vision) [4, 10, 13, 2, 28, 27, 26].
In this context, we always normalize the data, i.e., Σ_{i=1}^D u_i = Σ_{i=1}^D v_i = 1. Theorem 1 implies

$$\Pr\left(\mathrm{sign}(x) \neq \mathrm{sign}(y)\right) \le \frac{1}{\pi}\cos^{-1}\rho_1, \quad \text{where } \rho_1 = \left(\sum_{i=1}^D u_i^{1/2} v_i^{1/2}\right)^2 \quad (7)$$
While the bound is not tight, interestingly, the collision probability can be related to the χ² similarity.
Recall the definitions of the chi-square distance d_χ² = Σ_{i=1}^D (u_i − v_i)²/(u_i + v_i) and the chi-square similarity
ρ_χ² = 1 − (1/2) d_χ² = Σ_{i=1}^D 2u_iv_i/(u_i + v_i). In this context, we should view 0/0 = 0.
Lemma 2. Assume u_i ≥ 0, v_i ≥ 0, Σ_{i=1}^D u_i = 1, Σ_{i=1}^D v_i = 1. Then

$$\rho_{\chi^2} = \sum_{i=1}^D \frac{2 u_i v_i}{u_i + v_i} \;\ge\; \rho_1 = \left(\sum_{i=1}^D u_i^{1/2} v_i^{1/2}\right)^2 \quad (8)$$

It is known that the χ²-kernel is PD [10]. Consequently, we know the acos-χ²-kernel is also PD.

Lemma 3. The kernel defined as K(u, v) = 1 − (1/π) cos⁻¹ ρ_χ² is positive definite (PD).

The remaining question is how to connect Cauchy random projections with the χ² similarity.
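Lemma 2 is easy to check numerically; this sanity check is our own, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(3)
for _ in range(100):
    # Random nonnegative pairs, each normalized to sum to one.
    u = rng.random(50); u /= u.sum()
    v = rng.random(50); v /= v.sum()
    den = u + v
    # rho_chi2 (eq. 2), with the 0/0 := 0 convention.
    rho_chi2 = np.sum(np.divide(2 * u * v, den,
                                out=np.zeros_like(den), where=den > 0))
    # rho_1 (eq. 7).
    rho_1 = np.sum(np.sqrt(u * v)) ** 2
    assert rho_chi2 >= rho_1 - 1e-12   # Lemma 2
```

On every random draw the inequality ρ_χ² ≥ ρ₁ holds, as Lemma 2 guarantees.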
5 Two Approximations of Collision Probability for Sign Cauchy Projections

It is a difficult problem to derive the collision probability of sign Cauchy projections if we would
like to express the probability only in terms of certain summary statistics (e.g., some distance). Our
first observation is that the collision probability can be well approximated using the χ² similarity:

$$\Pr\left(\mathrm{sign}(x) \neq \mathrm{sign}(y)\right) \approx P_{\chi^2(1)} = \frac{1}{\pi}\cos^{-1}\rho_{\chi^2} \quad (9)$$
Figure 4 shows this approximation is better than (1/π) cos⁻¹(ρ₁). Particularly, in sparse data, the
approximation (1/π) cos⁻¹(ρ_χ²) is very accurate (except when ρ_χ² is close to 1), while the bound
(1/π) cos⁻¹(ρ₁) is not sharp (and the curve is not smooth in ρ₁).
[Figure 4 here: two panels (dense left, sparse right; α = 1, D = 100), empirical collision probability versus ρ₁ and ρ_χ².]
Figure 4: The dashed curve is (1/π) cos⁻¹(ρ), where ρ can be ρ₁ or ρ_χ² depending on the context. In
each panel, the two solid curves are the empirical collision probabilities in terms of ρ₁ (labeled
"ρ₁") or ρ_χ² (labeled "χ²"). It is clear that the proposed approximation (1/π) cos⁻¹ ρ_χ² in (9) is more
tight than the upper bound (1/π) cos⁻¹ ρ₁, especially so in sparse data.
Our second (and less obvious) approximation is the following integral:

$$\Pr\left(\mathrm{sign}(x) \neq \mathrm{sign}(y)\right) \approx P_{\chi^2(2)} = \frac{1}{2} - \frac{2}{\pi^2}\int_0^{\pi/2} \tan^{-1}\left(\frac{\rho_{\chi^2}}{2 - 2\rho_{\chi^2}}\tan t\right) dt \quad (10)$$
Figure 5 illustrates that, for dense data, the second approximation (10) is more accurate than the
first (9). The second approximation (10) is also accurate for sparse data. Both approximations,
P_χ²(1) and P_χ²(2), are monotone functions of ρ_χ². In practice, we often do not need the ρ_χ² values
explicitly because it often suffices if the collision probability is a monotone function of the similarity.
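Both approximations can be evaluated numerically and compared against a direct simulation of sign Cauchy projections. The sketch below is our own; eq. (10) is integrated with a simple midpoint rule:

```python
import numpy as np

def p_chi2_1(rho):
    """First approximation, eq. (9)."""
    return np.arccos(rho) / np.pi

def p_chi2_2(rho, n=10_000):
    """Second approximation, eq. (10), via the midpoint rule on (0, pi/2)."""
    t = (np.arange(n) + 0.5) * (np.pi / 2) / n
    integrand = np.arctan(rho / (2 - 2 * rho) * np.tan(t))
    return 0.5 - (2 / np.pi**2) * integrand.sum() * (np.pi / 2) / n

# Simulate sign Cauchy projections on one normalized pair (u, v).
rng = np.random.default_rng(4)
u = rng.random(100); u /= u.sum()
v = rng.random(100); v /= v.sum()
den = u + v
rho = np.sum(np.divide(2 * u * v, den, out=np.zeros_like(den), where=den > 0))

R = rng.standard_cauchy((100, 100_000))
emp = np.mean(np.sign(u @ R) != np.sign(v @ R))

# Both approximations should be close to the empirical collision probability.
assert abs(emp - p_chi2_1(rho)) < 0.04
assert abs(emp - p_chi2_2(rho)) < 0.04
```

The tolerances are loose deliberately: the approximations are not exact, and the paper's experiments suggest errors on the order of a few percent at most.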
5.1 Binary Data
Interestingly, when the data are binary (before normalization), we can compute the collision probability exactly, which allows us to analytically assess the accuracy of the approximations. In fact,
this case inspired us to propose the second approximation (10), which is otherwise not intuitive.
For convenience, we define a = |I_a|, b = |I_b|, c = |I_c|, where

$$I_a = \{i \mid u_i > 0, v_i = 0\}, \quad I_b = \{i \mid v_i > 0, u_i = 0\}, \quad I_c = \{i \mid u_i > 0, v_i > 0\} \quad (11)$$
Assume binary data (before normalization, i.e., sum to one). That is,

$$u_i = \frac{1}{|I_a| + |I_c|} = \frac{1}{a + c}, \;\forall i \in I_a \cup I_c, \qquad v_i = \frac{1}{|I_b| + |I_c|} = \frac{1}{b + c}, \;\forall i \in I_b \cup I_c \quad (12)$$

The chi-square similarity ρ_χ² becomes ρ_χ² = Σ_{i=1}^D 2u_iv_i/(u_i + v_i) = 2c/(a + b + 2c) and hence
ρ_χ²/(2 − 2ρ_χ²) = c/(a + b).
[Figure 5 here: two panels (dense left, sparse right; α = 1, D = 100), empirical collision probability versus ρ_χ², together with the two approximations χ²(1) and χ²(2).]
Figure 5: Comparison of two approximations: χ²(1) based on (9) and χ²(2) based on (10). The
solid curves (empirical probabilities expressed in terms of ρ_χ²) are the same solid curves labeled
"χ²" in Figure 4. The left panel shows that the second approximation (10) is more accurate in dense
data. The right panel illustrates that both approximations are accurate in sparse data. (9) is slightly
more accurate at small ρ_χ² and (10) is more accurate at ρ_χ² close to 1.
Theorem 2. Assume binary data. When α = 1, the exact collision probability is

$$\Pr\left(\mathrm{sign}(x) \neq \mathrm{sign}(y)\right) = \frac{1}{2} - \frac{2}{\pi^2}\, E\left\{\tan^{-1}\left(\frac{c}{a}|R|\right)\tan^{-1}\left(\frac{c}{b}|R|\right)\right\} \quad (13)$$

where R is a standard Cauchy random variable.
When a = 0 or b = 0, we have E{tan⁻¹((c/a)|R|) tan⁻¹((c/b)|R|)} = (π/2) E{tan⁻¹((c/(a+b))|R|)}. This
observation inspires us to propose the approximation (10):

$$P_{\chi^2(2)} = \frac{1}{2} - \frac{1}{\pi}\, E\left\{\tan^{-1}\left(\frac{c}{a+b}|R|\right)\right\} = \frac{1}{2} - \frac{2}{\pi^2}\int_0^{\pi/2}\tan^{-1}\left(\frac{c}{a+b}\tan t\right) dt$$
To validate this approximation for binary data, we study the difference between (13) and (10), i.e.,

$$Z(a/c, b/c) = \mathrm{Err} = \Pr\left(\mathrm{sign}(x) \neq \mathrm{sign}(y)\right) - P_{\chi^2(2)} = -\frac{2}{\pi^2}\, E\left\{\tan^{-1}\left(\frac{1}{a/c}|R|\right)\tan^{-1}\left(\frac{1}{b/c}|R|\right)\right\} + \frac{1}{\pi}\, E\left\{\tan^{-1}\left(\frac{1}{a/c + b/c}|R|\right)\right\} \quad (14)$$

(14) can be easily computed by simulations. Figure 6 confirms that the errors are larger than zero
and very small. The maximum error is smaller than 0.0192, as proved in Lemma 4.
[Figure 6 here: contour plot of Z(a/c, b/c) (left) and the diagonal curve Z(t) (right).]
Figure 6: Left panel: contour plot for the error Z(a/c, b/c) in (14). The maximum error (which is
< 0.0192) occurs along the diagonal line. Right panel: the diagonal curve of Z(a/c, b/c).
Lemma 4. The error defined in (14) ranges between 0 and Z(t*):

$$0 \le Z(a/c, b/c) \le Z(t^*) = \int_0^\infty \left\{-\frac{2}{\pi^2}\left(\tan^{-1}\frac{r}{t^*}\right)^2 + \frac{1}{\pi}\tan^{-1}\left(\frac{r}{2t^*}\right)\right\}\frac{2}{\pi}\,\frac{1}{1 + r^2}\, dr \quad (15)$$

where t* = 2.77935 is the solution to $\frac{1}{t^2 - 1}\log\frac{2t}{1+t} = \frac{\log(2t)}{(2t)^2 - 1}$. Numerically, Z(t*) = 0.01919.
5.2 An Experiment Based on 3.6 Million English Word Pairs
To further validate the two χ² approximations (in non-binary data), we experiment with a word
occurrence dataset (which is an example of histogram data) from a chunk of D = 2^16 web crawl
documents. There are in total 2,702 words, i.e., 2,702 vectors and 3,649,051 word pairs. The entries
of a vector are the occurrences of the word. This is a typical sparse, non-binary dataset. Interestingly,
the errors of the collision probabilities based on the two χ² approximations are still very small. To report
the results, we apply sign Cauchy random projections 10^7 times to evaluate the approximation errors
of (9) and (10). The results, as presented in Figure 7, again confirm that the upper bound (1/π) cos⁻¹ ρ₁
is not tight and both χ² approximations, P_χ²(1) and P_χ²(2), are accurate.
[Figure 7 here: empirical collision probabilities (left) and approximation errors (right) for the word-pairs data.]
Figure 7: Empirical collision probabilities for 3.6 million English word pairs. In the left panel,
we plot the empirical collision probabilities against ρ₁ (lower, green if color is available) and ρ_χ²
(higher, red). The curves confirm that the bound (1/π) cos⁻¹ ρ₁ is not tight (and the curve is not smooth).
We plot the two χ² approximations as dashed curves which largely match the empirical probabilities
plotted against ρ_χ², confirming that the χ² approximations are good. For smaller ρ_χ² values, the
first approximation P_χ²(1) is slightly more accurate. For larger ρ_χ² values, the second approximation
P_χ²(2) is more accurate. In the right panel, we plot the errors for both P_χ²(1) and P_χ²(2).
6 Sign Cauchy Random Projections for Classification

Our method provides an effective strategy for classification. For each (high-dimensional) data vector, using k sign Cauchy projections, we encode a negative sign as "01" and a positive as "10" (i.e.,
a vector of length 2) and concatenate k short vectors to form a new feature vector of length 2k. We
then feed the new data into a linear classifier (e.g., LIBLINEAR). Interestingly, this linear classifier
approximates a nonlinear kernel classifier based on the acos-χ²-kernel: K(u, v) = 1 − (1/π) cos⁻¹ ρ_χ². See
Figure 8 for the experiments on the same two datasets in Figure 1: UCI-PEMS and MNIST-Small.
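The feature encoding described above can be sketched as follows (our own illustration; the classifier training itself is omitted):

```python
import numpy as np

def sign_cauchy_features(X, k, seed=0):
    """Map each row of X to a 2k-dim binary vector: a positive sign becomes
    (1, 0) and a negative sign (0, 1), suitable for a linear classifier."""
    rng = np.random.default_rng(seed)
    R = rng.standard_cauchy((X.shape[1], k))
    pos = (X @ R > 0)
    out = np.empty((X.shape[0], 2 * k), dtype=np.float32)
    out[:, 0::2] = pos       # "10" for a positive sign
    out[:, 1::2] = ~pos      # "01" for a negative sign
    return out

X = np.random.default_rng(5).random((4, 20))
F = sign_cauchy_features(X, k=16)
assert F.shape == (4, 32)
assert np.all(F.sum(axis=1) == 16)   # exactly one bit set per projection
```

The resulting sparse binary features can be fed to any linear SVM or logistic regression solver; the inner product between two such feature vectors counts sign agreements, which is what ties the linear model to the acos-χ²-kernel.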
[Figure 8 here: classification accuracy versus C for PEMS (left) and MNIST-Small (right), for k = 32 up to 8192 sign Cauchy projections.]
Figure 8: The two dashed (red if color is available) curves are the classification results obtained
using the "acos-χ²-kernel" via the "precomputed kernel" functionality in LIBSVM. The solid (black)
curves are the accuracies using k sign Cauchy projections and LIBLINEAR. The results confirm
that the linear kernel from sign Cauchy projections can approximate the nonlinear acos-χ²-kernel.

Figure 1 has already shown that, for the UCI-PEMS dataset, the χ²-kernel (ρ_χ²) can produce noticeably better classification results than the acos-χ²-kernel (1 − (1/π) cos⁻¹ ρ_χ²). Although our method
does not directly approximate ρ_χ², we can still estimate ρ_χ² by assuming the collision probability
is exactly Pr(sign(x) ≠ sign(y)) = (1/π) cos⁻¹ ρ_χ² and then we can feed the estimated ρ_χ² values
into the LIBSVM "precomputed kernel" for classification. Figure 9 verifies that this method can also
approximate the χ² kernel with enough projections.
[Figure 9 here: classification accuracy versus C for PEMS (left) and MNIST-Small (right), using the estimated χ² kernel, for k = 32 up to 8192 projections.]
Figure 9: Nonlinear kernels. The dashed curves are the classification results obtained using the χ²-kernel and the LIBSVM "precomputed kernel" functionality. We apply k sign Cauchy projections and
estimate ρ_χ², assuming the collision probability is exactly (1/π) cos⁻¹ ρ_χ², and then feed the estimated
ρ_χ² into LIBSVM, again using the "precomputed kernel" functionality.
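Inverting the assumed collision formula gives a simple estimator of ρ_χ² from the observed sign disagreements; the sketch below is our own, and the clipping is our choice:

```python
import numpy as np

def estimate_chi2(signs_x, signs_y):
    """Estimate rho_chi2 from k pairs of signs, assuming
    Pr(sign(x) != sign(y)) = (1/pi) * arccos(rho_chi2)."""
    p_hat = np.mean(signs_x != signs_y)
    return float(np.clip(np.cos(np.pi * p_hat), -1.0, 1.0))

rng = np.random.default_rng(6)
u = rng.random(100); u /= u.sum()
v = rng.random(100); v /= v.sum()
R = rng.standard_cauchy((100, 100_000))
rho_hat = estimate_chi2(np.sign(u @ R), np.sign(v @ R))

# Compare to the exact chi-square similarity of (u, v).
den = u + v
rho = np.sum(np.divide(2 * u * v, den, out=np.zeros_like(den), where=den > 0))
assert abs(rho_hat - rho) < 0.1
```

Since (9) is an approximation rather than an identity, the estimator carries a small bias in addition to sampling noise; the loose tolerance above reflects that.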
7 Conclusion
The use of the χ² similarity is widespread in machine learning, especially when features are generated
from histograms, as common in natural language processing and computer vision. Many prior studies [4, 10, 13, 2, 28, 27, 26] have shown the advantage of using the χ² similarity compared to other
measures such as the l₂ distance. However, for large-scale applications with ultra-high-dimensional
datasets, using the χ² similarity becomes challenging for practical reasons. Simply storing (and maneuvering) all the high-dimensional features would be difficult if there are a large number of observations. Computing all pairwise χ² similarities can be time-consuming and in fact we usually can not
materialize an all-pairwise similarity matrix even if there are merely 10^6 data points. Furthermore,
the χ² similarity is nonlinear, making it difficult to take advantage of modern linear algorithms
which are known to be very efficient, e.g., [14, 25, 6, 3]. When data are generated in a streaming
fashion, computing χ² similarities without storing the original data will be even more challenging.

The method of α-stable random projections (0 < α ≤ 2) [11, 17] is popular for efficiently computing the l_α distances in massive (streaming) data. We propose sign stable random projections by
storing only the signs (i.e., 1 bit) of the projected data. Obviously, the saving in storage would be
a significant advantage. Also, these bits offer the indexing capability which allows efficient search.
For example, we can build hash tables using the bits to achieve sublinear time near neighbor search
(although this paper does not focus on near neighbor search). We can also build efficient linear
classifiers using these bits, for large-scale high-dimensional machine learning applications.

A crucial task in analyzing sign stable random projections is to study the probability of collision (i.e.,
when the two signs differ). We derive a theoretical bound of the collision probability which is exact
when α = 2. The bound is fairly sharp for α close to 2. For α = 1 (i.e., Cauchy random projections), we find the χ² approximation is significantly more accurate. In addition, for binary data, we
analytically show that the errors from using the χ² approximation are less than 0.0192. Experiments
on real and simulated data confirm that our proposed χ² approximations are very accurate.

We are enthusiastic about the practicality of sign stable projections in learning and search applications. The previous idea of using the signs from normal random projections has been widely adopted
in practice, for approximating correlations. Given the widespread use of the χ² similarity and the
simplicity of our method, we expect the proposed method will be adopted by practitioners.
Future research. Many interesting future research topics can be studied. (i) The processing cost
of conducting stable random projections can be dramatically reduced by very sparse stable random
projections [16]. This will make our proposed method even more practical. (ii) We can try to utilize
more than just 1 bit of the projected data, i.e., we can study the general coding problem [19]. (iii)
Another interesting research direction would be to study the use of sign stable projections for sparse signal
recovery (Compressed Sensing) with stable distributions [21]. (iv) When α → 0, the collision
probability becomes Pr(sign(x) ≠ sign(y)) = 1/2 − (1/2) Resemblance, which provides an elegant
mechanism for computing resemblance (of the binary-quantized data) in sparse data streams.

Acknowledgement. The work of Ping Li is supported by NSF-III-1360971, NSF-Bigdata-1419210, ONR-N00014-13-1-0764, and AFOSR-FA9550-13-1-0137. The work of Gennady
Samorodnitsky is supported by ARO-W911NF-12-10385.
References
[1] http://googleresearch.blogspot.com/2010/04/lessons-learned-developing-practical.html.
[2] Bogdan Alexe, Thomas Deselaers, and Vittorio Ferrari. What is an object? In CVPR, pages 73–80, 2010.
[3] Leon Bottou. http://leon.bottou.org/projects/sgd.
[4] Olivier Chapelle, Patrick Haffner, and Vladimir N. Vapnik. Support vector machines for histogram-based image classification. IEEE Trans. Neural Networks, 10(5):1055–1064, 1999.
[5] Moses S. Charikar. Similarity estimation techniques from rounding algorithms. In STOC, 2002.
[6] Rong-En Fan, Kai-Wei Chang, Cho-Jui Hsieh, Xiang-Rui Wang, and Chih-Jen Lin. LIBLINEAR: A library for large linear classification. Journal of Machine Learning Research, 9:1871–1874, 2008.
[7] Yoav Freund, Sanjoy Dasgupta, Mayank Kabra, and Nakul Verma. Learning the structure of manifolds using random projections. In NIPS, Vancouver, BC, Canada, 2008.
[8] Jerome H. Friedman, F. Baskett, and L. Shustek. An algorithm for finding nearest neighbors. IEEE Transactions on Computers, 24:1000–1006, 1975.
[9] Michel X. Goemans and David P. Williamson. Improved approximation algorithms for maximum cut and satisfiability problems using semidefinite programming. Journal of ACM, 42(6):1115–1145, 1995.
[10] Matthias Hein and Olivier Bousquet. Hilbertian metrics and positive definite kernels on probability measures. In AISTATS, pages 136–143, Barbados, 2005.
[11] Piotr Indyk. Stable distributions, pseudorandom generators, embeddings, and data stream computation. Journal of ACM, 53(3):307–323, 2006.
[12] Piotr Indyk and Rajeev Motwani. Approximate nearest neighbors: Towards removing the curse of dimensionality. In STOC, pages 604–613, Dallas, TX, 1998.
[13] Yugang Jiang, Chongwah Ngo, and Jun Yang. Towards optimal bag-of-features for object categorization and semantic video retrieval. In CIVR, pages 494–501, Amsterdam, Netherlands, 2007.
[14] Thorsten Joachims. Training linear SVMs in linear time. In KDD, pages 217–226, Pittsburgh, PA, 2006.
[15] Fuxin Li, Guy Lebanon, and Cristian Sminchisescu. A linear approximation to the χ² kernel with geometric convergence. Technical report, arXiv:1206.4074, 2013.
[16] Ping Li. Very sparse stable random projections for dimension reduction in l_α (0 < α ≤ 2) norm. In KDD, San Jose, CA, 2007.
[17] Ping Li. Estimators and tail bounds for dimension reduction in l_α (0 < α ≤ 2) using stable random projections. In SODA, pages 10–19, San Francisco, CA, 2008.
[18] Ping Li. Improving compressed counting. In UAI, Montreal, CA, 2009.
[19] Ping Li, Michael Mitzenmacher, and Anshumali Shrivastava. Coding for random projections. 2013.
[20] Ping Li, Art B. Owen, and Cun-Hui Zhang. One permutation hashing. In NIPS, Lake Tahoe, NV, 2012.
[21] Ping Li, Cun-Hui Zhang, and Tong Zhang. Compressed counting meets compressed sensing. 2013.
[22] S. Muthukrishnan. Data streams: Algorithms and applications. Foundations and Trends in Theoretical Computer Science, 1:117–236, 2005.
[23] Noam Nisan. Pseudorandom generators for space-bounded computations. In STOC, 1990.
[24] Gennady Samorodnitsky and Murad S. Taqqu. Stable Non-Gaussian Random Processes. Chapman & Hall, New York, 1994.
[25] Shai Shalev-Shwartz, Yoram Singer, and Nathan Srebro. Pegasos: Primal estimated sub-gradient solver for SVM. In ICML, pages 807–814, Corvallis, Oregon, 2007.
[26] Andrea Vedaldi and Andrew Zisserman. Efficient additive kernels via explicit feature maps. IEEE Trans. Pattern Anal. Mach. Intell., 34(3):480–492, 2012.
[27] Sreekanth Vempati, Andrea Vedaldi, Andrew Zisserman, and C. V. Jawahar. Generalized RBF feature maps for efficient detection. In BMVC, pages 1–11, Aberystwyth, UK, 2010.
[28] Gang Wang, Derek Hoiem, and David A. Forsyth. Building text features for object image classification. In CVPR, pages 1367–1374, Miami, Florida, 2009.
[29] Jinjun Wang, Jianchao Yang, Kai Yu, Fengjun Lv, Thomas S. Huang, and Yihong Gong. Locality-constrained linear coding for image classification. In CVPR, pages 3360–3367, San Francisco, CA, 2010.
[30] Kilian Weinberger, Anirban Dasgupta, John Langford, Alex Smola, and Josh Attenberg. Feature hashing for large scale multitask learning. In ICML, pages 1113–1120, 2009.
[31] Haiquan (Chuck) Zhao, Nan Hua, Ashwin Lall, Ping Li, Jia Wang, and Jun Xu. Towards a universal sketch for origin-destination network measurements. In Network and Parallel Computing, pages 201–213, 2011.
9
Ferran Diego
Fred A. Hamprecht
Heidelberg Collaboratory for Image Processing (HCI)
Interdisciplinary Center for Scientific Computing (IWR)
University of Heidelberg, Heidelberg 69115, Germany
{ferran.diego,fred.hamprecht}@iwr.uni-heidelberg.de
Abstract
Bilinear approximation of a matrix is a powerful paradigm of unsupervised learning. In some applications, however, there is a natural hierarchy of concepts that
ought to be reflected in the unsupervised analysis. For example, in the neuroscience image sequences considered here, there are the semantic concepts of pixel
→ neuron → assembly that should find their counterpart in the unsupervised analysis. Driven by this concrete problem, we propose a decomposition of the matrix
of observations into a product of more than two sparse matrices, with the rank decreasing from lower to higher levels. In contrast to prior work, we allow for both
hierarchical and heterarchical relations of lower-level to higher-level concepts. In
addition, we learn the nature of these relations rather than imposing them. Finally,
we describe an optimization scheme that allows to optimize the decomposition
over all levels jointly, rather than in a greedy level-by-level fashion.
The proposed bilevel SHMF (sparse heterarchical matrix factorization) is the first
formalism that allows to simultaneously interpret a calcium imaging sequence in
terms of the constituent neurons, their membership in assemblies, and the time
courses of both neurons and assemblies. Experiments show that the proposed
model fully recovers the structure from difficult synthetic data designed to imitate
the experimental data. More importantly, bilevel SHMF yields plausible interpretations of real-world Calcium imaging data.
1 Introduction
This work was stimulated by a concrete problem, namely the decomposition of state-of-the-art 2D +
time calcium imaging sequences as shown in Fig. 1 into neurons, and assemblies of neurons [20].
Calcium imaging is an increasingly popular tool for unraveling the network structure of local circuits
of the brain [11, 6, 7]. Leveraging sparsity constraints seems natural, given that the neural activations
are sparse in both space and time. The experimentally achievable optical slice thickness still results
in spatial overlap of cells, meaning that each pixel can show intensity from more than one neuron.
In addition, it is anticipated that one neuron can be part of more than one assembly. All neurons of
an assembly are expected to fire at roughly the same time [20].
A standard sparse decomposition of the set of vectorized images into a dictionary and a set of
coefficients would not conform with prior knowledge that we have entities at three levels: the pixels,
the neurons, and the assemblies, see Fig. 2. Also, it would not allow to include structured constraints
[10] in a meaningful way. As a consequence, we propose a multi-level decomposition (Fig. 3) that
• allows enforcing (structured) sparsity constraints at each level,
• admits both hierarchical and heterarchical relations between levels (Fig. 2),
• can be learned jointly (sections 2 and 2.4), and
• yields good results on real-world experimental data (Fig. 2).
Figure 1: Left: frames from a calcium imaging sequence showing firing neurons that were recorded
by an epi-fluorescence microscope. Right: two frames from a synthetic sequence. The underlying
biological aim motivating these experiments is to study the role of neuronal assemblies in memory
consolidation.
1.1 Relation to Previous Work
Most important unsupervised data analysis methods such as PCA, NMF / pLSA, ICA, cluster analysis, sparse coding and others can be written in terms of a bilinear decomposition of, or approximation
to, a two-way matrix of raw data [22]. One natural generalization is to perform multilinear decompositions of multi-way arrays [4] using methods such as higher-order SVD [1]. This is not the direction
pursued here, because the image sequence considered does not have a tensorial structure.
On the other hand, there is a relation to (hierarchical) topic models (e.g. [8]). These do not use structured sparsity constraints, but go beyond our approach in automatically estimating the appropriate
number of levels using nonparametric Bayesian models.
Closest to our proposal are four lines of work that we build on: Jenatton et al. [10] introduce structured sparsity constraints that we use to find dictionary basis functions representing single neurons.
The works [9] and [13] enforce hierarchical (tree-structured) sparsity constraints. These authors find
the tree structure using extraneous methods, such as a separate clustering procedure. In contrast, the
method proposed here can infer either hierarchical (tree-structured) or heterarchical (directed acyclic
graph) relations between entities at different levels. Cichocki and Zdunek [3] proposed a multilayer
approach to non-negative matrix factorization. This is a multi-stage procedure which iteratively decomposes the rightmost matrix of the decomposition that was previously found. Similar approaches
are explored in [23], [24]. Finally, Rubinstein et al. [21] proposed a novel dictionary structure
where each basis function in a dictionary is a linear combination of a few elements from a fixed base
dictionary. In contrast to these last two methods, we optimize over all factors (including the base
dictionary) jointly. Note that our semantics of "bilevel factorization" (section 2.2) are different from
the one in [25].
Notation. A matrix is a set of columns and rows, respectively: $X = [x_{:1}, \ldots, x_{:n}] = [x_{1:}; \ldots; x_{m:}]$. The zero matrix or vector is denoted $\mathbf{0}$, with dimensions inferred from the context. For any vector $x \in \mathbb{R}^m$, $\|x\|_\alpha = \left(\sum_{i=1}^m |x_i|^\alpha\right)^{1/\alpha}$ is the $\ell_\alpha$ (quasi-)norm of $x$, and $\|\cdot\|_F$ is the Frobenius norm.
2 Learning a Sparse Heterarchical Structure
2.1 Dictionary Learning: Single Level Sparse Matrix Factorization
Let $X \in \mathbb{R}^{m \times n}$ be a matrix whose $n$ columns represent one $m$-dimensional observation each. The idea of dictionary learning is to find a decomposition $X \approx D\,[U_0]^T$, see Fig. 3(a). $D$ is called the dictionary, and its columns hold the basis functions in terms of which the sparse coefficients in
U0 approximate the original observations. The regularization term $\Omega_U$ encourages sparsity of the coefficient matrix. $\Omega_D$ prevents the inflation of dictionary entries to compensate for small coefficients, and induces, if desired, additional structure on the learned basis functions [16]. Interesting
theoretical results on support recovery, furthered by an elegantly compact formulation and the ready
availability of optimizers [17] have spawned a large number of intriguing and successful applications, e.g. image denoising [19] and detection of unusual events [26]. Dictionary learning is a special
instance of our framework, involving only a single-level decomposition. In the following we first
generalize to two, then to more levels.
[Figure 2 graphic: activation rasters over time (frames) for neuron ids and assembly ids, linked by their heterarchical correspondence.]
Figure 2: Bottom left: Shown are the temporal activation patterns of individual neurons U0 (lower
level), and assemblies of neurons U1 (upper level). Neurons D and assemblies are related by a
bipartite graph A1 the estimation of which is a central goal of this work. The signature of five
neuronal assemblies (five columns of DA1 ) in the spatial domain is shown at the top. The outlines in
the middle of the bottom show the union of all neurons found in D, superimposed onto a maximum
intensity projection across the background-subtracted raw image sequence. The graphs on the right
show a different view on the transients estimated for single neurons, that is, the rows of U0 . The raw
data comes from a mouse hippocampal slice, where single neurons can indeed be part of more than
one assembly [20]. Analogous results on synthetic data are shown in the supplemental material.
Figure 3: Decomposition of X into {1, 2, 3, L + 1} levels, with corresponding equations.
2.2 Bilevel Sparse Matrix Factorization
We now come to the heart of this work. To build intuition, we first refer to the application that has
motivated this development, before giving mathematical details. The relation between the symbols
used in the following is sketched in Fig. 3(b), while actual matrix contents are partially visualized
in Fig. 2.
Given is a sequence of n noisy sparse images which we vectorize and collect in the columns of
matrix X. We would like to find the following:
• a dictionary D of q0 vectorized images comprising m pixels each. Ideally, each basis
function should correspond to a single neuron.
• a matrix A1 indicating to what extent each of the q0 neurons is associated with any of the
q1 neuronal assemblies. We will call this matrix interchangeably assignment or adjacency
matrix in the following. It is this matrix which encapsulates the quintessential structure
we extract from the raw data, viz., which lower-level concept is associated with which
higher-level concept.
• a coefficient matrix $[U_1]^T$ that encodes in its rows the temporal evolution (activation) of
the q1 neuronal assemblies across n time steps.
• a coefficient matrix $[U_0]^T$ (shown in the equation, but not in the sketch of Fig. 3(b)) that
encodes in its rows the temporal activation of the q0 neuron basis functions across n time
steps.
The quantities D, A1 , [U0 ], [U1 ] in this redundant representation need to be consistent.
Let us now turn to equations. At first sight, it seems like minimizing $\|X - D A_1 [U_1]^T\|_F^2$ over
$D$, $A_1$, $U_1$ subject to constraints should do the job. However, this could be too much of a simplification! To illustrate, assume for the moment that only a single neuronal assembly is active at any
given time. Then all neurons associated with that assembly would follow an absolutely identical
time course. While it is expected that neurons from an assembly show similar activation patterns
[20], this is something we want to glean from the data, and not absolutely impose. In response, we
introduce an auxiliary matrix $U_0 \approx U_1 [A_1]^T$ showing the temporal activation pattern of individual
neurons. These two matrices, U0 and U1 , are also shown in the false color plots of the collage of
Fig. 2, bottom left.
The full equation involving coefficient and auxiliary coefficient matrices is shown in Fig. 3(b). The
terms involving X are data fidelity terms, while $\|U_0 - U_1 [A_1]^T\|_F^2$ enforces consistency. Scalar parameters $\lambda$ trade off the various terms, and constraints of a different kind can be applied selectively to each
of the matrices that we optimize over. Jointly optimizing over D, A1 , U0 , and U1 is a hard and
non-convex problem that we address using a block coordinate descent strategy described in section
2.4 and supplemental material.
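For concreteness, the objective of Fig. 3(b) can be written out directly; the sketch below uses plain $\ell_1$ penalties in place of the structured regularizers, and the trade-off weights `lam`, `mu`, `gamma` are illustrative assumptions:

```python
import numpy as np

def bilevel_objective(X, D, A1, U0, U1, lam=1.0, mu=1.0, gamma=0.1):
    """Data-fit terms at both levels, a consistency term tying U0 to U1 A1^T,
    and l1 penalties standing in for the structured regularizers."""
    fit0 = 0.5 * np.linalg.norm(X - D @ U0.T, 'fro') ** 2
    fit1 = 0.5 * np.linalg.norm(X - D @ A1 @ U1.T, 'fro') ** 2
    consistency = 0.5 * np.linalg.norm(U0 - U1 @ A1.T, 'fro') ** 2
    penalty = np.abs(U0).sum() + np.abs(U1).sum() + np.abs(A1).sum()
    return fit0 + lam * fit1 + mu * consistency + gamma * penalty
```

For perfectly consistent factors (U0 = U1 A1^T and X = D U0^T), all three quadratic terms vanish and only the sparsity penalty remains.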
2.3 Trilevel and Multi-level Sparse Matrix Factorization
We now discuss the generalization to an arbitrary number of levels that may be relevant for applications other than calcium imaging. To give a better feeling for the structure of the equations, the
trilevel case is spelled out explicitly in Fig. 3(c), while Fig. 3(d) shows the general case of L + 1
levels.
The most interesting matrices, in many ways, are the assignment matrices A1 , A2 , etc. Assume,
first, that the relations between lower-level and higher-level concepts obey a strict inclusion hierarchy. Such relations can be expressed in terms of a forest of trees: each highest-level concept is
the root of a tree which fans out to all subordinate concepts. Each subordinate concept has a single
parent only. Such a forest can also be seen as a (special case of an L + 1-partite) graph, with an
adjacency matrix $A_l$ specifying the parents of each concept at level $l-1$. To impose an inclusion hierarchy, one can enforce the nestedness condition by requiring that each row $a_{k:}$ of $A_l$ satisfies $\|a_{k:}\|_0 \leq 1$.
In general, and in the application considered here, one will not want to impose an inclusion hierarchy. In that case, the relations between concepts can be expressed in terms of a concatenation of
bipartite graphs that conform with a directed acyclic graph. Again, the adjacency matrices encode
the structure of such a directed acyclic graph.
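Whether a learned assignment matrix encodes a strict hierarchy (forest) or a heterarchy (DAG) can be read off its rows; a small helper, under the convention that rows index lower-level concepts and columns their candidate parents:

```python
import numpy as np

def relation_type(A, tol=1e-8):
    """Classify an adjacency matrix between two adjacent levels.
    Rows are lower-level concepts, columns their possible parents."""
    parents_per_node = (np.abs(A) > tol).sum(axis=1)
    if (parents_per_node <= 1).all():
        return "forest"       # nestedness holds: at most one parent per concept
    return "heterarchy"       # some concept has several parents (general DAG)
```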
In summary, the general equation in Fig. 3(d) is a principled alternative to simpler approaches that
would impose the relations between concepts, or estimate them separately using, for instance, clustering algorithms; and that would then find a sparse factorization subject to this structure. Instead,
we simultaneously estimate the relation between concepts at different levels, as well as find a sparse
approximation to the raw data.
2.4 Optimization
The optimization problem in Fig. 3(d) is not jointly convex, but becomes convex w.r.t. one variable
while keeping the others fixed, provided that the norms $\Omega_U$, $\Omega_D$, and $\Omega_A$ are also convex. Indeed,
it is possible to define convex norms that not only induce sparse solutions, but also favor non-zero
patterns of a specific structure, such as sets of variables in a convex polygon with certain symmetry
constraints [10]. Following [5], we use such norms to bias towards neuron basis functions holding a
single neuron only. We employ a block coordinate descent strategy [2, Section 2.7] that iteratively
optimizes one group of variables while fixing all others. Due to space limitations, the details and
implementation of the optimization are described in the supplemental material.
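To illustrate the block structure, one sweep of exact per-block least-squares updates on the smooth part of the bilevel objective can be written down in closed form. This is a toy sketch with the sparsity penalties omitted, not the algorithm of the supplemental material; each solve exactly minimizes the smooth objective over its own block, so the objective cannot increase:

```python
import numpy as np

def smooth_objective(X, D, A1, U0, U1):
    return (0.5 * np.linalg.norm(X - D @ U0.T) ** 2
            + 0.5 * np.linalg.norm(X - D @ A1 @ U1.T) ** 2
            + 0.5 * np.linalg.norm(U0 - U1 @ A1.T) ** 2)

def bcd_pass(X, D, A1, U0, U1, eps=1e-10):
    """One block coordinate descent sweep with closed-form per-block updates."""
    q0, q1 = A1.shape
    I0, I1 = np.eye(q0), np.eye(q1)
    # U0-block: (D^T D + I) U0^T = D^T X + A1 U1^T
    U0 = np.linalg.solve(D.T @ D + I0, D.T @ X + A1 @ U1.T).T
    # U1-block: (B^T B + A1^T A1) U1^T = B^T X + A1^T U0^T, with B = D A1
    B = D @ A1
    U1 = np.linalg.solve(B.T @ B + A1.T @ A1 + eps * I1,
                         B.T @ X + A1.T @ U0.T).T
    # A1-block: (D^T D + I) A1 (U1^T U1) = D^T X U1 + U0^T U1
    A1 = np.linalg.solve(D.T @ D + I0, D.T @ X @ U1 + U0.T @ U1) \
        @ np.linalg.inv(U1.T @ U1 + eps * I1)
    # D-block: D (U0^T U0 + A1 U1^T U1 A1^T) = X U0 + X U1 A1^T
    D = (X @ U0 + X @ U1 @ A1.T) \
        @ np.linalg.inv(U0.T @ U0 + A1 @ U1.T @ U1 @ A1.T + eps * I0)
    return D, A1, U0, U1
```

Repeating such sweeps drives the smooth objective monotonically downward from a random initialization; the actual algorithm replaces the plain solves by proximal updates that handle the (structured) sparsity terms.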
3 Methods
3.1 Decomposition into neurons and their transients only
Cell Sorting [18] and Adina [5] focus only on the detection of cell centroids and of cell shape,
and the estimation and analysis of Calcium transient signals. However, these methods provide no
means to detect and identify neuronal co-activation. The key idea is to decompose calcium imaging
data into constituent signal sources, i.e. temporal and spatial components. Cell sorting combines
principal component analysis (PCA) and independent component analysis (ICA). In contrast, Adina
relies on a matrix factorization based on sparse coding and dictionary learning [15], exploiting that
neuronal activity is sparsely distributed in both space and time. Both methods are combined with a
subsequent image segmentation since the spatial components (basis functions) often contain more
than one neuron. Without such a segmentation step, overlapping cells or those with highly correlated
activity are often associated with the same basis function.
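The PCA stage of such a pipeline can be sketched in a few lines via the SVD of the pixels-by-frames movie matrix (the component count k is chosen by hand here; Cell Sorting additionally applies ICA to unmix these components):

```python
import numpy as np

def pca_spatial_components(X, k):
    """X: pixels x frames movie matrix. Returns k spatial components,
    their temporal weights, and the mean image that was removed."""
    mean = X.mean(axis=1, keepdims=True)
    U, s, Vt = np.linalg.svd(X - mean, full_matrices=False)
    spatial = U[:, :k]                 # candidate cell maps (pixels x k)
    temporal = s[:k, None] * Vt[:k]    # k x frames activation traces
    return spatial, temporal, mean
```

If the demeaned movie truly has rank k, the rank-k reconstruction mean + spatial @ temporal is exact up to floating-point error.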
3.2 Decomposition into neurons, their transients, and assemblies of neurons
MNNMF+Adina Here, we combine a multilayer extension of non-negative matrix factorization
with the segmentation from Adina. MNNMF [3] is a multi-stage procedure that iteratively decomposes the rightmost matrix of the decomposition that was previously found. In the first stage, we
decompose the calcium imaging data into spatial and temporal components, just like the methods
cited above, but using NMF and a non-negative least squares loss function [12] as implemented in
[14]. We then use the segmentation from [5] to obtain single neurons in an updated dictionary¹
D. Given this purged dictionary, the temporal components U0 are updated under the NMF criterion. Next, the temporal components U0 are further decomposed into two low-rank matrices,
$U_0 \approx U_1 [A_1]^T$, again using NMF. Altogether, this procedure allows identifying neuronal assemblies and their temporal evolution. However, the exact number of assemblies q1 must be defined a
priori.
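The two-stage structure of MNNMF can be mimicked with plain Lee-Seung multiplicative updates, applied once to X and then again to the temporal factor; the factor sizes and iteration count below are illustrative, and the real baseline additionally interposes the segmentation step:

```python
import numpy as np

def nmf(V, k, n_iter=200, seed=0):
    """Frobenius-loss NMF via multiplicative updates: V ~= W @ H."""
    rng = np.random.default_rng(seed)
    W = rng.random((V.shape[0], k)) + 0.1
    H = rng.random((k, V.shape[1])) + 0.1
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + 1e-12)
        W *= (V @ H.T) / (W @ H @ H.T + 1e-12)
    return W, H

def mnnmf(X, q0, q1):
    """Stage 1: X ~= D @ U0t (neuron level).
    Stage 2: U0t ~= A1 @ U1t (assembly level)."""
    D, U0t = nmf(X, q0)
    A1, U1t = nmf(U0t, q1)
    return D, A1, U0t, U1t
```

Because the second factorization only ever sees the output of the first, errors made at the neuron level cannot be corrected at the assembly level, which is the greedy behavior the joint bilevel formulation avoids.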
KSVDS+Adina allows estimating a sparse decomposition [21] $X \approx D A_1 [U_1]^T$ provided that
i) a dictionary of basis functions and ii) the exact number of assemblies is supplied as input. In
addition, the assignment matrix A1 is typically dense and needs to be thresholded. We obtain good
results when supplying the purged dictionary¹ of single neurons resulting from Adina [5].
SHMF ? Sparse Heterarchical Matrix Factorization in its bilevel formulation decomposes the
raw data simultaneously into neuron basis functions D, a mapping of these to assemblies A1 , as
well as time courses of neurons U0 and assemblies U1 , see equation in Fig. 3. Sparsity is induced
by setting $\Omega_U$ and $\Omega_A$ to the $\ell_1$-norm. In addition, we impose the $\ell_2$-norm at the assembly level ($\Omega_D^1$),
¹ Without such a segmentation step, the dictionary atoms often comprise more than one neuron, and overall results (not shown) are poor.
and let $\Omega_D$ be the structured sparsity-inducing norm proposed by Jenatton et al. [10]. In contrast to
all other approaches described above, this already suffices to produce basis functions that contain,
in most cases, only single neurons. Exceptions arise only in the case of cells which both overlap in
space and have high temporal correlation. For this reason, and for a fair comparison with the other
methods, we again use the segmentation from [5]. For the optimization, D and U0 are initialized
with the results from Adina. U1 is initialized randomly with positive-truncated Gaussian noise,
and A1 by the identity matrix as in KSVDS [21]. Finally, the number of neurons q0 and neuronal
assemblies q1 are set to generous upper bounds of the expected true numbers, and are both set to
equal values (here: q0 = q1 = 60) for simplicity. Note that a precise specification as for the above
methods is not required.
4 Results
To obtain quantitative results, we first evaluate the proposed methods on synthetic image sequences
designed so as to exhibit similar characteristics as the real data. We also report a qualitative analysis
of the performance on real data from [20]. Since neuronal assemblies are still the subject of ongoing
research, ground truth is not available for such real-world data.
4.1 Artificial Sequences
For evaluation, we created 80 synthetic sequences with 450 frames of size $128 \times 128$ pixels at a frame rate of 30 fps. The data is created by randomly selecting cell shapes from 36 different active
cells extracted from real data and placing them at different locations with an overlap of up to 30%.
Each cell is randomly assigned to up to three out of a total of five assemblies. Each assembly fires
according to a dependent Poisson process, with transient shapes following a one-sided exponential
decay with a scale of 500 to 800 ms that is convolved with a Gaussian kernel with $\sigma = 50$ ms. The
dependency is induced by eliminating all transients that overlap by more than 20%. Within such a
transient, the neurons associated with the assembly fire with a probability of 90% each. The number
of cells per assembly varies from 1 to 10, and we use five assemblies in all experiments. Finally,
the synthetic movies are distorted by white Gaussian noise with a relative amplitude (max.\ intensity $-$ mean intensity)$/\sigma_{\mathrm{noise}} \in \{3, 5, 7, 10, 12, 15, 17, 20\}$. By construction, the identity, location and
activity patterns of all cells along with their membership in assemblies are known. The supplemental
material shows one example, and two frames are shown in Fig. 1.
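The combinatorial core of this protocol (random neuron-to-assembly assignments, assembly firing, 90% per-neuron participation) can be mimicked at toy scale; the sketch below deliberately omits cell shapes, transient waveforms, and noise:

```python
import numpy as np

def simulate_assemblies(n_neurons=20, n_assemblies=5, n_frames=450,
                        fire_prob=0.02, participation=0.9, seed=0):
    """Returns the binary assignment matrix A (neurons x assemblies)
    and a binary activity raster (neurons x frames)."""
    rng = np.random.default_rng(seed)
    # each neuron joins 1 to 3 assemblies, as in the paper's protocol
    A = np.zeros((n_neurons, n_assemblies), dtype=int)
    for i in range(n_neurons):
        k = rng.integers(1, 4)
        A[i, rng.choice(n_assemblies, size=k, replace=False)] = 1
    # assemblies fire independently per frame; members join with prob. 0.9
    assembly_fires = rng.random((n_assemblies, n_frames)) < fire_prob
    keep = rng.random((n_neurons, n_frames)) < participation
    raster = ((A @ assembly_fires) > 0) & keep
    return A, raster.astype(int)
```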
Identification of assemblies. First, we want to quantify the ability to correctly infer assemblies
from an image sequence. To that end, we compute the graph edit distance of the estimated assignments of neurons to assemblies, encoded in matrices A1 , to the known ground truth. We count the
number of false positive and false negative edges in the assignment graphs, where vertices (assemblies) are matched by minimizing the Hamming distance between binarized assignment matrices
over all permutations.
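For the small assembly counts used here, the matching over permutations can be done exhaustively; a sketch of the edge-error count (a real evaluation would switch to the Hungarian algorithm for larger q1):

```python
import itertools
import numpy as np

def edge_errors(A_true, A_est):
    """Binary assignment matrices, neurons x assemblies. Returns the
    (false_pos, false_neg) edge counts under the best column permutation."""
    q = A_true.shape[1]
    best = None
    for perm in itertools.permutations(range(q)):
        P = A_est[:, list(perm)]
        fp = int(((P == 1) & (A_true == 0)).sum())
        fn = int(((P == 0) & (A_true == 1)).sum())
        if best is None or fp + fn < sum(best):
            best = (fp, fn)
    return best
```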
Remember that MNNMF+Adina and KSVDS+Adina require a specification of the precise number
of assemblies, which is unknown for real data. Accordingly, adjacency matrices $A_1 \in \mathbb{R}^{q_0 \times q_1}$ were estimated for different values of the number of assemblies $q_1 \in [3, 7]$. Bilevel SHMF only needs
an upper bound on the number of assemblies. Its performance is independent of the precise value,
but computational cost increases with the bound. In these experiments, q1 was set to 60.
Fig. 4 shows that all methods from section 3.2 give respectable performance in the task of inferring
neuronal assemblies from nontrivial synthetic image sequences. For the true number of assemblies
(q1 = 5), Bilevel SHMF reaches a higher sensitivity than the alternative methods, with a median
difference of 14%. According to the quartiles, the precisions achieved are broadly comparable, with
MNNMF+Adina reaching the highest value.
All methods from section 3.2 also infer the temporal activity of all assemblies, U1 . We omit a
comparison of these matrices for lack of a good metric that would also take into account the correctness of the assemblies themselves: a fine time course has little worth if its associated assembly is
deficient, for instance by having lost some neurons with respect to ground truth.
[Figure 4 panels: Sensitivity (left) and Precision (right).]
Figure 4: Performance on learning correct assignments of neurons to assemblies from nontrivial
synthetic data with ground truth. KSVDS+Adina and MNNMF+Adina require that the number of
assemblies q1 be fixed in advance. In contrast, bilevel SHMF estimates the number of assemblies
given an upper-bound. Its performance is hence shown as a constant over the q1 -axis. Plots show
the median as well as the band between the lower and the upper quartile for all 80 sequences. Colors
at non-integer q1 -values are a guide to the eye.
Detection of calcium transients. While the detection of assemblies as evaluated above is completely new in the literature, we now turn to a better studied [18, 5] problem: the detection of
calcium transients of individual neurons. Some estimates for these characteristic waveforms are
also shown, for real-world data, on the right hand side of Fig. 2.
To quantify transient detection performance, we compute the sensitivity and precision as in [20].
Here, sensitivity is the ratio of correctly detected to all neuronal activities; and precision is the ratio
of correctly detected to all detected neuronal activities. Results are shown in Fig. 5.
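Stated as code, the two scores are simple ratios over true, detected, and correctly detected transients:

```python
def sensitivity_precision(n_true, n_detected, n_correct):
    """n_true: ground-truth transients; n_detected: detections made;
    n_correct: detections that match a true transient."""
    sensitivity = n_correct / n_true if n_true else 0.0
    precision = n_correct / n_detected if n_detected else 0.0
    return sensitivity, precision
```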
Figure 5: Sensitivity and precision of transient detection for individual neurons. Methods that
estimate both assemblies and neuron transients perform at least as well as their simpler counterparts
that focus on the latter.
Perhaps surprisingly, the methods from section 3.2 (MNNMF+Adina and Bilevel SHMF²) fare at
least as well as those from section 3.1 (CellSorting and Adina). This is not self-evident, because a
bilevel factorization could be expected to be more ill-posed than a single level factorization.
We make two observations: Firstly, it seems that using a bilevel representation with suitable regularization constraints helps stabilize the activity estimates also for single neurons. Secondly, the higher
sensitivity and similar precision of bilevel SHMF compared to MNNMF+Adina suggest that a joint
estimation of neurons, assemblies and their temporal activities as described in section 2 increases the
robustness and compensates for errors that would not be corrected in a greedy level-by-level estimation.
Incidentally, the great spread of both sensitivities and precisions results from the great variety of
noise levels used in the simulations, and attests to the difficulty of part of the synthetic data sets.
² KSVDS is not evaluated here because it does not yield activity estimates for individual neurons.
[Figure 6 layout: columns Raw data, Cell Sorting [18], Adina [5]; rows Neurons ($D[U_0]^T$) and Assemblies ($D A_1 [U_1]^T$).]
Figure 6: Three examples of raw data and reconstructed images of the times indicated in Fig. 2. The
other examples are shown in the supplemental material.
4.2 Real Sequences
We have applied bilevel SHMF to epifluorescent data sets from mice (C57BL6) hippocampal slice
cultures. As shown in Fig. 2, the method is able to distinguish overlapping cells and highly correlated
cells, while at the same time estimating neuronal co-activation patterns (assemblies). Exploiting
spatio-temporal sparsity and convex cell shape priors allows to accurately infer the transient events.
5 Discussion
The proposed multi-level sparse factorization essentially combines a clustering of concepts across
several levels (expressed by the assignment matrices) with the finding of a basis dictionary, shared
by concepts at all levels, and the finding of coefficient matrices for different levels. The formalism
allows imposing different regularizers at different levels. Users need to choose trade-off and regularization parameters that indirectly determine the number of concepts (clusters) found at each level, and the sparsity.
The ranks $q_l$, on the other hand, are less important: Figure 2 shows that the ranks of the estimated matrices can be lower than their nominal dimensionality, since superfluous degrees of freedom are simply
not used.
On the application side, the proposed method accomplishes the detection of neurons and assemblies, and of their relation, in a single framework, exploiting sparseness in the temporal and spatial domains in the process. Bilevel SHMF in particular is able to automatically detect, and differentiate between, overlapping and highly correlated cells, and to estimate the underlying co-activation patterns. As shown in Fig. 6, this approach is able to reconstruct the raw data at both levels of representation, and to make plausible proposals for neuron and assembly identification.
Given the experimental importance of calcium imaging, automated methods in the spirit of the
one described here can be expected to become an essential tool for the investigation of complex
activation patterns in live neural tissue.
Acknowledgement
We are very grateful for partial financial support by CellNetworks Cluster (EXC81). We also thank
Susanne Reichinnek, Martin Both and Andreas Draguhn for their comments on the manuscript.
References
[1] G. Bergqvist and E. G. Larsson. The Higher-Order Singular Value Decomposition: Theory and an Application. IEEE Signal Processing Magazine, 27(3):151–154, 2010.
[2] D. P. Bertsekas. Nonlinear Programming. Athena Scientific, 1999.
[3] A. Cichocki and R. Zdunek. Multilayer nonnegative matrix factorization. Electronics Letters, 42:947–948, 2006.
[4] A. Cichocki, R. Zdunek, A. H. Phan, and S. Amari. Nonnegative Matrix and Tensor Factorizations: Applications to Exploratory Multi-way Data Analysis and Blind Source Separation. Wiley, 2009.
[5] F. Diego, S. Reichinnek, M. Both, and F. A. Hamprecht. Automated identification of neuronal activity
from calcium imaging by sparse dictionary learning. In International Symposium on Biomedical Imaging,
in press, 2013.
[6] W. Goebel and F. Helmchen. In vivo calcium imaging of neural network function. Physiology, 2007.
[7] C. Grienberger and A. Konnerth. Imaging calcium in neurons. Neuron, 2011.
[8] Q. Ho, J. Eisenstein, and E. P. Xing. Document hierarchies from text and links. In Proc. of the 21st Int. World Wide Web Conference (WWW 2012), pages 739–748. ACM, 2012.
[9] R. Jenatton, A. Gramfort, V. Michel, G. Obozinski, E. Eger, F. Bach, and B. Thirion. Multi-scale Mining
of fMRI data with Hierarchical Structured Sparsity. SIAM Journal on Imaging Sciences, 5(3), 2012.
[10] R. Jenatton, G. Obozinski, and F. Bach. Structured sparse principal component analysis. In International
Conference on Artificial Intelligence and Statistics (AISTATS), 2010.
[11] J. Kerr and W. Denk. Imaging in vivo: watching the brain in action. Nature Review Neuroscience, 2008.
[12] H. Kim and H. Park. Nonnegative matrix factorization based on alternating nonnegativity constrained
least squares and active set method. SIAM J. on Matrix Analysis and Applications, 2008.
[13] S. Kim and E. P. Xing. Tree-guided group lasso for multi-response regression with structured sparsity,
with an application to eQTL mapping. Ann. Appl. Stat., 2012.
[14] Y. Li and A. Ngom. The non-negative matrix factorization toolbox for biological data mining. In BMC
Source Code for Biology and Medicine, 2013.
[15] J. Mairal, F. Bach, J. Ponce, and G. Sapiro. Online dictionary learning for sparse coding. In Proceedings
of the 26th Annual International Conference on Machine Learning, 2009.
[16] J. Mairal, F. Bach, J. Ponce, and G. Sapiro. Online Learning for Matrix Factorization and Sparse Coding.
Journal of Machine Learning Research, 2010.
[17] J. Mairal, F. Bach, J. Ponce, G. Sapiro, and R. Jenatton. Sparse modeling software. http://spams-devel.gforge.inria.fr/.
[18] E. A. Mukamel, A. Nimmerjahn, and M. J. Schnitzer. Automated analysis of cellular signals from large-scale calcium imaging data. Neuron, 2009.
[19] M. Protter and M. Elad. Image sequence denoising via sparse and redundant representations. IEEE
Transactions on Image Processing, 18(1), 2009.
[20] S. Reichinnek, A. von Kameke, A. M. Hagenston, E. Freitag, F. C. Roth, H. Bading, M. T. Hasan,
A. Draguhn, and M. Both. Reliable optical detection of coherent neuronal activity in fast oscillating
networks in vitro. NeuroImage, 60(1), 2012.
[21] R. Rubinstein, M. Zibulevsky, and M. Elad. Double sparsity: Learning sparse dictionaries for sparse
signal approximation. IEEE Transactions on Signal Processing, 2010.
[22] A. P. Singh and G. J. Gordon. A unified view of matrix factorization models. ECML PKDD, 2008.
[23] M. Sun and H. Van Hamme. A two-layer non-negative matrix factorization model for vocabulary discovery. In Symposium on machine learning in speech and language processing, 2011.
[24] Q. Sun, P. Wu, Y. Wu, M. Guo, and J. Lu. Unsupervised multi-level non-negative matrix factorization
model: Binary data case. Journal of Information Security, 2012.
[25] J. Yang, Z. Wang, Z. Lin, X. Shu, and T. S. Huang. Bilevel sparse coding for coupled feature spaces. In CVPR'12, pages 2360–2367. IEEE, 2012.
[26] B. Zhao, L. Fei-Fei, and E. P. Xing. Online detection of unusual events in videos via dynamic sparse
coding. In The Twenty-Fourth IEEE Conference on Computer Vision and Pattern Recognition, Colorado
Springs, CO, June 2011.
A New Convex Relaxation for Tensor Completion
Bernardino Romera-Paredes
Department of Computer Science
and UCL Interactive Centre
University College London
Malet Place, London WC1E 6BT, UK
[email protected]
Massimiliano Pontil
Department of Computer Science and
Centre for Computational Statistics
and Machine Learning
University College London
Malet Place, London WC1E 6BT, UK
[email protected]
Abstract
We study the problem of learning a tensor from a set of linear measurements.
A prominent methodology for this problem is based on a generalization of trace
norm regularization, which has been used extensively for learning low rank matrices, to the tensor setting. In this paper, we highlight some limitations of this
approach and propose an alternative convex relaxation on the Euclidean ball. We
then describe a technique to solve the associated regularization problem, which
builds upon the alternating direction method of multipliers. Experiments on one
synthetic dataset and two real datasets indicate that the proposed method improves
significantly over tensor trace norm regularization in terms of estimation error,
while remaining computationally tractable.
1 Introduction
In recent years, there has been growing interest in the problem of learning a tensor from a set of linear measurements, such as a subset of its entries; see [9, 17, 22, 23, 25, 26, 27] and
references therein. This methodology, which is also referred to as tensor completion, has been
applied to various fields, ranging from collaborative filtering [15], to computer vision [17], and
medical imaging [9], among others. In this paper, we propose a new method to tensor completion,
which is based on a convex regularizer which encourages low rank tensors and develop an algorithm
for solving the associated regularization problem.
Arguably the most widely used convex approach to tensor completion is based upon the extension
of trace norm regularization [24] to that context. This involves computing the average of the trace
norm of each matricization of the tensor [16]. A key insight behind using trace norm regularization
for matrix completion is that this norm provides a tight convex relaxation of the rank of a matrix
defined on the spectral unit ball [8]. Unfortunately, the extension of this methodology to the more
general tensor setting presents some difficulties. In particular, we shall prove in this paper that the
tensor trace norm is not a tight convex relaxation of the tensor rank.
The above negative result stems from the fact that the spectral norm, used to compute the convex
relaxation for the trace norm, is not an invariant property of the matricization of a tensor. This
observation leads us to take a different route and study afresh the convex relaxation of tensor rank on
the Euclidean ball. We show that this relaxation is tighter than the tensor trace norm, and we describe
a technique to solve the associated regularization problem. This method builds upon the alternating
direction method of multipliers and a subgradient method to compute the proximity operator of the
proposed regularizer. Furthermore, we present numerical experiments on one synthetic dataset and
two real-life datasets, which indicate that the proposed method improves significantly over tensor
trace norm regularization in terms of estimation error, while remaining computationally tractable.
The paper is organized in the following manner. In Section 2, we describe the tensor completion
framework. In Section 3, we highlight some limitations of the tensor trace norm regularizer and
present an alternative convex relaxation for the tensor rank. In Section 4, we describe a method to
solve the associated regularization problem. In Section 5, we report on our numerical experience
with the proposed method. Finally, in Section 6, we summarize the main contributions of this paper
and discuss future directions of research.
2 Preliminaries
In this section, we begin by introducing some notation and then proceed to describe the learning problem. We denote by $\mathbb{N}$ the set of natural numbers and, for every $k \in \mathbb{N}$, we define $[k] = \{1, \dots, k\}$. Let $N \in \mathbb{N}$ and let$^1$ $p_1, \dots, p_N \geq 2$. An $N$-order tensor $\mathcal{W} \in \mathbb{R}^{p_1 \times \cdots \times p_N}$ is a collection of real numbers $(\mathcal{W}_{i_1,\dots,i_N} : i_n \in [p_n],\ n \in [N])$. Boldface Euler scripts, e.g. $\mathcal{W}$, will be used to denote tensors of order higher than two. Vectors are 1-order tensors and will be denoted by lower case letters, e.g. $x$ or $a$; matrices are 2-order tensors and will be denoted by upper case letters, e.g. $W$. If $x \in \mathbb{R}^d$ then, for every $r \leq s \leq d$, we define $x_{r:s} := (x_i : r \leq i \leq s)$. We also use the notation $p_{\min} = \min\{p_1, \dots, p_N\}$ and $p_{\max} = \max\{p_1, \dots, p_N\}$.

A mode-$n$ fiber of a tensor $\mathcal{W}$ is a vector composed of the elements of $\mathcal{W}$ obtained by fixing all indices but one, corresponding to the $n$-th mode. This notion is a higher order analogue of columns (mode-1 fibers) and rows (mode-2 fibers) for matrices. The mode-$n$ matricization (or unfolding) of $\mathcal{W}$, denoted by $W_{(n)}$, is a matrix obtained by arranging the mode-$n$ fibers of $\mathcal{W}$ so that each of them is a column of $W_{(n)} \in \mathbb{R}^{p_n \times J_n}$, where $J_n := \prod_{k \neq n} p_k$. Note that the ordering of the columns is not important as long as it is used consistently.
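As a concrete illustration (not part of the paper's code), the mode-$n$ unfolding and its inverse can be sketched in NumPy; the column ordering induced by `np.reshape` is one valid, consistent choice:

```python
import numpy as np

def unfold(W, n):
    """Mode-n matricization: arrange the mode-n fibers of W as columns.

    Returns a matrix of shape (p_n, prod_{k != n} p_k). The column
    ordering follows np.reshape and may differ from other conventions,
    but any fixed ordering is valid as long as it is used consistently.
    """
    return np.moveaxis(W, n, 0).reshape(W.shape[n], -1)

def fold(M, n, shape):
    """Inverse of unfold: rebuild the tensor of the given shape."""
    full = [shape[n]] + [s for k, s in enumerate(shape) if k != n]
    return np.moveaxis(M.reshape(full), 0, n)

W = np.arange(24.0).reshape(2, 3, 4)
assert unfold(W, 1).shape == (3, 8)
assert np.array_equal(fold(unfold(W, 1), 1, W.shape), W)
```

The round trip `fold(unfold(W, n), n, W.shape)` recovers the tensor exactly, which is all that is required below.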
We are now ready to describe the learning problem. We choose a linear operator $\mathcal{I} : \mathbb{R}^{p_1 \times \cdots \times p_N} \to \mathbb{R}^m$, representing a set of linear measurements obtained from a target tensor $\mathcal{W}^0$ as $y = \mathcal{I}(\mathcal{W}^0) + \xi$, where $\xi$ is some disturbance noise. Tensor completion is an important example of this setting; in this case the operator $\mathcal{I}$ returns the known elements of the tensor. That is, we have $\mathcal{I}(\mathcal{W}^0) = (\mathcal{W}^0_{i_1(j),\dots,i_N(j)} : j \in [m])$, where, for every $j \in [m]$ and $n \in [N]$, the index $i_n(j)$ is a prescribed integer in the set $[p_n]$. Our aim is to recover the tensor $\mathcal{W}^0$ from the data $(\mathcal{I}, y)$. To this end, we solve the regularization problem
$$\min\big\{\|y - \mathcal{I}(\mathcal{W})\|_2^2 + \lambda R(\mathcal{W}) : \mathcal{W} \in \mathbb{R}^{p_1 \times \cdots \times p_N}\big\} \qquad (1)$$
where $\lambda$ is a positive parameter which may be chosen by cross validation. The role of the regularizer $R$ is to encourage solutions $\mathcal{W}$ which have a simple structure, in the sense that they involve a small number of "degrees of freedom". A natural choice is the average of the ranks of the tensor's matricizations. Specifically, we consider the combinatorial regularizer
$$R(\mathcal{W}) = \frac{1}{N}\sum_{n=1}^{N} \mathrm{rank}\big(W_{(n)}\big). \qquad (2)$$
Finding a convex relaxation of this regularizer has been the subject of recent works [9, 17, 23]. They all agree on using the sum of nuclear norms as a convex proxy for $R$. This is defined as the average of the trace norms of the matricizations of $\mathcal{W}$, that is,
$$\|\mathcal{W}\|_{\mathrm{tr}} = \frac{1}{N}\sum_{n=1}^{N} \big\|W_{(n)}\big\|_{\mathrm{tr}} \qquad (3)$$
where $\|W_{(n)}\|_{\mathrm{tr}}$ is the trace (or nuclear) norm of the matrix $W_{(n)}$, namely the $\ell_1$-norm of the vector of singular values of $W_{(n)}$ (see, e.g. [14]). Note that in the particular case of 2-order tensors, functions (2) and (3) coincide with the usual notions of rank and trace norm of a matrix, respectively.
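As a small illustrative sketch (ours, not the authors' implementation), the tensor trace norm (3) can be evaluated directly from the singular values of each unfolding:

```python
import numpy as np

def tensor_trace_norm(W):
    """Average of the trace (nuclear) norms of the N matricizations, eq. (3)."""
    N = W.ndim
    total = 0.0
    for n in range(N):
        W_n = np.moveaxis(W, n, 0).reshape(W.shape[n], -1)   # mode-n unfolding
        total += np.linalg.svd(W_n, compute_uv=False).sum()  # nuclear norm
    return total / N

# For a 2-order tensor (a matrix) this reduces to the usual trace norm,
# since both unfoldings share the same singular values.
A = np.random.default_rng(0).standard_normal((5, 7))
assert np.isclose(tensor_trace_norm(A),
                  np.linalg.svd(A, compute_uv=False).sum())
```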
A rationale behind the regularizer (3) is that the trace norm is the tightest convex lower bound to the rank of a matrix on the spectral unit ball, see [8, Thm. 1]. This lower bound is given by the convex envelope of the function
$$\phi(W) = \begin{cases} \mathrm{rank}(W), & \text{if } \|W\|_{\infty} \leq 1, \\ +\infty, & \text{otherwise,} \end{cases} \qquad (4)$$

$^1$For simplicity we assume that $p_n \geq 2$ for every $n \in [N]$; otherwise we simply reduce the order of the tensor without loss of information.
where $\|\cdot\|_{\infty}$ is the spectral norm, namely the largest singular value of $W$. The convex envelope can be derived by computing the double conjugate of $\phi$. This is defined as
$$\phi^{**}(W) = \sup\big\{\langle W, S\rangle - \phi^{*}(S) : S \in \mathbb{R}^{p_1 \times p_2}\big\} \qquad (5)$$
where $\phi^{*}$ is the conjugate of $\phi$, namely $\phi^{*}(S) = \sup\{\langle W, S\rangle - \phi(W) : W \in \mathbb{R}^{p_1 \times p_2}\}$.

Note that $\phi$ is a spectral function, that is, $\phi(W) = \gamma(\sigma(W))$, where $\gamma : \mathbb{R}_+^d \to \mathbb{R}$ is the associated symmetric gauge function. Using von Neumann's trace theorem (see e.g. [14]) it is easily seen that $\phi^{*}(S)$ is also a spectral function. That is, $\phi^{*}(S) = \gamma^{*}(\sigma(S))$, where
$$\gamma^{*}(\lambda) = \sup\big\{\langle \lambda, w\rangle - \gamma(w) : w \in \mathbb{R}_+^d\big\}, \quad \text{with } d := \min(p_1, p_2).$$
We refer to [8] for a detailed discussion of these ideas. We will use this equivalence between spectral and gauge functions repeatedly in the paper.
3 Alternative Convex Relaxation
In this section, we show that the tensor trace norm is not a tight convex relaxation of the tensor rank $R$ in equation (2). We then propose an alternative convex relaxation for this function.

Note that, due to the composite nature of the function $R$, computing its convex envelope is a challenging task and one needs to resort to approximations. In [22], the authors note that the tensor trace norm $\|\cdot\|_{\mathrm{tr}}$ in equation (3) is a convex lower bound to $R$ on the set
$$G_{\infty} := \big\{\mathcal{W} \in \mathbb{R}^{p_1 \times \cdots \times p_N} : \|W_{(n)}\|_{\infty} \leq 1,\ \forall n \in [N]\big\}.$$
The key insight behind this observation is summarized in Lemma 4, which we report in Appendix A. However, the authors of [22] leave open the question of whether the tensor trace norm is the convex envelope of $R$ on the set $G_{\infty}$. In the following, we prove that this question has a negative answer by showing that there exists a convex function $\Psi \neq \|\cdot\|_{\mathrm{tr}}$ which underestimates the function $R$ on $G_{\infty}$ and such that, for some tensor $\mathcal{W} \in G_{\infty}$, it holds that $\Psi(\mathcal{W}) > \|\mathcal{W}\|_{\mathrm{tr}}$.
To describe our observation we introduce the set
$$G_2 := \big\{\mathcal{W} \in \mathbb{R}^{p_1 \times \cdots \times p_N} : \|\mathcal{W}\|_2 \leq 1\big\}$$
where $\|\cdot\|_2$ is the Euclidean norm for tensors, that is,
$$\|\mathcal{W}\|_2^2 := \sum_{i_1=1}^{p_1} \cdots \sum_{i_N=1}^{p_N} \big(\mathcal{W}_{i_1,\dots,i_N}\big)^2.$$
We will choose
$$\Psi(\mathcal{W}) = \Omega_{\alpha}(\mathcal{W}) := \frac{1}{N}\sum_{n=1}^{N} \omega_{\alpha}^{**}\big(\sigma(W_{(n)})\big) \qquad (6)$$
where $\omega_{\alpha}^{**}$ is the convex envelope of the cardinality of a vector on the $\ell_2$-ball of radius $\alpha$, and we will choose $\alpha = \sqrt{p_{\min}}$. Note, by Lemma 4 stated in Appendix A, that for every $\alpha > 0$ the function $\Omega_{\alpha}$ is a convex lower bound of the function $R$ on the set $\alpha G_2$.

Below, for every vector $s \in \mathbb{R}^d$ we denote by $s^{\downarrow}$ the vector obtained by reordering the components of $s$ so that they are non-increasing in absolute value, that is, $|s^{\downarrow}_1| \geq \cdots \geq |s^{\downarrow}_d|$.
Lemma 1. Let $\omega_{\alpha}^{**}$ be the convex envelope of the cardinality on the $\ell_2$-ball of radius $\alpha$. Then, for every $x \in \mathbb{R}^d$ such that $\|x\|_2 = \alpha$, it holds that $\omega_{\alpha}^{**}(x) = \mathrm{card}(x)$.

This lemma is proved in Appendix B. The function $\omega_{\alpha}^{**}$ resembles the norm developed in [1], which corresponds to the convex envelope of the indicator function of the cardinality of a vector in the $\ell_2$ ball. The extension of its application to tensors is not straightforward, though, as it requires specifying beforehand the rank of each matricization.

The next lemma provides, together with Lemma 1, a sufficient condition for the existence of a tensor $\mathcal{W} \in G_{\infty}$ at which the regularizer in equation (6) is strictly larger than the tensor trace norm.
Lemma 2. If $N \geq 3$ and $p_1, \dots, p_N$ are not all equal to each other, then there exists $\mathcal{W} \in \mathbb{R}^{p_1 \times \cdots \times p_N}$ such that: (a) $\|\mathcal{W}\|_2 = \sqrt{p_{\min}}$, (b) $\mathcal{W} \in G_{\infty}$, (c) $\min_{n \in [N]} \mathrm{rank}(W_{(n)}) < \max_{n \in [N]} \mathrm{rank}(W_{(n)})$.
The proof of this lemma is presented in Appendix C. We are now ready to formulate the main result of this section.

Proposition 3. Let $p_1, \dots, p_N \in \mathbb{N}$, let $\|\cdot\|_{\mathrm{tr}}$ be the tensor trace norm in equation (3) and let $\Omega_{\alpha}$ be the function in equation (6) for $\alpha = \sqrt{p_{\min}}$. If $p_{\min} < p_{\max}$, then there are infinitely many tensors $\mathcal{W} \in G_{\infty}$ such that $\Omega_{\alpha}(\mathcal{W}) > \|\mathcal{W}\|_{\mathrm{tr}}$. Moreover, for every $\mathcal{W} \in G_2$, it holds that $\Omega_{1}(\mathcal{W}) \geq \|\mathcal{W}\|_{\mathrm{tr}}$.
Proof. By construction $\Omega_{\alpha}(\mathcal{W}) \leq R(\mathcal{W})$ for every $\mathcal{W} \in \alpha G_2$. Since $G_{\infty} \subseteq \alpha G_2$, the function $\Omega_{\alpha}$ is a convex lower bound for the tensor rank $R$ on the set $G_{\infty}$ as well. The first claim now follows from Lemmas 1 and 2. Indeed, all tensors obtained following the process described in the proof of Lemma 2 (in Appendix C) have the property that
$$\|\mathcal{W}\|_{\mathrm{tr}} = \frac{1}{N}\sum_{n=1}^{N}\big\|\sigma(W_{(n)})\big\|_1 = \frac{1}{N}\Big(p_{\min}(N-1) + \sqrt{p_{\min}^2 + p_{\min}}\Big) < \frac{1}{N}\big(p_{\min}(N-1) + p_{\min} + 1\big) = \Omega_{\alpha}(\mathcal{W}) = R(\mathcal{W}).$$
Furthermore, there are infinitely many such tensors which satisfy this claim (see Appendix C).

With respect to the second claim, given that $\omega_{1}^{**}$ is the convex envelope of the cardinality $\mathrm{card}$ on the Euclidean unit ball, it holds that $\omega_{1}^{**}(\lambda) \geq \|\lambda\|_1$ for every vector $\lambda$ such that $\|\lambda\|_2 \leq 1$. Consequently,
$$\Omega_{1}(\mathcal{W}) = \frac{1}{N}\sum_{n=1}^{N} \omega_{1}^{**}\big(\sigma(W_{(n)})\big) \geq \frac{1}{N}\sum_{n=1}^{N}\big\|\sigma(W_{(n)})\big\|_1 = \|\mathcal{W}\|_{\mathrm{tr}}.$$
The above result stems from the fact that the spectral norm is not an invariant property of the matricization of a tensor, whereas the Euclidean (Frobenius) norm is. This observation leads us to further study the function $\Omega_{\alpha}$.
4 Optimization Method
In this section, we explain how to solve the regularization problem associated with the regularizer
(6). For this purpose, we first recall the alternating direction method of multipliers (ADMM) [4],
which was conveniently applied to tensor trace norm regularization in [9, 22].
4.1 Alternating Direction Method of Multipliers (ADMM)
To explain ADMM we consider a more general problem comprising both tensor trace norm regularization and the regularizer we propose,
$$\min_{\mathcal{W}}\Big\{E(\mathcal{W}) + \frac{1}{\lambda}\sum_{n=1}^{N} \Phi\big(W_{(n)}\big)\Big\} \qquad (7)$$
where $E(\mathcal{W})$ is an error term such as $\|y - \mathcal{I}(\mathcal{W})\|_2^2$ and $\Phi$ is a convex spectral function. It is defined, for every matrix $A$, as
$$\Phi(A) = \gamma(\sigma(A))$$
where $\gamma$ is a gauge function, namely a function which is symmetric and invariant under permutations. In particular, if $\gamma$ is the $\ell_1$ norm then problem (7) corresponds to tensor trace norm regularization, whereas if $\gamma = \omega_{\alpha}^{**}$ it implements the proposed regularizer.

Problem (7) poses some difficulties because the terms under the summation are interdependent, due to the different matricizations of $\mathcal{W}$ having the same elements rearranged in a different way. In
order to overcome this difficulty, the authors of [9, 22] proposed to use ADMM as a natural way to decouple the regularization term appearing in problem (7). This strategy is based on the introduction of $N$ auxiliary tensors, $\mathcal{B}_1, \dots, \mathcal{B}_N \in \mathbb{R}^{p_1 \times \cdots \times p_N}$, so that problem (7) can be reformulated as$^2$
$$\min_{\mathcal{W},\mathcal{B}_1,\dots,\mathcal{B}_N}\Big\{E(\mathcal{W}) + \frac{1}{\lambda}\sum_{n=1}^{N} \Phi\big(B_{n(n)}\big) : \mathcal{B}_n = \mathcal{W},\ n \in [N]\Big\}. \qquad (8)$$
The corresponding augmented Lagrangian (see e.g. [4, 5]) is given by
$$\mathcal{L}(\mathcal{W}, \mathcal{B}, \mathcal{A}) = E(\mathcal{W}) + \sum_{n=1}^{N}\Big(\frac{1}{\lambda}\Phi\big(B_{n(n)}\big) - \langle \mathcal{A}_n, \mathcal{W} - \mathcal{B}_n\rangle + \frac{\beta}{2}\|\mathcal{W} - \mathcal{B}_n\|_2^2\Big), \qquad (9)$$
where $\langle\cdot,\cdot\rangle$ denotes the scalar product between tensors, $\beta$ is a positive parameter, and $\mathcal{A}_1, \dots, \mathcal{A}_N \in \mathbb{R}^{p_1 \times \cdots \times p_N}$ are the Lagrange multipliers associated with the constraints in problem (8).
ADMM is based on the following iterative scheme:
$$\mathcal{W}^{[i+1]} \in \operatorname*{argmin}_{\mathcal{W}} \mathcal{L}\big(\mathcal{W}, \mathcal{B}^{[i]}, \mathcal{A}^{[i]}\big) \qquad (10)$$
$$\mathcal{B}_n^{[i+1]} \in \operatorname*{argmin}_{\mathcal{B}_n} \mathcal{L}\big(\mathcal{W}^{[i+1]}, \mathcal{B}, \mathcal{A}^{[i]}\big) \qquad (11)$$
$$\mathcal{A}_n^{[i+1]} = \mathcal{A}_n^{[i]} - \beta\big(\mathcal{W}^{[i+1]} - \mathcal{B}_n^{[i+1]}\big). \qquad (12)$$
Step (12) is straightforward, whereas step (10) is described in [9]. Here we focus on step (11), since this is the only step which involves the function $\Phi$. We restate it with more explanatory notation as
$$\operatorname*{argmin}_{B_{n(n)}}\ \frac{1}{\lambda}\Phi\big(B_{n(n)}\big) - \big\langle A_{n(n)},\, W_{(n)} - B_{n(n)}\big\rangle + \frac{\beta}{2}\big\|W_{(n)} - B_{n(n)}\big\|_2^2.$$
By completing the square in the right hand side, the solution of this problem is given by
$$\hat{B}_{n(n)} = \operatorname{prox}_{\frac{1}{\lambda\beta}\Phi}(X) := \operatorname*{argmin}_{B_{n(n)}}\ \frac{1}{\lambda\beta}\Phi\big(B_{n(n)}\big) + \frac{1}{2}\big\|B_{n(n)} - X\big\|_2^2,$$
where $X = W_{(n)} - \frac{1}{\beta}A_{n(n)}$. By using properties of proximity operators (see e.g. [2, Prop. 3.1]) we know that if $\gamma$ is a gauge function then
$$\operatorname{prox}_{\frac{1}{\lambda\beta}\Phi}(X) = U_X \operatorname{diag}\big(\operatorname{prox}_{\frac{1}{\lambda\beta}\gamma}(\sigma(X))\big) V_X^{\top},$$
where $U_X$ and $V_X$ are the orthogonal matrices formed by the left and right singular vectors of $X$, respectively. If we choose $\gamma = \|\cdot\|_1$, the associated proximity operator is the well-known soft thresholding operator, that is, $\operatorname{prox}_{\frac{1}{\lambda\beta}\|\cdot\|_1}(\sigma) = v$, where the vector $v$ has components
$$v_i = \operatorname{sign}(\sigma_i)\Big(|\sigma_i| - \frac{1}{\lambda\beta}\Big)_+.$$
On the other hand, if we choose $\gamma = \omega_{\alpha}^{**}$, we need to compute $\operatorname{prox}_{\frac{1}{\lambda\beta}\omega_{\alpha}^{**}}$. In the next section, we describe a method to accomplish this task.
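To make the recursion concrete, here is a minimal toy sketch (our illustration, not the authors' released code) of ADMM for tensor completion with the trace-norm choice of gauge, where the matrix update reduces to singular value soft-thresholding of each unfolding. The W-update shown simply averages the auxiliary variables and re-imposes the observed entries, a common simplification of the exact least-squares step; `tau` and `beta` are illustrative constants:

```python
import numpy as np

def unfold(T, n):
    return np.moveaxis(T, n, 0).reshape(T.shape[n], -1)

def fold(M, n, shape):
    full = [shape[n]] + [s for k, s in enumerate(shape) if k != n]
    return np.moveaxis(M.reshape(full), 0, n)

def svt(X, tau):
    """Singular value soft-thresholding: prox of tau * (trace norm)."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def admm_complete(Y, mask, tau=1.0, beta=1.0, iters=100):
    """Toy ADMM loop for tensor completion with the overlapped trace norm.

    Y: observed tensor (zeros where unknown); mask: boolean array of knowns.
    """
    shape, N = Y.shape, Y.ndim
    W = Y.copy()
    B = [W.copy() for _ in range(N)]
    A = [np.zeros(shape) for _ in range(N)]
    for _ in range(iters):
        # simplified W-update: average the B_n (plus scaled multipliers),
        # then re-impose the observed entries
        W = sum(B[n] + A[n] / beta for n in range(N)) / N
        W[mask] = Y[mask]
        for n in range(N):
            X = W - A[n] / beta
            B[n] = fold(svt(unfold(X, n), tau / beta), n, shape)
            A[n] = A[n] - beta * (W - B[n])
    return W
```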
4.2 Computation of the Proximity Operator
To compute the proximity operator of the function $\frac{1}{\lambda\beta}\omega_{\alpha}^{**}$ we will use several properties of proximity calculus; write $\rho := \lambda\beta$ for brevity. First, we use the formula (see e.g. [7]) $\operatorname{prox}_{g^*}(x) = x - \operatorname{prox}_{g}(x)$ for $g^* = \frac{1}{\rho}\omega_{\alpha}^{**}$. Next, we use a property of conjugate functions from [21, 13], which states that $g(x) = \frac{1}{\rho}\omega_{\alpha}^{*}(\rho x)$. Finally, by the scaling property of proximity operators [7], we have that $\operatorname{prox}_{g}(x) = \frac{1}{\rho}\operatorname{prox}_{\rho\,\omega_{\alpha}^{*}}(\rho x)$.
$^2$The somewhat cumbersome notation $B_{n(n)}$ denotes the mode-$n$ matricization of the tensor $\mathcal{B}_n$, that is, $B_{n(n)} = (\mathcal{B}_n)_{(n)}$.
Algorithm 1 Computation of $\operatorname{prox}_{\mu\omega_{\alpha}^{*}}(y)$
Input: $y \in \mathbb{R}^d$; $\alpha, \mu > 0$.
Output: $\hat{w} \in \mathbb{R}^d$.
Initialization: initial step $\theta_0 = \frac{1}{2}$; initial and best found solution $w^0 = \hat{w} = P_S(y) \in \mathbb{R}^d$.
for $t = 1, 2, \dots$ do
  $\theta \leftarrow \frac{\theta_0}{\sqrt{t}}$
  Find $k$ such that $k \in \operatorname*{argmax}_{0 \leq r \leq d}\big\{\alpha\|w^{t-1}_{1:r}\|_2 - r\big\}$
  $\tilde{w}_{1:k} \leftarrow w^{t-1}_{1:k} - \theta\Big(\big(1 + \frac{\mu\alpha}{\|w^{t-1}_{1:k}\|_2}\big)w^{t-1}_{1:k} - y_{1:k}\Big)$
  $\tilde{w}_{k+1:d} \leftarrow w^{t-1}_{k+1:d} - \theta\big(w^{t-1}_{k+1:d} - y_{k+1:d}\big)$
  $w^t \leftarrow P_S(\tilde{w})$
  If $h(w^t) < h(\hat{w})$ then $\hat{w} \leftarrow w^t$
  If "Stopping Condition = True" then terminate.
end for
It remains to compute the proximity operator of a multiple of the function $\omega_{\alpha}^{*}$ in equation (13); that is, for any $\mu > 0$ and $y \in S$, we wish to compute
$$\operatorname{prox}_{\mu\omega_{\alpha}^{*}}(y) = \operatorname*{argmin}_{w}\{h(w) : w \in S\}$$
where we have defined $S := \{w \in \mathbb{R}^d : w_1 \geq \cdots \geq w_d \geq 0\}$ and
$$h(w) = \frac{1}{2}\|w - y\|_2^2 + \mu \max_{r=0}^{d}\big\{\alpha\|w_{1:r}\|_2 - r\big\}.$$
In order to solve this problem we employ the projected subgradient method, see e.g. [6]. It consists of two steps at each iteration: first, it advances along a negative subgradient of the current solution; second, it projects the resulting point onto the feasible set $S$. In fact, according to [6], it is sufficient to compute an approximate projection, a step which we describe in Appendix D. To compute a subgradient of $h$ at $w$, we first find any integer $k$ such that $k \in \operatorname*{argmax}_{0 \leq r \leq d}\{\alpha\|w_{1:r}\|_2 - r\}$. Then, we calculate a subgradient $g$ of the function $h$ at $w$ by the formula
$$g_i = \begin{cases} \Big(1 + \frac{\mu\alpha}{\|w_{1:k}\|_2}\Big)w_i - y_i, & \text{if } i \leq k, \\ w_i - y_i, & \text{otherwise.} \end{cases}$$
Now we have all the ingredients to apply the projected subgradient method, which is summarized in Algorithm 1. In our implementation we stop the algorithm when $\hat{w}$ has not been updated for more than $10^2$ iterations.
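The projected subgradient scheme can be sketched as follows (an illustration under our own choices: an exact projection onto S via pool-adjacent-violators, a step size proportional to 1/sqrt(t), and a fixed iteration budget; the paper itself only requires the approximate projection of its Appendix D):

```python
import numpy as np

def project_S(y):
    """Euclidean projection onto S = {w : w_1 >= ... >= w_d >= 0}.

    Decreasing isotonic regression via pool-adjacent-violators,
    followed by clipping at zero.
    """
    vals, wts = [], []
    for v in y:
        vals.append(float(v)); wts.append(1)
        while len(vals) > 1 and vals[-2] < vals[-1]:  # violates decreasing order
            v2, c2 = vals.pop(), wts.pop()
            v1, c1 = vals.pop(), wts.pop()
            vals.append((v1 * c1 + v2 * c2) / (c1 + c2)); wts.append(c1 + c2)
    out = np.concatenate([np.full(c, v) for v, c in zip(vals, wts)])
    return np.maximum(out, 0.0)

def prox_omega_star(y, alpha, mu, iters=1000, step0=0.5):
    """Projected subgradient sketch for prox_{mu * omega_alpha^*}(y),
    with h(w) = 0.5*||w - y||^2 + mu * max_r (alpha*||w_{1:r}|| - r)."""
    d = len(y)
    def h(w):
        best = max(alpha * np.linalg.norm(w[:r]) - r for r in range(d + 1))
        return 0.5 * np.sum((w - y) ** 2) + mu * best
    w = project_S(y)
    w_best = w.copy()
    for t in range(1, iters + 1):
        k = int(np.argmax([alpha * np.linalg.norm(w[:r]) - r
                           for r in range(d + 1)]))
        g = w - y                       # gradient of the quadratic term
        if k > 0:
            nrm = np.linalg.norm(w[:k])
            if nrm > 0:                 # subgradient of the max term
                g[:k] += mu * alpha * w[:k] / nrm
        w = project_S(w - (step0 / np.sqrt(t)) * g)
        if h(w) < h(w_best):            # keep the best iterate found
            w_best = w.copy()
    return w_best
```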
5 Experiments
We have conducted a set of experiments to assess whether there is any advantage in using the proposed regularizer over the tensor trace norm for tensor completion.$^3$ First, we designed a synthetic experiment to evaluate the performance of both approaches under controlled conditions. Then, we tried both methods on two real tensor completion problems. In all cases, we used a validation procedure to tune the hyper-parameter $\lambda$, present in both approaches, among the values $\{10^j : j = -7, -6, \dots, 1\}$. In our proposed approach there is one further hyper-parameter, $\alpha$, to be specified. It should take the value of the Euclidean norm of the underlying tensor. Since this is unknown, we propose to use the estimate
v
!N
#
u
u
Y
2
?
? = tkwk + (mean(w)2 + var(w))
p ?m ,
i
2
i=1
where m is the number of known entries and w ? Rm contains their values. This estimator assumes
that each value in the tensor is sampled from N (mean(w), var(w)), where mean(w) and var(w)
are the average and the variance of the elements in w.
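The estimate is straightforward to compute; a minimal sketch (function and variable names are illustrative) assumes the observed values are collected in a flat array `w` and the full tensor dimensions in `dims`:

```python
import numpy as np

def estimate_norm(w, dims):
    # Estimate of the Euclidean norm of the underlying tensor: the m known
    # entries contribute their squared values; each of the prod(dims) - m
    # unknown entries is modeled as N(mean(w), var(w)), so its squared value
    # is mean(w)**2 + var(w) in expectation.
    w = np.asarray(w, dtype=float)
    m = w.size
    total = int(np.prod(dims))
    return np.sqrt(np.sum(w ** 2) + (w.mean() ** 2 + w.var()) * (total - m))
```

If all entries were observed, the estimate reduces exactly to the Euclidean norm of the tensor.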
³ The code is available at http://romera-paredes.com/code/tensor-completion
Figure 1: Synthetic dataset: (Left) Root Mean Squared Error (RMSE) of tensor trace norm and the
proposed regularizer. (Right) Running time for different sizes of the tensor.
5.1 Synthetic Dataset
We have generated a 3-order tensor $\mathcal{W}^0 \in \mathbb{R}^{40 \times 20 \times 10}$ by the following procedure. First we
generated a tensor $\mathcal{W}$ with ranks $(12, 6, 3)$ using the Tucker decomposition (see e.g. [16])
$$\mathcal{W}_{i_1,i_2,i_3} = \sum_{j_1=1}^{12} \sum_{j_2=1}^{6} \sum_{j_3=1}^{3} \mathcal{C}_{j_1,j_2,j_3}\, M^{(1)}_{i_1,j_1} M^{(2)}_{i_2,j_2} M^{(3)}_{i_3,j_3}, \quad (i_1,i_2,i_3) \in [40] \times [20] \times [10],$$
where each entry of the Tucker decomposition components is sampled from the standard Gaussian
distribution $\mathcal{N}(0,1)$. We then created the ground truth tensor $\mathcal{W}^0$ by the equation
$$\mathcal{W}^0_{i_1,i_2,i_3} = \frac{\mathcal{W}_{i_1,i_2,i_3} - \mathrm{mean}(\mathcal{W})}{\sqrt{N}\,\mathrm{std}(\mathcal{W})} + \epsilon_{i_1,i_2,i_3},$$
where $\mathrm{mean}(\mathcal{W})$ and $\mathrm{std}(\mathcal{W})$ are the mean and standard deviation of the elements of $\mathcal{W}$, $N$ is
the total number of elements of $\mathcal{W}$, and the $\epsilon_{i_1,i_2,i_3}$ are i.i.d. Gaussian random variables with zero
mean and variance $\sigma^2$. We have randomly sampled 10% of the elements of the tensor to compose
the training set, 45% for the validation set, and the remaining 45% for the test set. After repeating
this process 20 times, we report the average results in Figure 1 (Left). Having conducted a paired
t-test for each value of $\sigma^2$, we conclude that the visible differences in the performances are highly
significant, always obtaining p-values less than 0.01 for $\sigma^2 \leq 10^{-2}$.
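The generation procedure above can be reproduced with a short NumPy sketch; the random seed and the noise level are illustrative choices, not values taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
dims, ranks = (40, 20, 10), (12, 6, 3)

# Tucker decomposition: core C and factor matrices M^(n) with N(0,1) entries.
C = rng.standard_normal(ranks)
M = [rng.standard_normal((d, r)) for d, r in zip(dims, ranks)]
W = np.einsum('abc,ia,jb,kc->ijk', C, M[0], M[1], M[2])

# Normalize and add i.i.d. Gaussian noise with variance sigma2.
sigma2 = 1e-3
N = W.size
W0 = (W - W.mean()) / (np.sqrt(N) * W.std()) + rng.normal(0.0, np.sqrt(sigma2), dims)

# Split the entries 10% / 45% / 45% into train / validation / test.
perm = rng.permutation(N)
n_train, n_val = int(0.10 * N), int(0.45 * N)
train, val, test = (perm[:n_train],
                    perm[n_train:n_train + n_val],
                    perm[n_train + n_val:])
```

By construction the mode-1 unfolding of `W` has rank at most 12, and generically exactly 12, matching the prescribed multilinear ranks.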
Furthermore, we have conducted an experiment to test the running time of both approaches. We
have generated tensors $\mathcal{W}^0 \in \mathbb{R}^{p \times p \times p}$ for different values of $p \in \{20, 40, \dots, 200\}$, following
the same procedure as outlined above. The results are reported in Figure 1 (Right). For low values
of $p$, the ratio between the running time of our approach and that of the trace norm regularization
method is quite high. For example, at the lowest value tried in this experiment, $p = 20$, this
ratio is 22.661. However, as the volume of the tensor increases, the ratio quickly decreases; for
$p = 200$, the running time ratio is 1.9113. These outcomes are expected because when
$p$ is low, the most demanding routine in our method is the one described in Algorithm 1, where
each iteration is of order $O(p)$ and $O(p^2)$ in the best and worst case, respectively. However, as
$p$ increases, the singular value decomposition routine, which is common to both methods, becomes
the most demanding because it has time complexity $O(p^3)$ [10]. Therefore, we can conclude
that even though our approach is slower than the trace norm based method, this difference becomes
much smaller as the size of the tensor increases.
5.2 School Dataset
The first real dataset we have tried is the Inner London Education Authority (ILEA) dataset. It is
composed of examination marks, ranging from 0 to 70, of 15362 students who are described by a set
of attributes such as school and ethnic group. Most of these attributes are categorical, so we can
think of exam mark prediction as a tensor completion problem where each of the modes corresponds
to a categorical attribute. In particular, we have used the following attributes: school (139), gender
(2), VR-band (3), ethnic (11), and year (3), leading to a 5-order tensor $\mathcal{W} \in \mathbb{R}^{139 \times 2 \times 3 \times 11 \times 3}$.
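Concretely, each student record maps to one cell of the 5-order tensor, indexed by its categorical attributes. A minimal sketch with hypothetical records (the index values and marks below are invented for illustration) is:

```python
import numpy as np

dims = (139, 2, 3, 11, 3)          # school, gender, VR-band, ethnic, year
W = np.full(dims, np.nan)          # NaN marks unobserved exam marks

# Hypothetical records: (school, gender, vr_band, ethnic, year, mark)
records = [(12, 0, 1, 3, 2, 55.0), (7, 1, 0, 5, 1, 41.0)]
for school, gender, vr, ethnic, year, mark in records:
    W[school, gender, vr, ethnic, year] = mark

mask = ~np.isnan(W)                # observed-entry mask used by the solver
```

In the real dataset several students may share the same attribute combination; such entries would need to be aggregated (e.g. averaged), which this sketch omits.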
Figure 2: Root Mean Squared Error (RMSE) of tensor trace norm and the proposed regularizer for
the ILEA dataset (Left) and the Ocean video (Right), as a function of the training set size m.
We have selected randomly 5% of the instances to make the test set and another 5% of the instances
for the validation set. From the remaining instances, we have randomly chosen m of them for several
values of m. This procedure has been repeated 20 times and the average performance is presented
in Figure 2 (Left). There is a distinguishable improvement of our approach with respect to tensor
trace norm regularization for values of m > 7000. To check whether this gap is significant, we have
conducted a set of paired t-tests in this regime. In all these cases we obtained a p-value below 0.01.
5.3 Video Completion
In the second real-data experiment we have performed a video completion test. Any video can be
treated as a 4-order tensor: "width" × "height" × "RGB" × "video length", so we can use tensor
completion algorithms to rebuild a video from a few inputs, a procedure that can be useful for
compression purposes. In our case, we have used the Ocean video, available at [17]. This video
sequence can be treated as a tensor $\mathcal{W} \in \mathbb{R}^{160 \times 112 \times 3 \times 32}$. We have randomly sampled $m$ tensor
elements as training data, 5% of them as validation data, and the remaining ones composed the test
set. After repeating this procedure 10 times, we present the average results in Figure 2 (Right). The
proposed approach is noticeably better than the tensor trace norm in this experiment. This apparent
outcome is strongly supported by the paired t-tests which we ran for each value of $m$, always obtaining
p-values below 0.01; for the cases $m > 5 \times 10^4$, we obtained p-values below $10^{-6}$.
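The evaluation protocol for this experiment can be sketched as follows: draw $m$ training entries and a 5% validation set at random, hold out the rest, and score predictions by RMSE on the held-out entries. The seed and $m$ below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
shape = (160, 112, 3, 32)          # width x height x RGB x video length
n = int(np.prod(shape))
m = 50_000                         # number of observed training entries

perm = rng.permutation(n)
n_val = int(0.05 * n)
train_idx, val_idx, test_idx = perm[:m], perm[m:m + n_val], perm[m + n_val:]

def rmse(pred, truth, idx):
    # Root mean squared error restricted to the entries at flat indices idx.
    p, t = pred.ravel()[idx], truth.ravel()[idx]
    return float(np.sqrt(np.mean((p - t) ** 2)))
```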
6 Conclusion
In this paper, we proposed a convex relaxation for the average of the ranks of the matricizations of
a tensor. We compared this relaxation to a commonly used convex relaxation in the context of tensor
completion, which is based on the trace norm. We proved that this second relaxation is not tight and
argued that the proposed convex regularizer may be advantageous. Our numerical experience indicates
that our method consistently improves in terms of estimation error over tensor trace norm regularization,
while being computationally comparable on the range of problems we considered. In the future it would
be interesting to study methods to speed up the computation of the proximity operator of our regularizer
and to investigate its utility in tensor learning problems beyond tensor completion, such as multilinear
multitask learning [20].
Acknowledgements
We wish to thank Andreas Argyriou, Raphael Hauser, Charles Micchelli and Marco Signoretto for
useful comments. A valuable contribution was made by one of the anonymous referees. Part of this
work was supported by EPSRC Grant EP/H017178/1, EP/H027203/1 and Royal Society International Joint Project 2012/R2.
References
[1] A. Argyriou, R. Foygel and N. Srebro. Sparse prediction with the k-support norm. Advances in Neural Information Processing Systems 25, pages 1466–1474, 2012.
[2] A. Argyriou, C.A. Micchelli, M. Pontil, L. Shen and Y. Xu. Efficient first order methods for linear composite regularizers. arXiv:1104.1436, 2011.
[3] R. Bhatia. Matrix Analysis. Springer Verlag, 1997.
[4] D.P. Bertsekas and J.N. Tsitsiklis. Parallel and Distributed Computation: Numerical Methods. Prentice-Hall, 1989.
[5] S. Boyd, N. Parikh, E. Chu, B. Peleato, and J. Eckstein. Distributed optimization and statistical learning via the alternating direction method of multipliers. Foundations and Trends in Machine Learning, 3(1):1–122, 2011.
[6] S. Boyd, L. Xiao, and A. Mutapcic. Subgradient methods. Stanford University, 2003.
[7] P. L. Combettes and J.-C. Pesquet. Proximal splitting methods in signal processing. In Fixed-Point Algorithms for Inverse Problems in Science and Engineering (H. H. Bauschke et al., Eds), pages 185–212, Springer, 2011.
[8] M. Fazel, H. Hindi, and S. Boyd. A rank minimization heuristic with application to minimum order system approximation. Proc. American Control Conference, Vol. 6, pages 4734–4739, 2001.
[9] S. Gandy, B. Recht, and I. Yamada. Tensor completion and low-n-rank tensor recovery via convex optimization. Inverse Problems, 27(2), 2011.
[10] G. H. Golub and C. F. Van Loan. Matrix Computations. 3rd Edition. Johns Hopkins University Press, 1996.
[11] Z. Harchaoui, M. Douze, M. Paulin, M. Dudik, and J. Malick. Large-scale image classification with trace-norm regularization. IEEE Conference on Computer Vision & Pattern Recognition (CVPR), pages 3386–3393, 2012.
[12] J.-B. Hiriart-Urruty and C. Lemaréchal. Convex Analysis and Minimization Algorithms, Part I. Springer, 1996.
[13] J.-B. Hiriart-Urruty and C. Lemaréchal. Convex Analysis and Minimization Algorithms, Part II. Springer, 1993.
[14] R.A. Horn and C.R. Johnson. Topics in Matrix Analysis. Cambridge University Press, 2005.
[15] A. Karatzoglou, X. Amatriain, L. Baltrunas, and N. Oliver. Multiverse recommendation: n-dimensional tensor factorization for context-aware collaborative filtering. Proc. 4th ACM Conference on Recommender Systems, pages 79–86, 2010.
[16] T.G. Kolda and B.W. Bader. Tensor decompositions and applications. SIAM Review, 51(3):455–500, 2009.
[17] J. Liu, P. Musialski, P. Wonka, and J. Ye. Tensor completion for estimating missing values in visual data. Proc. 12th International Conference on Computer Vision (ICCV), pages 2114–2121, 2009.
[18] Y. Nesterov. Gradient methods for minimizing composite objective functions. ECORE Discussion Paper, 2007/96, 2007.
[19] B. Recht. A simpler approach to matrix completion. Journal of Machine Learning Research, 12:3413–3430, 2009.
[20] B. Romera-Paredes, H. Aung, N. Bianchi-Berthouze and M. Pontil. Multilinear multitask learning. Proc. 30th International Conference on Machine Learning (ICML), pages 1444–1452, 2013.
[21] N. Z. Shor. Minimization Methods for Non-differentiable Functions. Springer, 1985.
[22] M. Signoretto, Q. Tran Dinh, L. De Lathauwer, and J.A.K. Suykens. Learning with tensors: a framework based on convex optimization and spectral regularization. Machine Learning, to appear.
[23] M. Signoretto, R. Van de Plas, B. De Moor, and J.A.K. Suykens. Tensor versus matrix completion: a comparison with application to spectral data. IEEE Signal Processing Letters, 18(7):403–406, 2011.
[24] N. Srebro, J. Rennie and T. Jaakkola. Maximum margin matrix factorization. Advances in Neural Information Processing Systems (NIPS) 17, pages 1329–1336, 2005.
[25] R. Tomioka, K. Hayashi, and H. Kashima. Estimation of low-rank tensors via convex optimization. arXiv:1010.0789, 2010.
[26] R. Tomioka and T. Suzuki. Convex tensor decomposition via structured Schatten norm regularization. arXiv:1303.6370, 2013.
[27] R. Tomioka, T. Suzuki, K. Hayashi, and H. Kashima. Statistical performance of convex tensor decomposition. Advances in Neural Information Processing Systems (NIPS) 24, pages 972–980, 2013.
Latent Maximum Margin Clustering
Guang-Tong Zhou, Tian Lan, Arash Vahdat, and Greg Mori
School of Computing Science
Simon Fraser University
{gza11,tla58,avahdat,mori}@cs.sfu.ca
Abstract
We present a maximum margin framework that clusters data using latent variables. Using latent representations enables our framework to model unobserved
information embedded in the data. We implement our idea by large margin learning, and develop an alternating descent algorithm to effectively solve the resultant
non-convex optimization problem. We instantiate our latent maximum margin
clustering framework with tag-based video clustering tasks, where each video is
represented by a latent tag model describing the presence or absence of video tags.
Experimental results obtained on three standard datasets show that the proposed
method outperforms non-latent maximum margin clustering as well as conventional clustering approaches.
1 Introduction
Clustering is a major task in machine learning and has been extensively studied over decades of
research [11]. Given a set of observations, clustering aims to group data instances of similar structures or patterns together. Popular clustering approaches include the k-means algorithm [7], mixture
models [22], normalized cuts [27], and spectral clustering [18]. Recent progress has been made
using maximum margin clustering (MMC) [32], which extends the supervised large margin theory
(e.g. SVM) to the unsupervised scenario. MMC performs clustering by simultaneously optimizing cluster-specific models and instance-specific labeling assignments, and often generates better
performance than conventional methods [33, 29, 37, 38, 16, 6].
Modeling data with latent variables is common in many applications. Latent variables are often
defined to have intuitive meaning, and are used to capture unobserved semantics in the data. As
compared with ordinary linear models, latent variable models feature the ability to exploit a richer
representation of the space of instances. Thus, they often achieve superior performance in practice.
In computer vision, this is best exemplified by the success of deformable part models (DPMs) [5]
for object detection. DPMs enhance the representation of an object class by capturing viewpoint and
pose variations. They utilize a root template describing the entire object appearance and several part
templates. Latent variables are used to capture deformations and appearance variations of the root
template and parts. DPMs perform object detection via search for the best locations of the root and
part templates.
Latent variable models are often coupled with supervised learning to learn models incorporating the
unobserved variables. For example, DPMs are learned in a latent SVM framework [5] for object
detection; similar models have been shown to improve human action recognition [31]. A host of
other applications of latent SVMs have obtained state-of-the-art performance in computer vision.
Motivated by their success in supervised learning, we believe latent variable models can also help
in unsupervised clustering: data instances with similar latent representations should be grouped
together in one cluster.
As the latent variables are unobserved in the original data, we need a learning framework to handle
this latent knowledge. To implement this idea, we develop a novel clustering algorithm based on
MMC that incorporates latent variables; we call this latent maximum margin clustering (LMMC).
The LMMC algorithm results in a non-convex optimization problem, for which we introduce an
iterative alternating descent algorithm. Each iteration involves three steps: inferring latent variables
for each sample point, optimizing cluster assignments, and updating cluster model parameters.
To evaluate the efficacy of this clustering algorithm, we instantiate LMMC for tag-based video
clustering, where each video is modeled with latent variables controlling the presence or absence
of a set of descriptive tags. We conduct experiments on three standard datasets: TRECVID MED
11 [19], KTH Actions [26] and UCF Sports [23], and show that LMMC outperforms non-latent
MMC and conventional clustering methods.
The rest of this paper is organized as follows. Section 2 reviews related work. Section 3 formulates
the LMMC framework in detail. We describe tag-based video clustering in Section 4, followed by
experimental results reported in Section 5. Finally, Section 6 concludes this paper.
2 Related Work
Latent variable models. There has been much work in recent years using latent variable models. The definition of latent variables is usually task-dependent. Here we focus on the learning
part only. Andrews et al. [1] propose multiple-instance SVM to learn latent variables in positive
bags. Felzenszwalb et al. [5] formulate latent SVM by extending binary linear SVM with latent
variables. Yu and Joachims [36] handle structural outputs with latent structural SVM. This model
is also known as maximum margin hidden conditional random fields (MMHCRF) [31]. Kumar et
al. [14] propose self-paced learning, an optimization strategy that focuses on simple models first.
Yang et al. [35] kernelize latent SVM for better performance. All of this work demonstrates the
power of max-margin latent variable models for supervised learning; our framework conducts unsupervised clustering while modeling data with latent variables.
Maximum margin clustering. MMC was first proposed by Xu et al. [32] to extend supervised large
margin methods to unsupervised clustering. Different from the supervised case, where the optimization is convex, MMC results in non-convex problems. To solve it, Xu et al. [32] and Valizadegan
and Rong [29] reformulate the original problem as a semi-definite programming (SDP) problem.
Zhang et al. [37] employ alternating optimization ? finding labels and optimizing a support vector
regression (SVR). Li et al. [16] iteratively generate the most violated labels, and combine them via
multiple kernel learning. Note that the above methods can only solve binary-cluster clustering problems. To handle the multi-cluster case, Xu and Schuurmans [33] extends the SDP method in [32].
Zhao et al. [38] propose a cutting-plane method which uses the constrained convex-concave procedure (CCCP) to relax the non-convex constraint. Gopalan and Sankaranarayanan [6] examine data
projections to identify the maximum margin. Our framework deals with multi-cluster clustering,
and we model data instances with latent variables to exploit rich representations. It is also worth
mentioning that MMC leads naturally to the semi-supervised SVM framework [12] by assuming a
training set of labeled instances [32, 33]. Using the same idea, we could extend LMMC to semisupervised learning.
MMC has also shown its success in various computer vision applications. For example, Zhang
et al. [37] conduct MMC based image segmentation. Farhadi and Tabrizi [4] find different view
points of human activities via MMC. Wang and Cao [30] incorporate MMC to discover geographical
clusters of beach images. Hoai and Zisserman [8] form a joint framework of maximum margin
classification and clustering to improve sub-categorization.
Tag-based video analysis. Tagging videos with relevant concepts or attributes is common in video
analysis. Qi et al. [20] predict multiple correlative tags in a structural SVM framework. Yang and
Toderici [34] exploit latent sub-categories of tags in large-scale videos. The obtained tags can assist
in recognition. For example, Liu et al. [17] use semantic attributes (e.g. up-down motion, torso
motion, twist) to recognize human actions (e.g. walking, hand clapping). Izadinia and Shah [10]
model low-level event tags (e.g. people dancing, animal eating) as latent variables to recognize
complex video events (e.g. wedding ceremony, grooming animal).
Instead of supervised recognition of tags or video categories, we focus on unsupervised tag-based
video clustering. In fact, recently research collects various sources of tags for video clustering.
Schroff et al. [25] cluster videos by the capturing locations. Hsu et al. [9] build hierarchical clustering using user-contributed comments. Our paper uses latent tag models, and our LMMC framework
is general enough to handle various types of tags.
3 Latent Maximum Margin Clustering
As stated above, modeling data with latent variables can be beneficial in a variety of supervised
applications. For unsupervised clustering, we believe it also helps to group data instances based on
latent representations. To implement this idea, we propose the LMMC framework.
LMMC models instances with latent variables. When fitting an instance to a cluster, we find the
optimal values for latent variables and use the corresponding latent representation of the instance.
To best fit different clusters, an instance is allowed to flexibly take different latent variable values
when being compared to different clusters. This enables LMMC to explore a rich latent space when
forming clusters. Note that in conventional clustering algorithms, an instance is usually restricted to
have the same representation in all clusters. Furthermore, as the latent variables are unobserved in
the original data, we need a learning framework to exploit this latent knowledge. Here we develop a
large margin learning framework based on MMC, and learn a discriminative model for each cluster.
The resultant LMMC optimization is non-convex, and we design an alternating descent algorithm to
approximate the solution. Next we will briefly introduce MMC in Section 3.1, followed by detailed
descriptions of the LMMC framework and optimization respectively in Sections 3.2 and 3.3.
3.1 Maximum Margin Clustering
MMC [32, 37, 38] extends the maximum margin principle popularized by supervised SVMs to
unsupervised clustering, where the input instances are unlabeled. The idea of MMC is to find a
labeling so that the margin obtained would be maximal over all possible labelings. Suppose there
are $N$ instances $\{x_i\}_{i=1}^{N}$ to be clustered into $K$ clusters, MMC is formulated as follows [33, 38]:
$$\min_{W, Y, \xi \geq 0} \;\; \frac{1}{2}\sum_{t=1}^{K} \|w_t\|^2 + \frac{C}{K}\sum_{i=1}^{N}\sum_{r=1}^{K} \xi_{ir} \qquad (1)$$
$$\text{s.t.} \;\; \sum_{t=1}^{K} y_{it}\, w_t^\top x_i - w_r^\top x_i \geq 1 - y_{ir} - \xi_{ir}, \;\forall i, r; \qquad y_{it} \in \{0, 1\}, \;\forall i, t; \qquad \sum_{t=1}^{K} y_{it} = 1, \;\forall i,$$
where $W = \{w_t\}_{t=1}^{K}$ are the linear model parameters for each cluster, $\xi = \{\xi_{ir}\}$ ($i \in \{1, \dots, N\}$,
$r \in \{1, \dots, K\}$) are the slack variables to allow soft margin, and $C$ is a trade-off parameter. We
denote the labeling assignment by $Y = \{y_{it}\}$ ($i \in \{1, \dots, N\}$, $t \in \{1, \dots, K\}$), where $y_{it} = 1$
indicates that the instance $x_i$ is clustered into the $t$-th cluster, and $y_{it} = 0$ otherwise. By convention,
we require that each instance is assigned to one and only one cluster, i.e. the last constraint in Eq. 1.
Moreover, the first constraint in Eq. 1 enforces a large margin between clusters by constraining that
the score of xi to the assigned cluster is sufficiently larger than the score of xi to any other clusters.
Note that MMC is an unsupervised clustering method, which jointly estimates the model parameters
W and finds the best labeling Y.
Enforcing balanced clusters. Unfortunately, solving Eq. 1 could end up with trivial solutions
where all instances are simply assigned to the same cluster, and we obtain an unbounded margin. To
address this problem, we add cluster balance constraints to Eq. 1 that require Y to satisfy
$$L \leq \sum_{i=1}^{N} y_{it} \leq U, \;\forall t \qquad (2)$$
where L and U are the lower and upper bounds controlling the size of a cluster. Note that we explicitly enforce cluster balance using a hard constraint on the cluster sizes. This is different from [38], a
representative multi-cluster MMC method, where the cluster balance constraints are implicitly imposed
on the accumulated model scores (i.e. $\sum_{i=1}^{N} w_t^\top x_i$). We found empirically that explicitly
enforcing balanced cluster sizes led to better results.
3.2 Latent Maximum Margin Clustering
We now extend MMC to include latent variables. The latent variable of an instance is cluster-specific. Formally, we denote $h$ as the latent variable of an instance $x$ associated to a cluster parameterized by $w$. Following the latent SVM formulation [5, 36, 31], scoring $x$ w.r.t. $w$ is to solve an
inference problem of the form:
$$f_w(x) = \max_{h} \; w^\top \Phi(x, h) \qquad (3)$$
where $\Phi(x, h)$ is the feature vector defined for the pair $(x, h)$. To simplify the notation, we
assume the latent variable h takes its value from a discrete set of labels. However, our formulation
can be easily generalized to handle more complex latent variables (e.g. graph structures [36, 31]).
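For a discrete latent space, the maximization in Eq. 3 reduces to enumerating the candidate feature vectors; a minimal sketch (function and variable names are illustrative, not from the paper) is:

```python
import numpy as np

def latent_score(w, phis):
    # f_w(x) = max_h w^T Phi(x, h): phis holds one feature vector Phi(x, h)
    # per candidate latent value h; return the best score and the argmax index.
    vals = [float(w @ phi) for phi in phis]
    h_star = int(np.argmax(vals))
    return vals[h_star], h_star
```

For structured latent variables (e.g. graphs), this exhaustive enumeration would be replaced by a task-specific inference routine.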
To incorporate the latent variable models into clustering, we replace the linear model $w^\top x$ in Eq. 1 by the latent variable model $f_w(x)$. We call the resultant framework latent maximum margin
clustering (LMMC). LMMC finds clusters via the following optimization:
$$\min_{W, Y, \xi \geq 0} \;\; \frac{1}{2}\sum_{t=1}^{K} \|w_t\|^2 + \frac{C}{K}\sum_{i=1}^{N}\sum_{r=1}^{K} \xi_{ir} \qquad (4)$$
$$\text{s.t.} \;\; \sum_{t=1}^{K} y_{it}\, f_{w_t}(x_i) - f_{w_r}(x_i) \geq 1 - y_{ir} - \xi_{ir}, \;\forall i, r; \quad y_{it} \in \{0, 1\}, \;\forall i, t; \quad \sum_{t=1}^{K} y_{it} = 1, \;\forall i; \quad L \leq \sum_{i=1}^{N} y_{it} \leq U, \;\forall t.$$
We adopt the notation Y from the MMC formulation to denote the labeling assignment. Similar to
MMC, the first constraint in Eq. 4 enforces the large margin criterion where the score of fitting xi
to the assigned cluster is marginally larger than the score of fitting xi to any other clusters. Cluster
balance is enforced by the last constraint in Eq. 4. Note that LMMC jointly optimizes the model
parameters W and finds the best labeling assignment Y, while inferring the optimal latent variables.
3.3 Optimization
It is easy to verify that the optimization problem described in Eq. 4 is non-convex due to the
optimization over the labeling assignment variables $Y$ and the latent variables $H = \{h_{it}\}$ ($i \in
\{1, \dots, N\}$, $t \in \{1, \dots, K\}$). To solve it, we first eliminate the slack variables $\xi$, and rewrite Eq. 4
equivalently as:
$$\min_{W} \;\; \frac{1}{2}\sum_{t=1}^{K} \|w_t\|^2 + \frac{C}{K}\, R(W) \qquad (5)$$
where R(W) is the risk function defined by:
$$R(W) = \min_{Y} \; \sum_{i=1}^{N}\sum_{r=1}^{K} \max\Big(0, \; 1 - y_{ir} + f_{w_r}(x_i) - \sum_{t=1}^{K} y_{it}\, f_{w_t}(x_i)\Big) \qquad (6)$$
$$\text{s.t.} \;\; y_{it} \in \{0, 1\}, \;\forall i, t; \qquad \sum_{t=1}^{K} y_{it} = 1, \;\forall i; \qquad L \leq \sum_{i=1}^{N} y_{it} \leq U, \;\forall t.$$
Note that Eq. 5 minimizes over the model parameters W, and Eq. 6 minimizes over the labeling
assignment variables Y while inferring the latent variables H. We develop an alternating descent
algorithm to find an approximate solution. In each iteration, we first evaluate the risk function R(W)
given the current model parameters W, and then update W with the obtained risk value. Next we
describe each step in detail.
Risk evaluation: The first step of learning is to compute the risk function R(W) with the model
parameters W fixed. We first infer the latent variables H and then optimize the labeling assignment
Y. According to Eq. 3, the latent variable h_it of an instance x_i associated with cluster t can be obtained via argmax_{h_it} w_t^T Φ(x_i, h_it). Note that the inference problem is task-dependent. For our latent tag model, we present an efficient inference method in Section 4.
After obtaining the latent variables H, we optimize the labeling assignment Y from Eq. 6. Intuitively, this minimizes the total risk of labeling all instances while maintaining the cluster-balance constraints. We reformulate Eq. 6 as an integer linear programming (ILP) problem by introducing a variable ψ_it to capture the risk of assigning an instance x_i to a cluster t. The ILP can be written as:
$$R(\mathbf{W}) \;=\; \min_{\mathbf{Y}} \sum_{i=1}^{N}\sum_{t=1}^{K} \psi_{it}\, y_{it} \tag{7}$$
$$\text{s.t.}\quad y_{it}\in\{0,1\},\;\forall i,t; \qquad \sum_{t=1}^{K} y_{it}=1,\;\forall i; \qquad L \;\le\; \sum_{i=1}^{N} y_{it} \;\le\; U,\;\forall t$$
[Figure 1: two example videos, one from the cluster "board trick" and one from "feeding animal"; for each, the tag vocabulary T (board, car, dog, food, grass, man, snow, tree, ...) is shown together with its binary latent tag vector h marking which tags are present.]

Figure 1: Two videos represented by the latent tag model. Please refer to the text for details about T and h. Note that the cluster labels (i.e. "feeding animal", "board trick") are unknown beforehand. They are added for a better understanding of the video content and the latent tag representations.
where ψ_it = Σ_{r=1, r≠t}^{K} max(0, 1 + f_{w_r}(x_i) − f_{w_t}(x_i)). This captures the total "mis-clustering" penalties: suppose we regard t as the "ground truth" cluster label for an instance x_i; then ψ_it measures the sum of hinge losses over all incorrect predictions r (r ≠ t), which is consistent with the supervised multi-class SVM at a higher level [2]. Eq. 7 is a standard ILP problem with N × K variables and N + K constraints. We use the GNU Linear Programming Kit (GLPK) to obtain an approximate solution to this problem.
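A minimal sketch of solving Eq. 7 is below. This is an illustration only: it enumerates all labelings by brute force, which is exact but only feasible for tiny N and K, whereas the paper uses GLPK to handle realistic problem sizes. The function name and array layout are our assumptions, not the paper's code.

```python
import numpy as np
from itertools import product

def assign_clusters(psi, L, U):
    """Brute-force solver for the ILP in Eq. 7 (illustration only).
    psi: (N, K) risk matrix with psi[i, t] the cost of putting x_i in cluster t.
    Returns (cost, labels) minimizing sum_it psi_it * y_it subject to the
    cluster-balance constraint L <= |cluster t| <= U for every t."""
    N, K = psi.shape
    best_cost, best_labels = np.inf, None
    for labels in product(range(K), repeat=N):      # all K^N hard labelings
        sizes = np.bincount(labels, minlength=K)
        if np.any(sizes < L) or np.any(sizes > U):  # cluster-balance constraint
            continue
        cost = psi[np.arange(N), labels].sum()      # total assignment risk
        if cost < best_cost:
            best_cost, best_labels = cost, labels
    return best_cost, list(best_labels)
```

For realistic N, the same objective and constraints would instead be handed to an (I)LP solver such as GLPK, as in the paper.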
Updating W: The next step of learning is the optimization over the model parameters W (Eq. 5). The learning problem is non-convex, and we use the non-convex bundle optimization solver in [3]. In a nutshell, this method builds a piecewise quadratic approximation to the objective function of Eq. 5 by iteratively adding a linear cutting plane at the current optimum and updating the optimum. Now the key issue is to compute the subgradient ∂_{w_t} f_{w_t}(x_i) for a particular w_t. Let h*_{it} be the optimal solution to the inference problem h*_{it} = argmax_{h_{it}} w_t^T Φ(x_i, h_{it}). Then the subgradient can be calculated as ∂_{w_t} f_{w_t}(x_i) = Φ(x_i, h*_{it}). Using this subgradient, we optimize Eq. 5 with the algorithm in [3].
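For the latent tag model introduced in Section 4, where w^T Φ(x, h) = (1/|T|) Σ_t h_t θ_t^T φ_t(x), this subgradient has a simple closed form: the block of Φ(x, h*) corresponding to θ_t is h*_t φ_t(x)/|T|. A small numpy sketch (array shapes and names are illustrative assumptions):

```python
import numpy as np

def tag_model_subgradient(theta, phi):
    """Subgradient of f_w(x) = max_h w^T Phi(x, h) w.r.t. the templates theta
    for the latent tag model (Eq. 8).
    theta: (T, d) per-tag templates theta_t; phi: (T, d) per-tag features phi_t(x).
    Returns the (T, d) block matrix with rows h*_t * phi_t(x) / |T|."""
    scores = np.einsum("td,td->t", theta, phi)   # theta_t^T phi_t(x) per tag
    h_star = (scores > 0).astype(float)          # optimal latent tags
    return h_star[:, None] * phi / len(h_star)
```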
4 Tag-Based Video Clustering
In this section, we introduce an application of LMMC: tag-based video clustering. Our goal is
to jointly learn video clusters and tags in a single framework. We treat tags of a video as latent
variables and capture the correlations between clusters and tags. Intuitively, videos with a similar
set of tags should be assigned to the same cluster. We assume a separate training dataset consisting of
videos with ground-truth tag labels exists, from which we train tag detectors independently. During
clustering, we are given a set of new videos without the ground-truth tag labels, and our goal is to
assign cluster labels to these videos.
We employ a latent tag model to represent videos. We are particularly interested in tags which
describe different aspects of videos. For example, a video from the cluster "feeding animal" (see Figure 1) may be annotated with "dog", "food", "man", etc. Assume we collect all the tags in a
set T . For a video being assigned to a particular cluster, we know it could have a number of tags
from T describing its visual content related to the cluster. However, we do not know which tags are
present in the video. To address this problem, we associate latent variables to the video to denote
the presence and absence of tags.
Formally, given a cluster parameterized by w, we associate a latent variable h with a video x, where h = {h_t}_{t∈T} and h_t ∈ {0, 1} is a binary variable denoting the presence/absence of each tag t: h_t = 1 means x has the tag t, while h_t = 0 means x does not have the tag t. Figure 1 shows the latent tag representations of two sample videos. We score the video x according to the model in Eq. 3: f_w(x) = max_h w^T Φ(x, h), where the potential function w^T Φ(x, h) is defined as follows:

$$\mathbf{w}^\top \Phi(\mathbf{x}, \mathbf{h}) \;=\; \frac{1}{|\mathcal{T}|} \sum_{t\in\mathcal{T}} h_t \cdot \theta_t^\top \phi_t(\mathbf{x}) \tag{8}$$
This potential function measures the compatibility between the video x and each tag t associated with the current cluster. Note that w = {θ_t}_{t∈T} are the cluster-specific model parameters, and Φ = {h_t · φ_t(x)}_{t∈T} is the feature vector depending on the video x and its tags h. Here φ_t(x) ∈ R^d is the feature vector extracted from the video x, and the parameter θ_t is a template for tag t. In our current implementation, instead of keeping φ_t(x) as a high-dimensional vector of video features, we simply represent it as the scalar score of detecting tag t on x by a pre-trained binary tag detector. To learn biases between different clusters, we append a constant 1 to make φ_t(x) two-dimensional.
Now we describe how to infer the latent variable h* = argmax_h w^T Φ(x, h). As there is no dependency between tags, we can infer each latent variable separately. According to Eq. 8, the term corresponding to tag t is h_t · θ_t^T φ_t(x). Since h_t is binary, we set h_t to 1 if θ_t^T φ_t(x) > 0; otherwise, we set h_t to 0.
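Because the tags decouple in Eq. 8, this inference is per-tag thresholding followed by summing the retained scores. A minimal numpy sketch (shapes and names are illustrative assumptions, not the paper's code):

```python
import numpy as np

def infer_tags(theta, phi):
    """Infer h* = argmax_h w^T Phi(x, h) for the latent tag model of Eq. 8.
    theta: (T, d) per-tag templates theta_t; phi: (T, d) per-tag features phi_t(x).
    Each h_t is set independently: h_t = 1 iff theta_t^T phi_t(x) > 0.
    Returns (h*, f_w(x))."""
    scores = np.einsum("td,td->t", theta, phi)   # theta_t^T phi_t(x) per tag
    h = (scores > 0).astype(int)                 # per-tag thresholding
    f = (h * scores).sum() / len(h)              # resulting score f_w(x)
    return h, f
```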
5 Experiments
We evaluate the performance of our method on three standard video datasets: TRECVID MED
11 [19], KTH Actions [26] and UCF Sports [23]. We briefly describe our experimental setup before
reporting the experimental results in Section 5.1.
TRECVID MED 11 dataset [19]: This dataset contains web videos collected by the Linguistic
Data Consortium from various web video hosting sites. There are 15 complex event categories, including "board trick", "feeding animal", "landing fish", "wedding ceremony", "woodworking project", "birthday party", "changing tire", "flash mob", "getting vehicle unstuck", "grooming animal", "making sandwich", "parade", "parkour", "repairing appliance", and "sewing project".
TRECVID MED 11 has three data collections: Event-Kit, DEV-T and DEV-O. DEV-T and DEV-O
are dominated by videos of the null category, i.e. background videos that do not contain the events
of interest. Thus, we use the Event-Kit data collection in the experiments. By removing 13 short
videos that contain no visual content, we finally have a total of 2,379 videos for clustering.
We use tags that were generated by Vahdat and Mori [28] for the TRECVID MED 11 dataset. Specifically, this dataset includes "judgment files" that contain a short one-sentence description for each video. A sample description is: "A man and a little boy lie on the ground after the boy has fallen off his bike". This sentence provides us with information about the presence of objects such as "man", "boy", "ground" and "bike", which could be used as tags. In [28], text analysis tools are employed to extract binary tags based on frequent nouns in the judgment files. Examples of the 74 frequent tags used in this work are: "music", "person", "food", "kitchen", "bird", "bike", "car", "street", "boat", "water", etc. The complete list of tags is available on our website.
To train tag detectors, we use the DEV-T and DEV-O videos that belong to the 15 event categories.
There are 1,675 videos in total. We extract HOG3D descriptors [13] and form a 1,000-word codebook. Each video is then represented by a 1,000-dimensional feature vector. We train a linear SVM
for each tag, and predict the detection scores on the Event-Kit videos. To remove biases between tag
detectors, we normalize the detection scores by z-score normalization. Note that we make no use of
the ground-truth tags on the Event-Kit videos that are to be clustered.
KTH Actions dataset [26]: This dataset contains a total of 599 videos of 6 human actions: "walking", "jogging", "running", "boxing", "hand waving", and "hand clapping". Our experiments use all the videos for clustering.
We use Action Bank [24] to generate tags for this dataset. Action Bank has 205 template actions
with various action semantics and viewpoints. Randomly selected examples of template actions are: "hula1", "ski5", "clap3", "fence2", "violin6", etc. In our experiments, we treat the template
actions as tags. Specifically, on each video and for each template action, we use the set of Action
Bank action detection scores collected at different spatiotemporal scales and correlation volumes.
We perform max-pooling on the scores to obtain the corresponding tag detection score. Again, for
each tag, we normalize the detection scores by z-score normalization.
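The z-score normalization applied to all three datasets is standard; a minimal sketch, assuming the detection scores are stacked into a videos-by-tags matrix:

```python
import numpy as np

def zscore_normalize(scores):
    """Normalize each tag detector's scores to zero mean and unit variance,
    removing biases between detectors. scores: (n_videos, n_tags)."""
    mu = scores.mean(axis=0, keepdims=True)
    sd = scores.std(axis=0, keepdims=True)
    return (scores - mu) / np.where(sd > 0, sd, 1.0)  # guard constant columns
```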
UCF Sports dataset [23]: This dataset consists of 140 videos from 10 action classes: "diving", "golf swinging", "kicking", "lifting", "horse riding", "running", "skating", "swinging (on the pommel horse)", "swinging (at the high bar)", and "walking". We use all the videos for clustering. The tags
and tag detection scores are generated from Action Bank, in the same way as KTH Actions.
Baselines: To evaluate the efficacy of LMMC, we implement three conventional clustering methods
for comparison, including the k-means algorithm (KM), normalized cut (NC) [27], and spectral
clustering (SC) [18]. For NC, the implementation and parameter settings are the same as [27],
which uses a Gaussian similarity function with all the instances considered as neighbors. For SC, we use a 5-nearest neighborhood graph and set the width of the Gaussian similarity function as the average distance over all the 5-nearest neighbors. Note that these three methods do not use latent variable models. Therefore, for a fair comparison with LMMC, they are directly performed on the data where each video is represented by a vector of tag detection scores. We have also tried KM, NC and SC on the 1,000-dimensional HOG3D features; however, the performance is worse and is not reported here. Furthermore, to mitigate the effect of randomness, KM, NC and SC are run 10 times with different initial seeds, and the average results are recorded in the experiments.

             TRECVID MED 11             KTH Actions               UCF Sports
         PUR   NMI   RI    FM      PUR   NMI   RI    FM      PUR   NMI   RI    FM
LMMC     39.0  28.7  89.5  22.1    92.5  87.0  95.8  87.2    76.4  71.2  92.0  60.0
MMC      36.0  26.6  89.3  20.3    91.3  86.5  95.2  85.5    63.6  62.2  89.2  46.1
SC       28.6  23.6  87.1  20.3    61.0  60.8  75.6  58.2    69.9  70.8  90.6  58.1
KM       27.0  23.8  85.9  20.4    64.8  60.7  84.0  60.6    63.1  66.2  87.9  58.7
NC       12.9   5.7  31.6  12.7    48.0  33.9  72.9  35.1    60.7  55.8  83.4  41.8

Table 1: Clustering results (in %) on the three datasets. The boldfaced figures are the best performance among all the compared methods.
In order to show the benefits of incorporating latent variables, we further develop a baseline called MMC by replacing the latent variable model f_w(x) in Eq. 4 with a linear model w^T x. This is equivalent to running an ordinary maximum margin clustering algorithm on the video data represented by tag detection scores. For a fair comparison, we use the same solver for learning MMC and LMMC. The trade-off parameter C in Eq. 4 is selected as the best from the range {10^1, 10^2, 10^3}. The lower and upper bounds of the cluster-balance constraint (i.e. L and U in Eq. 4) are set as 0.9·N/K and 1.1·N/K respectively to enforce balanced clusters.
Performance measures: Following the convention of maximum margin clustering [32, 33, 29,
37, 38, 16, 6], we set the number of clusters to be the ground-truth number of classes for all the
compared methods. The clustering quality is evaluated by four standard measurements including
purity (PUR) [32], normalized mutual information (NMI) [15], Rand index (RI) [21] and balanced
F-measure (FM). They are employed to assess different aspects of a given clustering: PUR measures the accuracy of the dominating class in each cluster; NMI is from the information-theoretic
perspective and calculates the mutual dependence of the predicted clustering and the ground-truth
partitions; RI evaluates true positives within clusters and true negatives between clusters; and FM
considers both precision and recall. The higher the four measures, the better the performance.
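Of the four measures, purity and the Rand index have particularly simple definitions; the sketch below computes them directly (NMI and the F-measure follow the cited definitions [15, 21]). Function names are ours, not from a specific library:

```python
import numpy as np
from itertools import combinations

def purity(y_true, y_pred):
    """PUR: fraction of instances covered by the dominating class of each cluster."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    hits = sum(np.bincount(y_true[y_pred == c]).max() for c in np.unique(y_pred))
    return hits / len(y_true)

def rand_index(y_true, y_pred):
    """RI: fraction of instance pairs on which the two partitions agree
    (same/same or different/different)."""
    pairs = list(combinations(range(len(y_true)), 2))
    agree = sum((y_true[i] == y_true[j]) == (y_pred[i] == y_pred[j])
                for i, j in pairs)
    return agree / len(pairs)
```

Note that both measures are invariant to a permutation of the predicted cluster labels, as required when evaluating unsupervised clusterings against ground-truth classes.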
5.1 Results
The clustering results are listed in Table 1. It shows that LMMC consistently outperforms the MMC
baseline and conventional clustering methods on all three datasets. Specifically, by incorporating
latent variables, LMMC improves the MMC baseline by 3% on TRECVID MED 11, 1% on KTH
Actions, and 13% on UCF Sports respectively, in terms of PUR. This demonstrates that learning the
latent presence and absence of tags can exploit rich representations of videos, and boost clustering
performance. Moreover, LMMC performs better than the three conventional methods, SC, KM and
NC, showing the efficacy of the proposed LMMC framework for unsupervised data clustering.
Note that MMC runs on the same non-latent representation as the three conventional methods, SC,
KM and NC. However, MMC outperforms them on the two largest datasets, TRECVID MED 11
and KTH Actions, and is comparable with them on UCF Sports. This provides evidence for the
effectiveness of maximum margin clustering as well as the proposed alternating descent algorithm
for optimizing the non-convex objective.
Visualization: We select four clusters from TRECVID MED 11, and visualize the results in Figure 2. Please refer to the caption for more details.
[Figure 2: four sample clusters from TRECVID MED 11 ("woodworking project", "birthday party", "parade", "landing fish"), each visualized by its top-3 scored videos together with their top eight inferred tags, e.g. "piece, wood, machine, lady, indoors, man, kitchen, baby" for a woodworking-project video.]

Figure 2: Four sample clusters from TRECVID MED 11. We label each cluster by the dominating video class, e.g. "woodworking project", "parade", and visualize the top-3 scored videos. A "✓" sign indicates that the video label is consistent with the cluster label; otherwise, a "✗" sign is used. The two "mis-clustered" videos are on "parkour" (left) and "feeding animal" (right). Below each video, we show the top eight inferred tags sorted by the potential calculated from Eq. 8.

6 Conclusion

We have presented a latent maximum margin framework for unsupervised clustering. By representing instances with latent variables, our method features the ability to exploit the unobserved information embedded in data. We formulate our framework by large margin learning, and an alternating descent algorithm is developed to solve the resultant non-convex objective. We instantiate our
framework with tag-based video clustering, where each video is represented by a latent tag model
with latent presence and absence of video tags. Our experiments conducted on three standard video
datasets validate the efficacy of the proposed framework. We believe our solution is general enough
to be applied in other applications with latent representations, e.g. video clustering with latent key
segments, image clustering with latent region-of-interest, etc. It would also be interesting to extend
our framework to semi-supervised learning by assuming a training set of labeled instances.
Acknowledgments
This work was supported by a Google Research Award, NSERC, and the Intelligence Advanced
Research Projects Activity (IARPA) via Department of Interior National Business Center contract
number D11PC20069. The U.S. Government is authorized to reproduce and distribute reprints for
Governmental purposes notwithstanding any copyright annotation thereon. Disclaimer: The views
and conclusions contained herein are those of the authors and should not be interpreted as necessarily
representing the official policies or endorsements, either expressed or implied, of IARPA, DOI/NBC,
or the U.S. Government.
References
[1] S. Andrews, I. Tsochantaridis, and T. Hofmann. Support vector machines for multiple-instance learning.
In NIPS, 2002.
[2] K. Crammer and Y. Singer. On the algorithmic implementation of multiclass kernel-based vector machines. Journal of Machine Learning Research, 2:265–292, 2001.
[3] T. M. T. Do and T. Artières. Large margin training for hidden Markov models with partially observed states. In ICML, 2009.
[4] A. Farhadi and M. K. Tabrizi. Learning to recognize activities from the wrong view point. In ECCV,
2008.
[5] P. F. Felzenszwalb, D. A. McAllester, and D. Ramanan. A discriminatively trained, multiscale, deformable
part model. In CVPR, 2008.
[6] R. Gopalan and J. Sankaranarayanan. Max-margin clustering: Detecting margins from projections of
points on lines. In CVPR, 2011.
[7] J. A. Hartigan and M. A. Wong. A k-means clustering algorithm. Applied Statistics, 28:100–108, 1979.
[8] M. Hoai and A. Zisserman. Discriminative sub-categorization. In CVPR, 2013.
[9] C.-F. Hsu, J. Caverlee, and E. Khabiri. Hierarchical comments-based clustering. In SAC, 2011.
[10] H. Izadinia and M. Shah. Recognizing complex events using large margin joint low-level event model. In
ECCV, 2012.
[11] A. Jain and R. Dubes. Algorithms for Clustering Data. Prentice Hall, 1988.
[12] T. Joachims. Transductive inference for text classification using support vector machines. In ICML, 1999.
[13] A. Kläser, M. Marszalek, and C. Schmid. A spatio-temporal descriptor based on 3d-gradients. In BMVC, 2008.
[14] M. P. Kumar, B. Packer, and D. Koller. Self-paced learning for latent variable models. In NIPS, 2010.
[15] T. O. Kvalseth. Entropy and correlation: Some comments. IEEE Transactions on Systems, Man and Cybernetics, 17(3):517–519, 1987.
[16] Y.-F. Li, I. W. Tsang, J. T.-Y. Kwok, and Z.-H. Zhou. Tighter and convex maximum margin clustering. In
AISTATS, 2009.
[17] J. Liu, B. Kuipers, and S. Savarese. Recognizing human actions by attributes. In CVPR, 2011.
[18] A. Y. Ng, M. I. Jordan, and Y. Weiss. On spectral clustering: Analysis and an algorithm. In NIPS, 2001.
[19] P. Over, G. Awad, J. Fiscus, A. F. Smeaton, W. Kraaij, and G. Quenot. TRECVID 2011 – an overview of the goals, tasks, data, evaluation mechanisms and metrics. In TRECVID, 2011.
[20] G.-J. Qi, X.-S. Hua, Y. Rui, J. Tang, T. Mei, and H.-J. Zhang. Correlative multi-label video annotation.
In ACM Multimedia, 2007.
[21] W. M. Rand. Objective criteria for the evaluation of clustering methods. Journal of the American Statistical Association, 66(336):846–850, 1971.
[22] R. Redner and H. Walker. Mixture densities, maximum likelihood and the EM algorithm. SIAM Review, 26(2):195–239, 1984.
[23] M. D. Rodriguez, J. Ahmed, and M. Shah. Action MACH a spatio-temporal maximum average correlation
height filter for action recognition. In CVPR, 2008.
[24] S. Sadanand and J. J. Corso. Action Bank: A high-level representation of activity in video. In CVPR,
2012.
[25] F. Schroff, C. L. Zitnick, and S. Baker. Clustering videos by location. In BMVC, 2009.
[26] C. Schüldt, I. Laptev, and B. Caputo. Recognizing human actions: A local SVM approach. In ICPR, 2004.
[27] J. Shi and J. Malik. Normalized cuts and image segmentation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 22(8):888–905, 2000.
[28] A. Vahdat and G. Mori. Handling uncertain tags in visual recognition. In ICCV, 2013.
[29] H. Valizadegan and R. Jin. Generalized maximum margin clustering and unsupervised kernel learning. In
NIPS, 2006.
[30] Y. Wang and L. Cao. Discovering latent clusters from geotagged beach images. In MMM, 2013.
[31] Y. Wang and G. Mori. Max-margin hidden conditional random fields for human action recognition. In
CVPR, 2009.
[32] L. Xu, J. Neufeld, B. Larson, and D. Schuurmans. Maximum margin clustering. In NIPS, 2004.
[33] L. Xu and D. Schuurmans. Unsupervised and semi-supervised multi-class support vector machines. In
AAAI, 2005.
[34] W. Yang and G. Toderici. Discriminative tag learning on YouTube videos with latent sub-tags. In CVPR,
2011.
[35] W. Yang, Y. Wang, A. Vahdat, and G. Mori. Kernel latent SVM for visual recognition. In NIPS, 2012.
[36] C.-N. J. Yu and T. Joachims. Learning structural SVMs with latent variables. In ICML, 2009.
[37] K. Zhang, I. W. Tsang, and J. T. Kwok. Maximum margin clustering made practical. In ICML, 2007.
[38] B. Zhao, F. Wang, and C. Zhang. Efficient multiclass maximum margin clustering. In ICML, 2008.
9
Statistical analysis of coupled time series with Kernel
Cross-Spectral Density operators.
Michel Besserve
MPI for Intelligent Systems and MPI for Biological Cybernetics, Tübingen, Germany
[email protected]
Nikos K. Logothetis
MPI for Biological Cybernetics, Tübingen
[email protected]
Bernhard Schölkopf
MPI for Intelligent Systems, Tübingen
[email protected]
Abstract
Many applications require the analysis of complex interactions between time series. These interactions can be non-linear and involve vector-valued data as well as complex data structures such as graphs or strings. Here we provide a general
framework for the statistical analysis of these dependencies when random variables are sampled from stationary time-series of arbitrary objects. To achieve this
goal, we study the properties of the Kernel Cross-Spectral Density (KCSD) operator induced by positive definite kernels on arbitrary input domains. This framework enables us to develop an independence test between time series, as well as a
similarity measure to compare different types of coupling. The performance of our
test is compared to the HSIC test using i.i.d. assumptions, showing improvements
in terms of detection errors, as well as the suitability of this approach for testing
dependency in complex dynamical systems. This similarity measure enables us to
identify different types of interactions in electrophysiological neural time series.
1 Introduction
Complex dynamical systems can often be observed by monitoring time series of one or more variables. Finding and characterizing dependencies between several of these time series is key to understanding the underlying mechanisms of these systems. This problem can be addressed easily in linear systems [4]; however, non-linear systems are much more challenging.
statistics can provide helpful tools in specific contexts [15], and have been extensively used in system identification, causal inference and blind source separation (see for example [10, 13, 5]); it is
difficult to derive a general approach with solid theoretical results accounting for a broad range of
interactions. Especially, studying the relationships between time series of arbitrary objects such as
texts or graphs within a general framework is largely unaddressed.
On the other hand, the dependency between independent identically distributed (i.i.d.) samples of arbitrary objects can be studied elegantly in the framework of positive definite kernels [19]. This approach relies on defining cross-covariance operators between variables mapped implicitly to Reproducing Kernel Hilbert Spaces (RKHS) [7]. It has been shown that, when a characteristic kernel is used for the mapping [9], the properties of RKHS operators are related to statistical independence between input variables, which can then be tested in a principled way with the Hilbert-Schmidt Independence Criterion (HSIC) test [11]. However, the validity of this test relies heavily on the assumption that i.i.d. samples of random variables are used. This assumption is obviously violated in any non-trivial setting involving time series, and as a consequence using HSIC in this context can lead to incorrect conclusions. Zhang et al. established a framework in the context of Markov chains
[22], showing that a structured HSIC test still provides good asymptotic properties for absolutely
regular processes. However, this methodology has not been assessed extensively in empirical time
series. Moreover, beyond the detection of interactions, it is important to be able to characterize the
nature of the coupling between time series. It was recently suggested that generalizing the concept of cross-spectral density to Reproducing Kernel Hilbert Spaces (RKHS) could help formulate non-linear dependency measures for time series [2]. However, no statistical assessment of this measure has been established. In this paper, after recalling the concept of the kernel cross-spectral density operator,
we characterize its statistical properties. In particular, we define independence tests based on this
concept as well as a similarity measure to compare different types of couplings. We use these tests
in section 4 to compute the statistical dependencies between simulated time series of various types
of objects, as well as recordings of neural activity in the visual cortex of non-human primates. We
show that our technique reliably detects complex interactions and provides a characterization of
these interactions in the frequency domain.
2 Background and notations
Random variables in Reproducing Kernel Hilbert Spaces

Let $\mathcal{X}_1$ and $\mathcal{X}_2$ be two (possibly non-vectorial) input domains. Let $k_1(\cdot,\cdot) : \mathcal{X}_1 \times \mathcal{X}_1 \to \mathbb{C}$ and $k_2(\cdot,\cdot) : \mathcal{X}_2 \times \mathcal{X}_2 \to \mathbb{C}$ be two positive definite kernels, associated to two separable Hilbert spaces of functions, $\mathcal{H}_1$ and $\mathcal{H}_2$ respectively. For $i \in \{1, 2\}$, they define a canonical mapping from $x \in \mathcal{X}_i$ to $\overline{x} = k_i(\cdot, x) \in \mathcal{H}_i$, such that $\forall f \in \mathcal{H}_i$, $f(x) = \langle f, \overline{x} \rangle_{\mathcal{H}_i}$ (see [19] for more details). In the same way, this mapping can be extended to random variables, so that the random variable $X_i \in \mathcal{X}_i$ is mapped to the random element $\overline{X}_i \in \mathcal{H}_i$. Statistical objects extending the classical mean and covariance to random variables in the RKHS are defined as follows:
- the Mean Element (see [1, 3]): $\mu_i = \mathbb{E}[\overline{X}_i]$,
- the Cross-covariance operator (see [6]): $C_{ij} = \mathrm{Cov}[\overline{X}_i, \overline{X}_j] = \mathbb{E}[\overline{X}_i \otimes \overline{X}_j^*] - \mu_i \otimes \mu_j^*$,

where we use the tensor product notation $f \otimes g^*$ to represent the rank one operator defined by $f \otimes g^* = \langle g, \cdot \rangle f$ (following [3]). As a consequence, the cross-covariance can be seen as an operator in $\mathcal{L}(\mathcal{H}_j, \mathcal{H}_i)$, the Hilbert space of linear Hilbert-Schmidt operators from $\mathcal{H}_j$ to $\mathcal{H}_i$ (isomorphic to $\mathcal{H}_i \otimes \mathcal{H}_j^*$). Interestingly, the link between $C_{ij}$ and covariance in the input domains is given by the Hilbert-Schmidt scalar product
$$\langle C_{ij}, f_i \otimes f_j^* \rangle_{HS} = \mathrm{Cov}[f_i(X_i), f_j(X_j)], \quad \forall (f_i, f_j) \in \mathcal{H}_i \times \mathcal{H}_j.$$
Moreover, the Hilbert-Schmidt norm of the operator in this space has been proved to be a measure of independence between two random variables, whenever the kernels are characteristic [11]. An extension of this result has been provided in [22] for Markov chains. If the time series are assumed to be $k$-order Markovian, then the results for the classical HSIC can be generalized to a structured HSIC using universal kernels based on the state vectors $(x_1(t), \ldots, x_1(t+k), x_2(t), \ldots, x_2(t+k))$. The statistical performance of this methodology has not been studied extensively, in particular its sensitivity to the dimension of the state vector. The following sections propose an alternative methodology.
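For i.i.d. samples, the squared Hilbert-Schmidt norm of the empirical centered cross-covariance operator reduces to the well-known biased HSIC statistic of [11]. A minimal numerical sketch, assuming Gaussian RBF kernels on scalar samples and the usual $\mathrm{Tr}(\tilde{K}_1 \tilde{K}_2)/T^2$ normalization:

```python
import numpy as np

def hsic_biased(x1, x2, sigma=1.0):
    """Biased empirical HSIC statistic for T i.i.d. scalar samples:
    Tr(K1c K2c) / T^2, with Kc the doubly centered Gram matrix."""
    T = len(x1)

    def gram(x):
        d2 = (x[:, None] - x[None, :]) ** 2
        return np.exp(-d2 / (2.0 * sigma ** 2))  # Gaussian RBF Gram matrix

    H = np.eye(T) - np.ones((T, T)) / T          # centering matrix
    K1c = H @ gram(x1) @ H
    K2c = H @ gram(x2) @ H
    return np.trace(K1c @ K2c) / T ** 2
```

Under the i.i.d. assumption this statistic vanishes asymptotically for independent variables when the kernels are characteristic; for time series, however, its i.i.d. assumption is violated, which motivates the spectral construction below.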
Kernel Cross-Spectral Density operator

Consider a bivariate discrete time random process on $\mathcal{X}_1 \times \mathcal{X}_2$: $\{(X_1(t), X_2(t))\}_{t \in \mathbb{Z}}$. We assume stationarity of the process and thus use the following translation invariant notations for the mean elements and cross-covariance operators:
$$\mathbb{E}\,\overline{X}_i(t) = \mu_i, \qquad \mathrm{Cov}[\overline{X}_i(t+\tau), \overline{X}_j(t)] = C_{ij}(\tau).$$
The cross-spectral density operator was introduced for stationary signals in [2] based on second order cumulants. Under mild assumptions, it is a Hilbert-Schmidt operator defined for all normalized frequencies $\nu \in [0, 1]$ as:
$$S_{12}(\nu) = \sum_{k \in \mathbb{Z}} C_{12}(k) \exp(-2\pi i k \nu) = \sum_{k \in \mathbb{Z}} C_{12}(k)\, z^{-k}, \quad \text{for } z = e^{2\pi i \nu}.$$
This object summarizes all the cross-spectral properties between the families of processes $\{f(X_1)\}_{f \in \mathcal{H}_1}$ and $\{g(X_2)\}_{g \in \mathcal{H}_2}$, in the sense that the cross-spectrum between $f(X_1)$ and $g(X_2)$ is given by $S_{12}^{f,g}(\nu) = \langle f, S_{12}(\nu)\, g \rangle$. We therefore refer to this object as the Kernel Cross-Spectral Density operator (KCSD).
3 Statistical properties of KCSD

Measuring independence with the KCSD
One interesting characteristic of the KCSD is given by the following theorem [2]:

Theorem 1. Assume the kernels $k_1$ and $k_2$ are characteristic [9]. The processes $X_1$ and $X_2$ are pairwise independent (i.e. for all integers $t$ and $t'$, $X_1(t)$ and $X_2(t')$ are independent) if and only if $\|S_{12}(\nu)\|_{HS} = 0$ for all $\nu \in [0, 1]$.
While this theorem states that the KCSD can be used to test pairwise independence between time series, it does not in general imply independence between arbitrary sets of random variables taken from each time series. However, if the joint probability distribution of the time series is encoded by a Directed Acyclic Graph (DAG), the following proposition shows that independence in this broader sense is achieved under mild assumptions.

Proposition 2. If the joint probability distribution of the time series is encoded by a DAG with no confounder, under the Markov property and faithfulness assumptions, pairwise independence between the time series implies the mutual independence relationship $\{X_1(t)\}_{t \in \mathbb{Z}} \perp\!\!\!\perp \{X_2(t)\}_{t \in \mathbb{Z}}$.
Proof. The proof uses the fact that the faithfulness and Markov property assumptions provide an equivalence between the independence of two sets of random variables and the d-separation of the corresponding sets of nodes in the DAG (see [17]). We start by assuming pairwise independence between the time series.

For arbitrary times $t$ and $t'$, assume the DAG contains an arrow linking the nodes $X_1(t)$ and $X_2(t')$. This is an unblocked path linking these two nodes; thus they are not d-separated. As a consequence of faithfulness, $X_1(t)$ and $X_2(t')$ are not independent. Since this contradicts our initial assumption, there cannot exist any arrow between $X_1(t)$ and $X_2(t')$.

Since this holds for all $t$ and $t'$, there is no path linking the nodes of the two time series, and we have $\{X_1(t)\}_{t \in \mathbb{Z}} \perp\!\!\!\perp \{X_2(t)\}_{t \in \mathbb{Z}}$ according to the Markov property (any joint probability distribution on the nodes factorizes into two terms, one for each time series).
As a consequence, the use of KCSD to test for independence is justified under the widely used
faithfulness and Markov assumptions of graphical models. As a comparison, the structured HSIC
proposed in [22] is theoretically able to capture all dependencies within the range of k samples by
assuming k-order Markovian time series.
Fourth order kernel cumulant operator

Statistical properties of the KCSD require assumptions regarding the higher order statistics of the time series. Analogously to the covariance, higher order statistics can be generalized as operators in (tensor products of) RKHSs. An important example in our setting is the joint quadricumulant (4th order cumulant) (see [4]). We skip the general expression of this cumulant to focus on its simplified form for four centered scalar random variables:
$$\kappa(X_1, X_2, X_3, X_4) = \mathbb{E}[X_1 X_2 X_3 X_4] - \mathbb{E}[X_1 X_2]\mathbb{E}[X_3 X_4] - \mathbb{E}[X_1 X_3]\mathbb{E}[X_2 X_4] - \mathbb{E}[X_1 X_4]\mathbb{E}[X_2 X_3] \quad (1)$$
This object can be generalized to the case of random variables mapped into two RKHSs. The quadricumulant operator $K_{1234}$ is a linear operator in the Hilbert space $\mathcal{L}(\mathcal{H}_1 \otimes \mathcal{H}_2^*, \mathcal{H}_1 \otimes \mathcal{H}_2^*)$, such that $\kappa(f_1(X_1), f_2(X_2), f_3(X_3), f_4(X_4)) = \langle f_1 \otimes f_2^*, K_{1234}\, f_3 \otimes f_4^* \rangle$, for arbitrary elements $f_i$. The properties of this operator will be useful in the next sections due to the following lemma.
Lemma 3. [Property of the tensor quadricumulant] Let $\overline{X}^c_1, \overline{X}^c_3$ be centered random elements in the Hilbert space $\mathcal{H}_1$ and $\overline{X}^c_2, \overline{X}^c_4$ centered random elements in $\mathcal{H}_2$ (the centered random element is defined by $\overline{X}^c_i = \overline{X}_i - \mu_i$); then
$$\mathbb{E}\, \langle \overline{X}^c_1, \overline{X}^c_3 \rangle_{\mathcal{H}_1} \langle \overline{X}^c_2, \overline{X}^c_4 \rangle_{\mathcal{H}_2} = \mathrm{Tr}\, K_{1234} + \langle C^{1,2}, C^{3,4} \rangle + \mathrm{Tr}(C^{1,3})\, \mathrm{Tr}(C^{2,4}) + \langle C^{1,4}, C^{3,2} \rangle.$$
In the case of two jointly stationary time series, we define the translation invariant quadricumulant between the two stationary time series as:
$$K_{12}(\tau_1, \tau_2, \tau_3) = K_{1234}\big(X_1(t+\tau_1), X_2(t+\tau_2), X_1(t+\tau_3), X_2(t)\big).$$
Estimation with the Kernel Periodogram

In the following, we address the problem of estimating the properties of cross-spectral density operators from finite samples. The idea is to select samples from a time series with a tapering window function $w : \mathbb{R} \to \mathbb{R}$ whose support is included in $[0, 1]$. By scaling this window according to $w_T(k) = w(k/T)$, and multiplying it with the time series, $T$ samples of the sequence can be selected. The windowed periodogram estimate of the KCSD operator for $T$ successive samples of the time series is
$$\widehat{P}^T_{12}(\nu) = \frac{1}{T \|w\|^2}\, F_T[\overline{X}^c_1](\nu) \otimes F_T[\overline{X}^c_2](\nu)^*, \quad \text{with } \overline{X}^c_i(k) = \overline{X}_i(k) - \mu_i \text{ and } \|w\|^2 = \int_0^1 w^2(t)\, dt,$$
where $F_T[\overline{X}^c_1](\nu) = \sum_{k=1}^{T} w_T(k)\, \overline{X}^c_1(k)\, z^{-k}$, for $z = e^{2\pi i \nu}$, is the windowed Fourier transform of the delayed time series in the RKHS. Properties of the windowed Fourier transform are related to the regularity of the tapering window. In particular, we will choose a tapering window of bounded variation. In such a case, the following lemma holds (see supplementary material for the proof).
Lemma 4. [A property of bounded variation functions] Let $w$ be a bounded function of bounded variation. Then for all $k$,
$$\Big| \sum_{t=-\infty}^{+\infty} w_T(t+k)\, w_T(t) - \sum_{t=-\infty}^{+\infty} w_T(t)^2 \Big| \le C\, |k|.$$
Using this assumption, the above periodogram estimate is asymptotically unbiased, as shown in the following theorem.

Theorem 5. Let $w$ be a bounded function of bounded variation. If $\sum_{k \in \mathbb{Z}} |k|\, \|C_{12}(k)\|_{HS} < +\infty$, $\sum_{k \in \mathbb{Z}} |k|\, \mathrm{Tr}(C_{ii}(k)) < +\infty$ and $\sum_{(k,i,j) \in \mathbb{Z}^3} \mathrm{Tr}[K_{12}(k,i,j)] < +\infty$, then
$$\lim_{T \to +\infty} \mathbb{E}\, \widehat{P}^T_{12}(\nu) = S_{12}(\nu), \quad \nu \not\equiv 0 \pmod{1/2}.$$
Proof. By definition,
$$\widehat{P}^T_{12}(z) = \frac{1}{T\|w\|^2} \Big( \sum_{k \in \mathbb{Z}} w_T(k)\, \overline{X}^c_1(k)\, z^{-k} \Big) \otimes \Big( \sum_{n \in \mathbb{Z}} w_T(n)\, \overline{X}^c_2(n)\, z^{-n} \Big)^*$$
$$= \frac{1}{T\|w\|^2} \sum_{k \in \mathbb{Z}} \sum_{n \in \mathbb{Z}} z^{n-k}\, w_T(k)\, w_T(n)\, \overline{X}^c_1(k) \otimes \overline{X}^c_2(n)^*$$
$$= \frac{1}{T\|w\|^2} \sum_{\tau \in \mathbb{Z}} z^{-\tau} \sum_{n \in \mathbb{Z}} w_T(n+\tau)\, w_T(n)\, \overline{X}^c_1(n+\tau) \otimes \overline{X}^c_2(n)^*, \quad \text{using } \tau = k - n.$$
Thus, using Lemma 4,
$$\mathbb{E}\, \widehat{P}^T_{12}(z) = \frac{1}{T\|w\|^2} \sum_{\tau \in \mathbb{Z}} z^{-\tau} \Big( \sum_{n \in \mathbb{Z}} w_T(n)^2 + O(|\tau|) \Big)\, C_{12}(\tau)$$
$$= \frac{1}{\|w\|^2} \Big( \frac{\sum_{n \in \mathbb{Z}} w_T(n)^2}{T} \Big) \sum_{\tau \in \mathbb{Z}} z^{-\tau} C_{12}(\tau) + \frac{1}{T}\, O\Big( \sum_{\tau \in \mathbb{Z}} |\tau|\, \| C_{12}(\tau) \|_{HS} \Big) \xrightarrow[T \to +\infty]{} S_{12}.$$
However, the squared Hilbert-Schmidt norm of $\widehat{P}^T_{12}(\nu)$ is an asymptotically biased estimator of the population KCSD squared norm, according to the following theorem.

Theorem 6. Under the assumptions of Theorem 5, for $\nu \not\equiv 0 \pmod{1/2}$,
$$\lim_{T \to +\infty} \mathbb{E}\, \big\| \widehat{P}^T_{12}(\nu) \big\|^2_{HS} = \| S_{12}(\nu) \|^2_{HS} + \mathrm{Tr}(S_{11}(\nu))\, \mathrm{Tr}(S_{22}(\nu)).$$
The proof of Theorem 6 is based on the decomposition in Lemma 3 and is provided in the supplementary information.
This estimate requires specific bias estimation techniques to develop an independence test; we will call it the biased estimate of the KCSD squared norm. Having the KCSD defined in a Hilbert space also enables us to define a similarity between two KCSD operators, so that it is possible to compare quantitatively whether different dynamical systems have similar couplings. The following theorem shows how periodograms enable us to estimate the scalar product between two KCSD operators, which reflects their similarity.
Theorem 7. Assume the assumptions of Theorem 5 hold for two independent samples of bivariate time series $\{(X_1(t), X_2(t))\}_{t \in \mathbb{Z}}$ and $\{(X_3(t), X_4(t))\}_{t \in \mathbb{Z}}$, mapped with the same couple of reproducing kernels. Then
$$\lim_{T \to +\infty} \mathbb{E}\, \big\langle \widehat{P}^T_{12}(\nu), \widehat{P}^T_{34}(\nu) \big\rangle_{HS} = \langle S_{12}(\nu), S_{34}(\nu) \rangle_{HS}, \quad \nu \not\equiv 0 \pmod{1/2}.$$
The proof of Theorem 7 is similar to that of Theorem 6, provided as supplementary information.
Interestingly, this estimate of the scalar product between KCSD operators is unbiased. This comes
from the assumption that the two bivariate series are independent. This provides a new opportunity
to estimate the Hilbert-Schmidt norm as well, in case two independent samples of the same bivariate
series are available.
Corollary 8. Assume the assumptions of Theorem 5 hold for the bivariate time series $\{(X_1(t), X_2(t))\}_{t \in \mathbb{Z}}$, and let $\{(\tilde{X}_1(t), \tilde{X}_2(t))\}_{t \in \mathbb{Z}}$ be an independent copy of the same time series, providing the periodogram estimates $\widehat{P}^T_{12}(\nu)$ and $\widetilde{P}^T_{12}(\nu)$, respectively. Then
$$\lim_{T \to +\infty} \mathbb{E}\, \big\langle \widehat{P}^T_{12}(\nu), \widetilde{P}^T_{12}(\nu) \big\rangle_{HS} = \| S_{12}(\nu) \|^2_{HS}, \quad \nu \not\equiv 0 \pmod{1/2}.$$
In many experimental settings, such as in neuroscience, it is possible to measure the same time series in several independent trials. In such a case, Corollary 8 states that estimating the Hilbert-Schmidt norm of the KCSD without bias is possible using two independent trials. We will call this estimate the unbiased estimate of the KCSD squared norm.

These estimates can be computed efficiently for $T$ equispaced frequency samples using the fast Fourier transform of the centered kernel matrices of the two time series. In general, the choice of the kernel is a trade-off between the capacity to capture complex dependencies (a characteristic kernel being better in this respect) and the convergence rate of the estimate (simpler kernels related to lower order statistics usually require fewer samples). Related theoretical analysis can be found in [8, 12]. Unless otherwise stated, the Gaussian RBF kernel with bandwidth parameter $\sigma$, $k(x, y) = \exp(-\|x - y\|^2 / 2\sigma^2)$, will be used as a characteristic kernel for vector spaces. Let $K_{ij}$ denote the kernel matrix between the $i$-th and $j$-th time series (such that $(K_{ij})_{k,l} = k(x_i(k), x_j(l))$), $W$ the windowing matrix (such that $(W)_{k,l} = w_T(k)\, w_T(l)$) and $M$ the centering matrix $M = I - \mathbf{1}_T \mathbf{1}_T^T / T$; then we can define the windowed centered kernel matrices $\tilde{K}_{ij} = (M K_{ij} M) \circ W$. Defining the Discrete Fourier Transform matrix $F$, such that $(F)_{k,l} = \exp(-i 2\pi k l / T)/\sqrt{T}$, the estimated scalar product is
$$\big\langle \widehat{P}^T_{12}, \widehat{P}^T_{34} \big\rangle_{\nu = (0, 1, \ldots, T-1)/T} = \|w\|^{-4}\, \mathrm{diag}\big( F \tilde{K}_{13} F^{-1} \big) \circ \mathrm{diag}\big( F^{-1} \tilde{K}_{24} F \big),$$
which can be efficiently computed using the Fast Fourier Transform ($\circ$ is the Hadamard product). The biased and unbiased squared norm estimates can be trivially retrieved from the above expression.
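As a concrete illustration, the FFT-based formula can be sketched in a few lines of NumPy. A rectangular tapering window and scalar RBF kernels are assumed for simplicity, and the $\|w\|^{-4}$ factor is approximated by $(\sum_k w_T(k)^2 / T)^{-2}$; this is a sketch, not the authors' implementation:

```python
import numpy as np

def rbf_gram(x, y, sigma=1.0):
    # Gaussian RBF kernel matrix between two scalar time series.
    return np.exp(-(x[:, None] - y[None, :]) ** 2 / (2.0 * sigma ** 2))

def kcsd_scalar_product(x1, x2, x3, x4, sigma=1.0):
    """<P12(nu), P34(nu)> at the T Fourier frequencies nu = (0, ..., T-1)/T,
    computed as ||w||^-4 diag(F K13~ F^-1) o diag(F^-1 K24~ F)."""
    T = len(x1)
    M = np.eye(T) - np.ones((T, T)) / T            # centering matrix
    w = np.ones(T)                                 # rectangular tapering window
    W = np.outer(w, w)                             # windowing matrix
    K13 = (M @ rbf_gram(x1, x3, sigma) @ M) * W    # windowed centered kernels
    K24 = (M @ rbf_gram(x2, x4, sigma) @ M) * W
    # diag(F K F^-1) via FFT over rows then inverse FFT over columns,
    # and the conjugate pattern for diag(F^-1 K F)
    d13 = np.diag(np.fft.ifft(np.fft.fft(K13, axis=0), axis=1))
    d24 = np.diag(np.fft.fft(np.fft.ifft(K24, axis=0), axis=1))
    w2 = np.sum(w ** 2) / T                        # approximates ||w||^2
    return d13 * d24 / w2 ** 2
```

With $(x_3, x_4) = (x_1, x_2)$ this yields the biased squared-norm estimate of Theorem 6; evaluated against an independent copy of the pair, it yields the unbiased estimate of Corollary 8.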
Shuffling independence tests

According to Theorem 1, pairwise independence between time series requires the cross-spectral density operator to be zero for all frequencies. We can thus test independence by testing whether the Hilbert-Schmidt norm of the operator vanishes for each frequency. We rely on Theorem 6 and Corollary 8 to compute biased and unbiased estimates of this norm. To achieve this, we generate a distribution of the Hilbert-Schmidt norm statistic under the null hypothesis by cutting the time interval into non-overlapping blocks and matching the blocks of each time series in pairs at random. By the central limit theorem, for a sufficiently large number of time windows, the empirical average of the statistic approaches a Gaussian distribution. We thus test whether the empirical mean differs from the one under the null distribution using a t-statistic. To prevent false positives resulting from multiple hypothesis testing, we control the Family-wise Error Rate (FWER) of the tests performed for each frequency. Following [16], we estimate a global maximum distribution on the family of t-statistics across frequencies under the null hypothesis, and use the percentile of this distribution to assess the significance of the original t-statistics.
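The block-shuffling construction of the null distribution can be sketched as follows; `stat_fn` stands for any dependence statistic (e.g. the Hilbert-Schmidt norm estimate at one frequency), and the block length and number of shuffles are illustrative choices, not values from the paper:

```python
import numpy as np

def shuffle_null(x1, x2, stat_fn, block_len, n_shuffles=200, seed=0):
    """Null distribution of stat_fn(x1, x2) obtained by cutting both series
    into non-overlapping blocks and re-pairing the blocks of x2 at random."""
    rng = np.random.default_rng(seed)
    n_blocks = len(x1) // block_len
    b1 = np.asarray(x1[: n_blocks * block_len]).reshape(n_blocks, block_len)
    b2 = np.asarray(x2[: n_blocks * block_len]).reshape(n_blocks, block_len)
    null = np.empty(n_shuffles)
    for s in range(n_shuffles):
        perm = rng.permutation(n_blocks)     # random re-pairing of blocks
        null[s] = stat_fn(b1.ravel(), b2[perm].ravel())
    return null
```

A t-statistic then compares the observed statistic with the mean and standard deviation of `null`; the max-statistic correction of [16] can be applied on top, across frequencies.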
Figure 1: Results for the phase-amplitude coupling system. Top-left: example time course. Top-middle: estimate of the KCSD squared norm with a linear kernel. Top-right: estimate of the KCSD squared norm with an RBF kernel. Bottom-left: performance of the biased kcsd test as a function of the number of samples. Bottom-middle: performance of the unbiased kcsd test as a function of the number of samples. Bottom-right: rate of type I and type II errors for several independence tests.
4 Experiments
In the following, we validate the performance of our test, called kcsd, on several datasets, in both the biased and unbiased cases. There is no general time series analysis tool in the literature to compare with our approach on all these datasets, so our main source of comparison will be the HSIC test of independence (assuming data is i.i.d.). This enables us to compare both approaches using the same kernels. For vector data, one can compare the performance of our approach with a linear dependency measure: we do this by implementing our test using a linear kernel (instead of an RBF kernel), and we call it linear kcsd. Finally, we use the alternative approach of structured HSIC [22] by cutting the time series into time windows (using the same approach as in our independence test) and considering each of them as a single multivariate sample. This will be called block hsic. The bandwidth of the HSIC methods is chosen proportional to the median norm of the sample points in the vector space. The p-value for all independence tests will be set to 5%.
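The bandwidth rule used for the HSIC baselines can be sketched as follows; the proportionality constant (`scale` below) is not specified in the paper and is purely illustrative:

```python
import numpy as np

def median_norm_bandwidth(samples, scale=1.0):
    """Bandwidth proportional to the median norm of the sample points.
    `samples` is (T,) for scalar series or (T, d) for vector series."""
    samples = np.asarray(samples, dtype=float)
    if samples.ndim == 1:
        samples = samples[:, None]
    norms = np.linalg.norm(samples, axis=1)   # norm of each sample point
    return scale * np.median(norms)
```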
Phase amplitude coupling
We first simulate a non-linear dependency between two time series by generating two oscillations at
frequencies f1 and f2 , and introducing a modulation of the amplitude of the second oscillation by
the phase of the first one. This is achieved using the following discrete time equations:
?1 (k + 1) = ?1 (k) + .11 (k) + 2?f1 Ts
x1 (k) =
cos(?1 (k))
?2 (k + 1) = ?2 (k) + .12 (k) + 2?f2 Ts
x2 (k) = (2 + C sin ?1 (k)) cos(?2 (k))
Where the i are i.i.d normal. A simulation with f1 = 4Hz and f2 = 20Hz for a sampling frequency 1/Ts =100Hz is plotted on Figure 1 (top-left panel). For the parameters of the periodogram,
we used a window length of 50 samples (.5 s). We used a Gaussian RBF kernel to compute non-linear dependencies between the two time series after standardizing each of them (dividing them by their standard deviation). The top-middle and top-right panels of Figure 1 plot the mean and standard errors of the estimate of the squared Hilbert-Schmidt norm for this system (for $C = .1$) for a linear and a Gaussian RBF kernel (with $\sigma = 1$), respectively. The bias of the first estimate appears clearly in both cases at the two power peaks of the signals. In the second (unbiased) estimate, the spectrum exhibits a zero mean for all but one peak (at 4 Hz for the RBF kernel), which corresponds to the expected frequency of non-linear interaction between the time series. The observed negative values are also a direct consequence of the unbiased property of our estimate (Corollary 8). The influence of the bandwidth parameter of the kernel was studied in the case of weakly coupled time series ($C = .4$). The bottom-left and bottom-middle panels of Figure 1 show
Figure 2: Markov chain dynamical system. Upper left: Markov transition probabilities, fluctuating between the values indicated in both graphs. Upper right: example of simulated time series. Bottom left: the biased and unbiased KCSD norm estimates in the frequency domain. Bottom right: type I and type II errors for the hsic and kcsd tests.
the influence of this parameter on the number of samples required to actually reject the null hypothesis and detect the dependency, for the biased and unbiased estimates respectively. It was observed that choosing a hyper-parameter close to the standard deviation of the signal (here 1.5) was an optimal strategy, and that the test relying on the unbiased estimate outperformed the biased estimate. We thus used the unbiased estimate in our subsequent analysis. The coupling parameter $C$ was further varied to test the performance of the independence tests both when the null hypothesis of independence is true ($C = 0$) and when it should be rejected ($C = .4$ for weak coupling, $C = 2$ for strong coupling). These two settings enable us to quantify the type I and type II errors of the tests, respectively. The bottom-right panel of Figure 1 reports these errors for several independence tests, showing the superiority of our method, especially for type II errors. In particular, methods based on HSIC fail to detect weak dependencies in the time series.
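The phase-amplitude generative equations above can be simulated in a few lines; a sketch with the parameter values used in this experiment ($f_1 = 4$ Hz, $f_2 = 20$ Hz, $1/T_s = 100$ Hz):

```python
import numpy as np

def simulate_pac(T, C, f1=4.0, f2=20.0, Ts=0.01, seed=0):
    """Phase-amplitude coupled pair: the phase of x1 modulates the
    amplitude of x2 with coupling strength C."""
    rng = np.random.default_rng(seed)
    # noisy phase increments: 0.1 * eps(k) + 2*pi*f*Ts
    th1 = np.cumsum(0.1 * rng.standard_normal(T) + 2 * np.pi * f1 * Ts)
    th2 = np.cumsum(0.1 * rng.standard_normal(T) + 2 * np.pi * f2 * Ts)
    x1 = np.cos(th1)
    x2 = (2 + C * np.sin(th1)) * np.cos(th2)
    return x1, x2
```

Setting `C=0` produces the independent pair used to measure type I error.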
Time varying Markov chain

We now illustrate the use of our test in a hybrid setting. We generate a symbolic time series $x_2$ over the alphabet $S = [1, 2, 3]$, controlled by a scalar time series $x_1$. The coupling is achieved by modulating across time the transition probabilities of the Markov transition matrix generating the symbolic time series $x_2$, using the current value of the scalar time series $x_1$. This model is described by the following equations, with $f_1 = 1$ Hz.
$$\theta_1(k+1) = \theta_1(k) + 0.1\, \epsilon_1(k) + 2\pi f_1 T_s, \qquad x_1(k+1) = \sin(\theta_1(k+1)),$$
$$p(x_2(k+1) = S_i \mid x_2(k) = S_j) = M_{ij} + \Delta M_{ij}\, x_1(k).$$
Since $x_1$ is bounded between -1 and 1, the Markov transition matrix fluctuates across time between the two models represented in Figure 2 (top-left panel). A model without these fluctuations ($\Delta M = 0$) was simulated as well to measure the type I error. The time course of such a hybrid system is illustrated in the top-right panel of the same figure. In order to measure the dependency between these two time series, we use a k-spectrum kernel [14] for $x_2$ and an RBF kernel for $x_1$. For the k-spectrum kernel, we use $k = 2$ (using $k = 1$, i.e. counting occurrences of single symbols, was less efficient) and we computed the kernel between words of 3 successive symbols of the time series. We used an RBF kernel with $\sigma = 1$, decimated the signals by a factor 2, and cut the signals into time windows of 100 samples. The biased and unbiased estimates of the KCSD norm are represented at the bottom-left of Figure 2 and show a clear peak at the modulating frequency (1 Hz). The independence test results shown at the bottom-right of Figure 2 again illustrate the superiority of KCSD for type II error, whereas type I error stays in an acceptable range.
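The hybrid system above can be simulated directly. In the sketch below, `M` and `dM` are placeholders for the transition matrices of Figure 2 (not values taken from the paper), and the probability vector is clipped and renormalized defensively to remain a valid distribution:

```python
import numpy as np

def simulate_hybrid(T, M, dM, f1=1.0, Ts=0.01, seed=0):
    """3-state chain whose transition probabilities
    p(x2(k+1)=S_i | x2(k)=S_j) = M[i, j] + dM[i, j] * x1(k)
    are modulated by a noisy oscillation x1 at frequency f1."""
    rng = np.random.default_rng(seed)
    theta = 0.0
    x1 = np.zeros(T)
    x2 = np.zeros(T, dtype=int)
    for k in range(T - 1):
        theta += 0.1 * rng.standard_normal() + 2 * np.pi * f1 * Ts
        x1[k + 1] = np.sin(theta)
        p = M[:, x2[k]] + dM[:, x2[k]] * x1[k]
        p = np.clip(p, 0.0, None)
        p /= p.sum()                       # defensive renormalization
        x2[k + 1] = rng.choice(len(p), p=p)
    return x1, x2
```

With `dM = 0` this reproduces the unmodulated chain used to measure type I error.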
Figure 3: Left: experimental setup of LFP recordings in an anesthetized monkey during visual stimulation with a movie. Right: proportion of detected dependencies for the unbiased kcsd test of interactions between gamma band and wide band LFP, for different kernels.
Neural data: local field potentials from monkey visual cortex
We analyzed dependencies between local field potential (LFP) time series recorded in the primary
visual cortex of one anesthetized monkey during visual stimulation by a commercial movie (see
Figure 3 for a scheme of the experiment). LFP activity reflects the non-linear interplay between a
large variety of underlying mechanisms. Here we investigate this interplay by extracting LFP activity
in two frequency bands within the same electrode and quantify the non-linear interactions between
them with our approach. LFPs were filtered into two frequency bands: 1/ a wide band ranging from
1 to 100Hz which contains a rich variety of rhythms and 2/ a high gamma band ranging from 60 to
100Hz which as been shown to play a role in the processing of visual information.
Both of these time series were sampled at 1000Hz. Using non-overlapping time windows of 1s
points, we computed the Hilbert-Schmidt norm of the KCSD operator between gamma and large
band time series originating from the same electrode. We performed statistical testing for all frequencies between 1 and 500Hz (using a Fourier transform on 2048 points). The results of the test
averaged over all recording sites is plotted on Figure 3. We observe a highly reliable detection of
interactions in the gamma band, using either a linear or a non-linear kernel. This is due to the fact that the gamma band LFP is a filtered version of the wide band LFP, making these signals highly correlated in the gamma band. However, in addition to this obvious linear dependency, we observe significant interactions in the lowest frequencies (0.5-2 Hz) which cannot be explained by linear interaction (and are thus not detected by the linear kernel). This characteristic illustrates the non-linear interaction between the high frequency gamma rhythm and other lower frequencies of the brain electrical activity, which has been reported in other studies [21]. This also shows the interpretability of our approach as a test of non-linear dependency in the frequency domain.
5 Conclusion
An independence test for time series based on the concept of Kernel Cross-Spectral Density estimation was introduced in this paper. It generalizes the linear approach based on the Fourier transform in several respects. First, it allows quantification of non-linear interactions for time series living in vector spaces. Moreover, it can measure dependencies between more complex objects, including sequences over an arbitrary alphabet, or graphs, as long as an appropriate positive definite kernel can be defined on the space of each time series. This paper provides asymptotic properties of the KCSD estimates, as well as an efficient approach to compute them on real data. The space of KCSD operators constitutes a very general framework for analyzing dependencies in multivariate and highly structured dynamical systems. Following [13, 18], our independence test can further be combined with recent developments in kernel time series prediction techniques [20] to define general and reliable multivariate causal inference techniques.
Acknowledgments. MB is grateful to Dominik Janzing for fruitful discussions and advice.
References
[1] A. Berlinet and C. Thomas-Agnan. Reproducing kernel Hilbert spaces in probability and statistics. Kluwer Academic, Boston, 2004.
[2] M. Besserve, D. Janzing, N. Logothetis, and B. Schölkopf. Finding dependencies between frequencies with the kernel cross-spectral density. In IEEE International Conference on Acoustics, Speech and Signal Processing, pages 2080–2083, 2011.
[3] G. Blanchard, O. Bousquet, and L. Zwald. Statistical properties of kernel principal component analysis. Machine Learning, 66(2-3):259–294, 2007.
[4] D. Brillinger. Time series: data analysis and theory. Holt, Rinehart, and Winston, New York, 1974.
[5] J.-F. Cardoso. High-order contrasts for independent component analysis. Neural Computation, 11(1):157–192, 1999.
[6] K. Fukumizu, F. Bach, and A. Gretton. Statistical convergence of kernel CCA. In Advances in Neural Information Processing Systems 18, pages 387–394, 2006.
[7] K. Fukumizu, F. Bach, and M. Jordan. Dimensionality reduction for supervised learning with reproducing kernel Hilbert spaces. J. Mach. Learn. Res., 5:73–99, 2004.
[8] K. Fukumizu, A. Gretton, G. R. Lanckriet, B. Schölkopf, and B. K. Sriperumbudur. Kernel choice and classifiability for RKHS embeddings of probability distributions. In Advances in Neural Information Processing Systems 21, pages 1750–1758, 2009.
[9] K. Fukumizu, A. Gretton, X. Sun, and B. Schölkopf. Kernel measures of conditional dependence. In Advances in Neural Information Processing Systems 20, pages 489–496, 2008.
[10] G. B. Giannakis and J. M. Mendel. Identification of nonminimum phase systems using higher order statistics. Acoustics, Speech and Signal Processing, IEEE Transactions on, 37(3):360–377, 1989.
[11] A. Gretton, K. Fukumizu, C. Teo, L. Song, B. Schölkopf, and A. Smola. A kernel statistical test of independence. In Advances in Neural Information Processing Systems 20, pages 585–592, 2008.
[12] A. Gretton, D. Sejdinovic, H. Strathmann, S. Balakrishnan, M. Pontil, K. Fukumizu, and B. K. Sriperumbudur. Optimal kernel choice for large-scale two-sample tests. In Advances in Neural Information Processing Systems 25, pages 1214–1222, 2012.
[13] A. Hyvärinen, S. Shimizu, and P. O. Hoyer. Causal modelling combining instantaneous and lagged effects: an identifiable model based on non-gaussianity. In Proceedings of the 25th International Conference on Machine Learning, pages 424–431. ACM, 2008.
[14] C. Leslie, E. Eskin, and W. Noble. The spectrum kernel: a string kernel for SVM protein classification. In Pac. Symp. Biocomput., 2002.
[15] C. Nikias and A. Petropulu. Higher-Order Spectra Analysis – A Non-linear Signal Processing Framework. Prentice-Hall PTR, Englewood Cliffs, NJ, 1993.
[16] D. Pantazis, T. Nichols, S. Baillet, and R. Leahy. A comparison of random field theory and permutation methods for the statistical analysis of MEG data. NeuroImage, 25:383–394, 2005.
[17] J. Pearl. Causality – Models, Reasoning, and Inference. Cambridge University Press, Cambridge, UK, 2000.
[18] J. Peters, D. Janzing, and B. Schölkopf. Causal inference on time series using structural equation models. In Advances in Neural Information Processing Systems 26, 2013.
[19] B. Schölkopf and A. J. Smola. Learning with Kernels. MIT Press, Cambridge, MA, 2002.
[20] V. Sindhwani, H. Q. Minh, and A. C. Lozano. Scalable matrix-valued kernel learning for high-dimensional nonlinear multivariate regression and Granger causality. In Proceedings of the 29th Conference on Uncertainty in Artificial Intelligence, 2013.
[21] K. Whittingstall and N. K. Logothetis. Frequency-band coupling in surface EEG reflects spiking activity in monkey visual cortex. Neuron, 64:281–289, 2009.
[22] X. Zhang, L. Song, A. Gretton, and A. Smola. Kernel measures of independence for non-IID data. In Advances in Neural Information Processing Systems 21, pages 1937–1944, 2009.
varied:1 reproducing:5 arbitrary:8 introduced:2 pair:1 required:1 kl:1 faithfulness:4 acoustic:2 kcsd:36 established:2 pearl:1 address:1 beyond:1 able:2 suggested:1 dynamical:5 usually:1 agnan:1 reliable:2 interpretability:1 including:1 power:1 rely:1 hybrid:2 quantification:1 scheme:1 movie:2 imply:1 coupled:2 text:1 literature:1 asymptotic:2 permutation:1 interesting:1 proportional:1 acyclic:1 h2:6 translation:2 course:2 copy:1 bias:3 allow:1 understand:1 xc1:5 wide:3 characterizing:1 anesthetized:2 k12:2 distributed:1 dimension:1 transition:5 rich:1 simplified:1 transaction:1 sj:1 implicitly:1 bernhard:1 cutting:2 global:1 assumed:1 xi:11 factorize:1 spectrum:6 nature:1 learn:1 eeg:1 complex:7 domain:6 elegantly:1 diag:2 significance:1 main:1 arrow:2 x1:36 site:1 advice:1 causality:2 neuroimage:1 periodogram:5 dominik:1 theorem:19 specific:2 showing:3 pac:1 symbol:2 svm:1 bivariate:5 false:1 illustrates:1 kx:1 boston:1 shimizu:1 generalizing:1 visual:7 scalar:7 sindhwani:1 mij:2 corresponds:1 khs:1 relies:2 acm:1 ma:1 conditional:1 goal:1 rbf:10 included:1 wt:14 lemma:6 principal:1 called:2 isomorphic:1 experimental:2 select:1 support:1 assessed:1 cumulant:3 violated:1 absolutely:1 correlated:1 |
A Topographic Product for the Optimization
of Self-Organizing Feature Maps
Hans-Ulrich Bauer, Klaus Pawelzik, Theo Geisel
Institut fUr theoretische Physik and SFB Nichtlineare Dynamik
Universitat Frankfurt
Robert-Mayer-Str. 8-10
W -6000 Frankfurt 11
Fed. Rep . of Germany
email: [email protected]
Abstract
Optimizing the performance of self-organizing feature maps like the Kohonen map involves the choice of the output space topology. We present
a topographic product which measures the preservation of neighborhood
relations as a criterion to optimize the output space topology of the map
with regard to the global dimensionality DA as well as to the dimensions in the individual directions. We test the topographic product method
not only on synthetic mapping examples, but also on speech data. In the
latter application our method suggests an output space dimensionality of
DA = 3, in coincidence with recent recognition results on the same data
set.
1 INTRODUCTION
Self-organizing feature maps like the Kohonen map (Kohonen, 1989, Ritter et al.,
1990) not only provide a plausible explanation for the formation of maps in brains,
e.g. in the visual system (Obermayer et al., 1990), but have also been applied to
problems like vector quantization, or robot arm control (Martinetz et al., 1990).
The underlying organizing principle is the preservation of neighborhood relations.
For this principle to lead to a most useful map, the topological structure of the
output space must roughly fit the structure of the input data. However, in technical
applications this structure is often not a priori known. For this reason several
attempts have been made to modify the Kohonen-algorithm such that not only
the weights, but also the output space topology itself is adapted during learning
(Kangas et al., 1990, Martinetz et al., 1991).
Our contribution is also concerned with optimal output space topologies, but we
follow a different approach, which avoids a possibly complicated structure of the
output space. First we describe a quantitative measure for the preservation of neighborhood relations in maps, the topographic product P. The topographic product
had been invented under the name of "wavering product" in nonlinear dynamics in
order to optimize the embeddings of chaotic attractors (Liebert et al., 1991). P = 0
indicates perfect match of the topologies. P < 0 (P > 0) indicates a folding of
the output space into the input space (or vice versa), which can be caused by a
too small (resp. too large) output space dimensionality. The topographic product
can be computed for any self-organizing feature map, without regard to its specific
learning rule. Since judging the degree of twisting and folding by visually inspecting a plot of the map is the only other way of "measuring" the preservation of
neighborhoods, the topographic product is particularly helpful, if the input space
dimensionality of the map exceeds DA = 3 and the map can no more be visualized.
Therefore the derivation of the topographic product is already of value by itself.
In the second part of the paper we demonstrate the use of the topographic product
by two examples. The first example deals with maps from a 2D input space with
nonflat stimulus distribution onto rectangles of different aspect ratios, the second
example with the map of 19D speech data onto output spaces of different dimensionality. In both cases we show, how the output space topology can be optimized
using our method.
2 DERIVATION OF THE TOPOGRAPHIC PRODUCT
2.1 KOHONEN-ALGORITHM
In order to introduce the notation necessary to derive the topographic product, we
very briefly recall the Kohonen algorithm. It describes a map from an input space
V into an output space A. Each node j in A has a weight vector Wj associated with
i.t, which points into V. A stimulus v is mapped onto that node i in the output
space, which minimizes the input space distance dV (Wi, v):
i : dV(w_i, v) = min_j dV(w_j, v)    (1)
During a learning step, a random stimulus is chosen in the input space and mapped
onto an output node i according to Eq. 1. Then all weights Wj are shifted towards v,
with the amount of shift for each weight vector being determined by a neighborhood
function h_{j,i}:
w_j^new = w_j^old + ε · h_{j,i} · (v − w_j^old)    (2)
(dA(j, i) measures distances in the output space.) h_{j,i} effectively restricts the nodes
participating in the learning step to nodes in the vicinity of i. A typical choice for
the neighborhood function is
h_{j,i} = exp( − dA(j, i)² / (2σ²) )    (3)
In this way the neighborhood relations in the output space are enforced in the
input space, and the output space topology becomes of crucial importance. Finally
it should be mentioned that the learning step size c as well as the width of the
neighborhood function u are decreased during the learning for the algorithm to
converge to an equilibrium state. A typical choice is an exponential decrease . For
a detailed discussion of the convergence properties of the algorithm, see (Ritter et
al., 1988).
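As a concrete illustration, Eqs. (1)–(3) can be transcribed directly into NumPy. This is our own sketch, not the authors' implementation; the map size, decay schedules and parameter values are assumptions made for the example.

```python
import numpy as np

def som_step(weights, grid, v, eps, sigma):
    """One Kohonen learning step.

    weights: (N, D) array of input-space weight vectors w_j.
    grid:    (N, dA) array of output-space node coordinates.
    v:       (D,) stimulus vector.
    eps:     learning step size (epsilon in Eq. (2)).
    sigma:   neighborhood width (sigma in Eq. (3)).
    """
    # Eq. (1): the winner i minimizes the input-space distance dV(w_j, v).
    i = np.argmin(np.linalg.norm(weights - v, axis=1))
    # Eq. (3): Gaussian neighborhood function on output-space distances dA(j, i).
    dA = np.linalg.norm(grid - grid[i], axis=1)
    h = np.exp(-dA**2 / (2.0 * sigma**2))
    # Eq. (2): shift every weight towards the stimulus, gated by h_{j,i}.
    return weights + eps * h[:, None] * (v - weights)

# Toy usage: a 4x4 output space mapping a 2D input space, with exponentially
# decreasing step size and neighborhood width, as discussed in the text.
rng = np.random.default_rng(0)
grid = np.array([[x, y] for x in range(4) for y in range(4)], dtype=float)
weights = rng.random((16, 2))
for t in range(200):
    weights = som_step(weights, grid, rng.random(2),
                       eps=0.5 * 0.99**t, sigma=2.0 * 0.98**t)
```

Since each update is a convex combination of the old weight and a stimulus from the unit square, the weights stay inside the input domain throughout training.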
2.2 TOPOGRAPHIC PRODUCT
After the learning phase, the topographic product is computed as follows. For each
output space node j, the nearest neighbor orderings in input space and output space
are computed (n_k^A(j) denotes the k-th nearest neighbor of j in A, n_k^V(j) the one in V). Using
these quantities, we define the ratios
Q1(j, k) = dV(w_j, w_{n_k^A(j)}) / dV(w_j, w_{n_k^V(j)})    (4)
Q2(j, k) = dA(j, n_k^A(j)) / dA(j, n_k^V(j))    (5)
One has Q1(j, k) = Q2(j, k) = 1 only if the k-th nearest neighbors in V and A
coincide. Any deviations of the nearest neighbor ordering will result in values for
Q1,2 deviating from 1. However, not all differences in the nearest neighbor orderings
in V and A are necessarily induced by neighborhood violations. Some can be due
to locally varying magnification factors of the map, which in turn are induced by
spatially varying stimulus densities in V. To cancel out the latter effects, we define
the products
P1(j, k) = ( ∏_{l=1}^{k} Q1(j, l) )^{1/k}    (6)
P2(j, k) = ( ∏_{l=1}^{k} Q2(j, l) )^{1/k}    (7)
For these the relations P1(j, k) ≥ 1 and P2(j, k) ≤ 1
hold. Large deviations of P1 (resp. P2) from the value 1 indicate neighborhood
violations, when looking from the output space into the input space (resp. from the
input space into the output space). In order to get a symmetric overall measure,
we further multiply P1 and P2 and find
P3(j, k) = ( ∏_{l=1}^{k} Q1(j, l) · Q2(j, l) )^{1/(2k)}    (8)
Further averaging over all nodes and neighborhood orders finally yields the topographic product
P = 1/(N(N − 1)) · Σ_{j=1}^{N} Σ_{k=1}^{N−1} log(P3(j, k))    (9)
The possible values for P are to be interpreted as follows:
P < 0: output space dimension DA too low,
P = 0: output space dimension DA o.k.,
P > 0: output space dimension DA too high.
These formulas suffice to understand how the product is to be computed. A more
detailed explanation of the rationale behind each individual step of the derivation
can be found in a forthcoming publication (Bauer et al., 1991).
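For concreteness, the computation described by Eqs. (4)–(9) can be sketched in NumPy as follows. This is our own transcription of the procedure above; the names are ours, and the log-space form of Eqs. (6)–(8) is used purely for numerical convenience.

```python
import numpy as np

def topographic_product(weights, grid):
    """Topographic product P of Eq. (9) for a trained map.

    weights: (N, D) input-space weight vectors w_j.
    grid:    (N, dA) output-space node coordinates.
    """
    N = len(weights)
    dV = np.linalg.norm(weights[:, None, :] - weights[None, :, :], axis=2)
    dA = np.linalg.norm(grid[:, None, :] - grid[None, :, :], axis=2)
    total = 0.0
    for j in range(N):
        # Nearest-neighbor orderings n_k^A(j) and n_k^V(j), excluding j itself.
        nA = np.argsort(dA[j]); nA = nA[nA != j]
        nV = np.argsort(dV[j]); nV = nV[nV != j]
        # Eqs. (4)-(5): distance ratios Q1 and Q2 for k = 1, ..., N-1.
        Q1 = dV[j, nA] / dV[j, nV]
        Q2 = dA[j, nA] / dA[j, nV]
        # Eqs. (6)-(8) in log space:
        # log P3(j, k) = (1 / 2k) * sum_{l<=k} log(Q1(j, l) * Q2(j, l)).
        k = np.arange(1, N)
        total += np.sum(np.cumsum(np.log(Q1 * Q2)) / (2.0 * k))
    # Eq. (9): average over all nodes j and all neighborhood orders k.
    return total / (N * (N - 1))

# A map whose weights are proportional to the grid preserves all neighborhood
# orderings, so P should come out as 0.
grid = np.arange(10, dtype=float).reshape(-1, 1)
P = topographic_product(2.0 * grid, grid)
```

Folded maps (output space too long) would drive P below zero, squashed ones above, matching the interpretation table above.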
3 EXAMPLES
We conclude the paper with two examples which exemplify how the method works.
3.1 ILLUSTRATIVE EXAMPLE
The first example deals with the mapping from a 2D input space onto rectangles of
different aspect ratios. The stimulus distribution is flat in one direction, Gaussian
shaped in the other (Fig. 1a). The example demonstrates two aspects of our method
at once. First it shows that the method works fine with maps resulting from nonflat
stimulus distributions. These induce spatially varying areal magnification factors
of the map, which in turn lead to twists in the neighborhood ordering between
input space and output space. Compensation for such twists was the purpose of
the multiplication in Eqs (6) and (7) .
Table 1: Topographic product P for the map from a square input space with a
Gaussian stimulus distribution in one direction, onto rectangles with different aspect
ratios. The values for P are averaged over 8 networks each. The 43x 6-output space
matches the input data best, since its topographic product is smallest.
N        aspect ratio   P
256×1    256            −0.04400
128×2    64             −0.03099
64×4     16             −0.00721
43×6     7.17           0.00127
32×8     4              0.00224
21×12    1.75           0.01335
16×16    1              0.02666
A Topograhic Product for the Optimization of Self-Organizing Feature Maps
Figure 1: Self-organizing feature maps of a Gaussian shaped (a) 2-dimensional
stimulus distribution onto output spaces with 128 x 2 (b), 43 x 6 (c) and 16 x 16
(d) output nodes. The 43 x 6-output space preserves neighborhood relations best.
Secondly the method cannot only be used to optimize the overall output space
dimensionality, but also the individual dimensions in the different directions (i.e.
the different aspect ratios). If the rectangles are too long, the resulting map is
folded like a Peano curve (Fig. 1b), and neighborhood relations are severely violated
perpendicular to the long side of the rectangle. If the aspect ratio fits, the map has
a regular look (Fig. 1c), neighborhoods are preserved. The zig-zag-form at the
outer boundary of the rectangle does not correspond to neighborhood violations.
If the rectangle approaches a square, the output space is somewhat squashed into
the input space, again violating neighborhood relations (Fig. 1d). The topographic
product P coincides with this intuitive evaluation (Tab. 1) and picks the 43 x 6-net
as the most neighborhood preserving one.
3.2 APPLICATION EXAMPLE
In our second example speech data is mapped onto output spaces of various dimensionality. The data represent utterances of the ten German digits, given as
19-dimensional acoustical feature vectors (Gramß et al., 1990). The P-values for
the different maps are given in Tab. 2. For both the speaker-dependent as well as
the speaker-independent case the method distinguishes the maps with DA = 3 as
most neighborhood preserving. Several points are interesting about these results.
First of all, the suggested output space dimensionality exceeds the widely used
DA = 2. Secondly, the method does not generally judge larger output space dimensions as more neighborhood preserving, but puts an upper bound on DA. The
data seems to occupy a submanifold of the input space which is distinctly lower
than four dimensional. Furthermore we see that the transition from one to several
speakers does not change the value of DA which is optimal under neighborhood
considerations. This contradicts the expectation that the additional interspeaker
variance in the data occupies a full additional dimension.
Table 2: Topographic product P for maps from speech feature vectors in a 19D
input space onto output spaces of different dimensionality DA.
DA   N          P (speaker-dependent)   P (speaker-independent)
1    256        −0.156                  −0.229
2    16×16      −0.028                  −0.036
3    7×6×6      0.019                   0.007
4    4×4×4×4    0.037                   0.034
What do these results mean for speech recognition? Let us suppose that several
utterances of the same word lead to closeby feature vector sequences in the input
space. If the mapping was not neighborhood preserving, one should expect the trajectories in the output space to be separated considerably. If a speech recognition
system compares these output space trajectories with reference trajectories corresponding to reference utterances of the words, the probability of misclassification
rises. So one should expect that a word recognition system with a Kohonen-map
preprocessor and a subsequent trajectory classifier should perform better if the
neighborhoods in the map are preserved.
The results of a recent speech recognition experiment coincide with these heuristic
expectations (Brandt et al., 1991). The experiment was based on the same data
set, made use of a Kohonen feature map as a preprocessor, and of a dynamic timewarping algorithm as a sequence classifier. The recognition performance of this
hybrid system turned out to be better by about 7% for a 3D map, compared to a
2D map with a comparable number of nodes (0.795 vs. 0.725 recognition rate).
Acknowledgements
This work was supported by the Deutsche Forschungsgemeinschaft through SFB
185 "Nichtlineare Dynamik", TP A10.
References
H.-U. Bauer, K. Pawelzik, Quantifying the Neighborhood Preservation of Self-Organizing Feature Maps, submitted to IEEE TNN (1991).
W.D. Brandt, H. Behme, H.W. Strube, Bildung von Merkmalen zur Spracherkennung mittels Phonotopischer Karten, Fortschritte der Akustik - Proc. of DAGA 91
(DPG GmbH, Bad Honnef), 1057 (1991).
T. Gramß, H.W. Strube, Recognition of Isolated Words Based on Psychoacoustics
and Neurobiology, Speech Comm. 9, 35 (1990).
J .A. Kangas, T.K. Kohonen, J .T. Laaksonen, Variants of Self-Organizing Maps,
IEEE Trans. Neur. Net. 1,93 (1990).
T. Kohonen, Self-Organization and Associative Memory, 3rd Ed., Springer (1989).
W. Liebert, K. Pawelzik, H.G. Schuster, Optimal Embeddings of Chaotic Attractors
from Topological Considerations, Europhysics Lett. 14,521 (1991).
T. Martinetz, H. Ritter, K. Schulten, Three-Dimensional Neural Net for Learning
Visuomotor Coordination of a Robot Arm, IEEE Trans. Neur. Net. 1, 131 (1990).
T. Martinetz, K. Schulten, A "Neural-Gas" Network Learns Topologies, Proc.
ICANN 91 Helsinki, ed. Kohonen et al., North-Holland, 1-397 (1991).
K. Obermayer, H. Ritter, K. Schulten, A Principle for the Formation of the Spatial
Structure of Cortical Feature Maps, Proc. Nat. Acad. Sci. USA 87, 8345 (1990).
H. Ritter, K. Schulten, Convergence Properties of Kohonen's Topology Conserving
Maps: Fluctuations, Stability and Dimension Selection, BioI. Cyb. 60, 59-71
(1988).
H. Ritter, T. Martinetz, K. Schulten, Neuronale Netze, Addison Wesley (1990).
Robust Low Rank Kernel Embeddings of
Multivariate Distributions
Le Song, Bo Dai
College of Computing, Georgia Institute of Technology
[email protected], [email protected]
Abstract
Kernel embedding of distributions has led to many recent advances in machine
learning. However, latent and low rank structures prevalent in real world distributions have rarely been taken into account in this setting. Furthermore, no prior
work in kernel embedding literature has addressed the issue of robust embedding
when the latent and low rank information are misspecified. In this paper, we
propose a hierarchical low rank decomposition of kernels embeddings which can
exploit such low rank structures in data while being robust to model misspecification. We also illustrate with empirical evidence that the estimated low rank
embeddings lead to improved performance in density estimation.
1 Introduction
Many applications of machine learning, ranging from computer vision to computational biology,
require the analysis of large volumes of high-dimensional continuous-valued measurements. Complex statistical features are commonplace, including multi-modality, skewness, and rich dependency
structures. Kernel embedding of distributions is an effective framework to address challenging problems in this regime [1, 2]. Its key idea is to implicitly map distributions into potentially infinite dimensional feature spaces using kernels, such that subsequent comparison and manipulation of these
distributions can be achieved via feature space operations (e.g., inner product, distance, projection
and spectral analysis). This new framework has led to many recent advances in machine learning
such as kernel independence test [3] and kernel belief propagation [4].
However, algorithms designed with kernel embeddings have rarely taken into account latent and
low rank structures prevalent in high dimensional data arising from various applications such as
gene expression analysis. While these information have been extensively exploited in other learning
contexts such as graphical models and collaborative filtering, their use in kernel embeddings remains scarce and challenging. Intuitively, these intrinsically low dimensional structures of the data
should reduce the effect number of parameters in kernel embeddings, and allow us to obtain a better
estimator when facing with high dimensional problems.
As a demonstration of the above intuition, we illustrate the behavior of low rank kernel embeddings
(which we will explain later in more details) when applied to density estimation (Figure 1). 100 data
points are sampled i.i.d. from a mixture of 2 spherical Gaussians, where the latent variable is the
cluster indicator. The fitted density based on an ordinary kernel density estimator has quite different
contours from the ground truth (Figure 1(b)), while those provided by low rank embeddings appear
to be much closer to the ground truth ((Figure 1(c)). Essentially, the low rank approximation step
endows kernel embeddings with an additional mechanism to smooth the estimator which can be
beneficial when the number of data points is small and there are clusters in the data. In our later
more systematic experiments, we show that low rank embeddings can lead to density estimators
which can significantly improve over alternative approaches in terms of held-out likelihood.
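For reference, the ordinary KDE baseline in such comparisons takes only a few lines; the low rank variant depends on the hierarchical decomposition introduced later, so only the baseline is sketched here. The mixture parameters and bandwidth below are our own choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
# 100 samples from a mixture of 2 spherical Gaussians with equal weights.
means = np.array([[-1.5, 0.0], [1.5, 0.0]])
X = means[rng.integers(0, 2, size=100)] + 0.5 * rng.normal(size=(100, 2))

def kde(query, X, sigma=0.3):
    """Ordinary Gaussian kernel density estimate at the query points."""
    d2 = np.sum((query[:, None, :] - X[None, :, :]) ** 2, axis=2)
    return np.mean(np.exp(-d2 / (2.0 * sigma**2)), axis=1) / (2.0 * np.pi * sigma**2)

# The estimate is high near a component mean and essentially zero in the tails.
p = kde(np.array([[-1.5, 0.0], [6.0, 6.0]]), X)
```

With few samples and clustered data, this estimator is exactly the one that the low rank smoothing step is designed to improve upon.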
While there are a handful of exceptions [5, 6] in the kernel embedding literature which have exploited latent and low rank information, these algorithms are not robust in the sense that, when such
information are misspecification, no performance guarantee can be provided and these algorithms
can fail drastically. The hierarchical low rank kernel embeddings we proposed in this paper can be
(a) Ground truth
(b) Ordinary KDE
(c) Low Rank KDE
Figure 1: We draw 100 samples from a mixture of 2 spherical Gaussians with equal mixing weights.
(a) the contour plot for the ground truth density, (b) for ordinary kernel density estimator (KDE), (c)
for low rank KDE. We used cross-validation to find the best kernel bandwidth for both the KDE and
low rank KDE. The latter produces a density which is visibly closer to the ground truth, and in term
of the integrated square error, it is smaller than the KDE (0.0092 vs. 0.012).
considered as a kernel generalization of the discrete valued tree-structured latent variable models
studied in [7]. The objective of the current paper is to address previous limitations of kernel embeddings as applied to graphical models and make them more practically useful. Furthermore, we will
provide both theoretical and empirical support to the new approach.
Another key contribution of the paper is a novel view of kernel embedding of multivariate distributions as infinite dimensional higher order tensors, and the low rank structure of these tensors in
the presence of latent variables. This novel view allows us to introduce modern multi-linear algebra and tensor decomposition tools to address challenging problems in the interface between kernel
methods and latent variable models. We believe our work will play a synergistic role in bridging together largely separate areas in machine learning research, including kernel methods, latent variable
models, and tensor data analysis.
In the remainder of the paper, we will first present the tensor view of kernel embeddings of multivariate distributions and its low rank structure in the presence of latent variables. Then we will
present our algorithm for hierarchical low rank decomposition of kernel embeddings by making use
of a sequence of nested kernel singular value decompositions. Last, we will provide both theoretical
and empirical support to our proposed approach.
2 Kernel Embeddings of Distributions
We will focus on continuous domains, and denote by X a random variable with domain Ω and density
p(X). The instantiations of X are denoted by the lower case character x. A reproducing kernel Hilbert
space (RKHS) F on Ω with a kernel k(x, x′) is a Hilbert space of functions f : Ω → R with inner
product ⟨·, ·⟩_F. Its element k(x, ·) satisfies the reproducing property ⟨f(·), k(x, ·)⟩_F = f(x), and
consequently ⟨k(x, ·), k(x′, ·)⟩_F = k(x, x′), meaning that we can view the evaluation of a function
f at any point x ∈ Ω as an inner product. Alternatively, k(x, ·) can be viewed as an implicit feature
map φ(x) where k(x, x′) = ⟨φ(x), φ(x′)⟩_F. For simplicity of notation, we assume that the domains
of all variables are the same and that the same kernel function is applied to all variables.

A kernel embedding represents a density by its expected features, i.e., μ_X := E_X[φ(X)] = ∫_Ω φ(x) p(x) dx, a point in the potentially infinite-dimensional and implicit feature space of a kernel [8, 1, 2]. The embedding μ_X has the property that the expectation of any RKHS function f ∈ F can be evaluated as an inner product in F, ⟨μ_X, f⟩_F := E_X[f(X)]. Kernel
embeddings can be readily generalized to the joint density of d variables, X_1, . . . , X_d, using a dth order tensor product feature space F^d. In this feature space, the feature map is defined as
⊗_{i=1}^d φ(x_i) := φ(x_1) ⊗ φ(x_2) ⊗ . . . ⊗ φ(x_d), and the inner product in this space satisfies
⟨⊗_{i=1}^d φ(x_i), ⊗_{i=1}^d φ(x′_i)⟩_{F^d} = ∏_{i=1}^d ⟨φ(x_i), φ(x′_i)⟩_F = ∏_{i=1}^d k(x_i, x′_i). Then we can embed a
joint density p(X_1, . . . , X_d) into the tensor product feature space F^d by

C_{X_{1:d}} := E_{X_{1:d}}[ ⊗_{i=1}^d φ(X_i) ] = ∫_{Ω^d} ( ⊗_{i=1}^d φ(x_i) ) p(x_1, . . . , x_d) ∏_{i=1}^d dx_i,    (1)

where we used X_{1:d} to denote the set of variables {X_1, . . . , X_d}.
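In practice μ_X is estimated from a sample {x_1, . . . , x_n} by the empirical average μ̂_X = (1/n) Σ_{i=1}^n φ(x_i), so inner products with μ̂_X become averages of kernel evaluations. A minimal sketch of this estimator follows; the RBF kernel, bandwidth and test function f = k(x_0, ·) are our own choices for the example.

```python
import numpy as np

def rbf(X, y, sigma=1.0):
    """Gaussian RBF kernel k(x, y) = exp(-||x - y||^2 / (2 sigma^2)),
    evaluated for every row x of X against a single point y."""
    return np.exp(-np.sum((X - y) ** 2, axis=-1) / (2.0 * sigma**2))

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 2))      # i.i.d. samples from p(X) = N(0, I)

# f = k(x0, .) is an RKHS element, so E_X[f(X)] = <mu_X, f>_F.  With the
# empirical embedding this becomes a sample average of kernel values.
x0 = np.zeros(2)
estimate = np.mean(rbf(X, x0))     # <mu_hat_X, k(x0, .)>_F

# For X ~ N(0, I_2) and sigma = 1 the exact expectation is
# (sigma^2 / (sigma^2 + 1))^(d/2) = 1/2, so the estimate should be close.
```

The same averaging trick underlies the empirical versions of the higher order embeddings C_{X_{1:d}} discussed below.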
The kernel embeddings can also be generalized to conditional densities p(X|z) [9]:

μ_{X|z} := E_{X|z}[φ(X)] = ∫_Ω φ(x) p(x|z) dx.    (2)

Given this embedding, the conditional expectation of a function f ∈ F can be computed as
E_{X|z}[f(X)] = ⟨f, μ_{X|z}⟩_F. Unlike the ordinary embeddings, an embedding of a conditional distribution is not a single element in the RKHS, but will instead sweep out a family of points in the
RKHS, each indexed by a fixed value z of the conditioning variable Z. It is only by fixing Z to a
particular value z that we obtain a single RKHS element, μ_{X|z} ∈ F. In other words, a
conditional embedding is an operator, denoted C_{X|Z}, which can take as input a z and output an
embedding, i.e., μ_{X|z} = C_{X|Z} φ(z). Likewise, kernel embeddings of conditional distributions can
also be generalized to joint distributions of d variables.
We will represent an observation from a discrete variable Z taking r possible value using the standard basis in Rr (or one-of-r representation). That is when z takes the i-th value, the i-th dimension
of vector z is 1 and other dimensions 0. For instance, when r = 3, Z can take three possible value (1, 0, 0)> , (0, 1, 0)> and (0, 0, 1)> . In this case, we let ?(Z) = Z and use the linear kernel
k(Z, Z 0 ) = Z > Z. Then, the conditional embedding operator reduces to a separate embedding ?X|z
for each conditional density p(X|z). Conceptually, we can concatenate these ?X|z for different value of z in columns CX|Z := (?X|z=(1,0,0)> , ?X|z=(0,1,0)> , ?X|z=(0,0,1)> ). The operation CX|Z ?(z)
essentially picks up the corresponding embedding (or column).
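The column-picking view can be made concrete in a finite-dimensional toy case (our construction, not the paper's). With $\phi(x) = x$ the conditional embedding is simply the per-state conditional mean, and the operator is the matrix of those means:

```python
import numpy as np

# Toy finite-dimensional case: phi(x) = x, so mu_{X|z} = E[X | z], and with
# one-of-r encoded Z and a linear kernel on Z, the conditional embedding
# operator C_{X|Z} is the matrix whose columns are the per-state means.
rng = np.random.default_rng(1)
means = np.array([[-2.0, 0.0], [0.0, 3.0], [2.0, -1.0]]).T  # 2 x 3, one column per state
z_idx = rng.integers(0, 3, size=6000)
X = means[:, z_idx].T + 0.1 * rng.normal(size=(6000, 2))

# Estimate C_{X|Z}: column j is the empirical mean of X over samples with Z = j.
C = np.stack([X[z_idx == j].mean(axis=0) for j in range(3)], axis=1)  # 2 x 3

z = np.array([0.0, 1.0, 0.0])   # one-of-r encoding of the second state
mu_given_z = C @ z              # picks out column 1, i.e. an estimate of E[X | z]
```

Here `C @ z` recovers (up to sampling noise) the second column of `means`, exactly the "pick the corresponding column" behavior described above.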
3  Kernel Embeddings as Infinite Dimensional Higher Order Tensors

The kernel embedding $\mathcal{C}_{X_{1:d}}$ above can also be viewed as a multi-linear operator (tensor) of order $d$ mapping from $\mathcal{F}\times\cdots\times\mathcal{F}$ to $\mathbb{R}$. (For a generic introduction to tensors and tensor notation, please see [10].) The operator is linear in each argument (mode) when the other arguments are fixed. Furthermore, the application of the operator to a set of elements $\{f_i \in \mathcal{F}\}_{i=1}^d$ can be defined using the inner product from the tensor product feature space, i.e.,

$$\mathcal{C}_{X_{1:d}} \bullet_1 f_1 \bullet_2 \cdots \bullet_d f_d := \left\langle \mathcal{C}_{X_{1:d}},\; \otimes_{i=1}^d f_i \right\rangle_{\mathcal{F}^d} = \mathbb{E}_{X_{1:d}}\!\left[\prod_{i=1}^d \langle \phi(X_i), f_i\rangle_{\mathcal{F}}\right], \qquad (3)$$

where $\bullet_i$ means applying $f_i$ to the $i$-th argument of $\mathcal{C}_{X_{1:d}}$. Furthermore, we can define the generalized Frobenius norm $\|\cdot\|_\star$ of $\mathcal{C}_{X_{1:d}}$ as

$$\|\mathcal{C}_{X_{1:d}}\|_\star^2 = \sum_{i_1=1}^{\infty}\cdots\sum_{i_d=1}^{\infty} \left(\mathcal{C}_{X_{1:d}} \bullet_1 e_{i_1} \bullet_2 \cdots \bullet_d e_{i_d}\right)^2$$

using an orthonormal basis $\{e_i\}_{i=1}^{\infty} \subset \mathcal{F}$. We can also define the inner product for the space of such operators with $\|\mathcal{C}_{X_{1:d}}\|_\star < \infty$ as

$$\left\langle \mathcal{C}_{X_{1:d}}, \widetilde{\mathcal{C}}_{X_{1:d}} \right\rangle_\star = \sum_{i_1=1}^{\infty}\cdots\sum_{i_d=1}^{\infty} \left(\mathcal{C}_{X_{1:d}} \bullet_1 e_{i_1} \cdots \bullet_d e_{i_d}\right)\left(\widetilde{\mathcal{C}}_{X_{1:d}} \bullet_1 e_{i_1} \cdots \bullet_d e_{i_d}\right). \qquad (4)$$

When $\mathcal{C}_{X_{1:d}}$ has the form $\mathbb{E}_{X_{1:d}}[\otimes_{i=1}^d \phi(X_i)]$, the above inner product reduces to $\mathbb{E}_{X_{1:d}}[\widetilde{\mathcal{C}}_{X_{1:d}} \bullet_1 \phi(X_1) \bullet_2 \cdots \bullet_d \phi(X_d)]$.

In this paper, the ordering of the tensor modes is not essential, so we simply label them using the corresponding random variables. We can reshape a higher order tensor into a lower order tensor by partitioning its modes into several disjoint groups. For instance, let $I_1 = \{X_1,\ldots,X_s\}$ be the set of modes corresponding to the first $s$ variables and $I_2 = \{X_{s+1},\ldots,X_d\}$. Similarly to the Matlab function, we can obtain a 2nd order tensor by

$$\mathcal{C}_{I_1;I_2} = \mathrm{reshape}\left(\mathcal{C}_{X_{1:d}}, I_1, I_2\right): \mathcal{F}^s \times \mathcal{F}^{d-s} \mapsto \mathbb{R}. \qquad (5)$$

In the reverse direction, we can also reshape a lower order tensor into a higher order one by further partitioning certain modes of the tensor. For instance, we can partition $I_1$ into $I_1' = \{X_1,\ldots,X_t\}$ and $I_1'' = \{X_{t+1},\ldots,X_s\}$, and turn $\mathcal{C}_{I_1;I_2}$ into a 3rd order tensor by

$$\mathcal{C}_{I_1';I_1'';I_2} = \mathrm{reshape}\left(\mathcal{C}_{I_1;I_2}, I_1', I_1'', I_2\right): \mathcal{F}^t \times \mathcal{F}^{s-t} \times \mathcal{F}^{d-s} \mapsto \mathbb{R}. \qquad (6)$$

Note that given an orthonormal basis $\{e_i\}_{i=1}^{\infty} \subset \mathcal{F}$, we can readily obtain an orthonormal basis for, e.g., $\mathcal{F}^t$ as $\{e_{i_1}\otimes\cdots\otimes e_{i_t}\}_{i_1,\ldots,i_t=1}^{\infty}$, and hence define the generalized Frobenius norm for $\mathcal{C}_{I_1;I_2}$ and $\mathcal{C}_{I_1';I_1'';I_2}$. This also implies that the generalized Frobenius norms are the same for all these reshaped tensors, i.e., $\|\mathcal{C}_{X_{1:d}}\|_\star = \|\mathcal{C}_{I_1;I_2}\|_\star = \|\mathcal{C}_{I_1';I_1'';I_2}\|_\star$.
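For finite-dimensional tensors, these reshaping operations and the norm invariance can be checked directly with NumPy (a sketch with made-up dimensions; NumPy's C-order `reshape` plays the role of the Matlab function mentioned above):

```python
import numpy as np

# Finite-dimensional analogue of reshaping C_{X_{1:4}}:
# group the first two modes into I1 and the last two into I2.
rng = np.random.default_rng(2)
C = rng.normal(size=(3, 4, 5, 6))             # order-4 tensor
C_I1_I2 = C.reshape(3 * 4, 5 * 6)             # 2nd order: eq. (5)
C_I1p_I1pp_I2 = C_I1_I2.reshape(3, 4, 5 * 6)  # split I1 back into two modes: eq. (6)

# The generalized Frobenius norm is invariant under reshaping.
norms = [float(np.sqrt((T**2).sum())) for T in (C, C_I1_I2, C_I1p_I1pp_I2)]
```

All three norms agree, because reshaping only regroups the entries without changing them.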
[Figure 2: Three latent variable models with different tree topologies. (a) $X_1 \perp X_2 \mid Z$; (b) $X_{1:2} \perp X_{3:4} \mid Z_{1:2}$; (c) caterpillar tree (hidden Markov model).]
The 2nd order tensor $\mathcal{C}_{I_1;I_2}$ can also be viewed as the cross-covariance operator between the two sets of variables $I_1$ and $I_2$. In this case, we can essentially use the notation and operations for matrices. For instance, we can perform a singular value decomposition $\mathcal{C}_{I_1;I_2} = \sum_{i=1}^{\infty} s_i\,(u_i \otimes v_i)$, where the $s_i \in \mathbb{R}$ are ordered in a nonincreasing manner, and $\{u_i\}_{i=1}^{\infty} \subset \mathcal{F}^s$ and $\{v_i\}_{i=1}^{\infty} \subset \mathcal{F}^{d-s}$ are singular vectors. The rank of $\mathcal{C}_{I_1;I_2}$ is the smallest $r$ such that $s_i = 0$ for $i > r$. In this case, we will also define $U_r = (u_1, u_2, \ldots, u_r)$, $V_r = (v_1, v_2, \ldots, v_r)$ and $S_r = \mathrm{diag}(s_1, s_2, \ldots, s_r)$, and denote the low rank approximation as $\mathcal{C}_{I_1;I_2} = U_r S_r V_r^\top$. Finally, a 1st order tensor $\mathrm{reshape}(\mathcal{C}_{X_{1:d}}, \{X_{1:d}\}, \varnothing)$ is simply a vector, for which we will use vector notation.
4  Low Rank Kernel Embeddings Induced by Latent Variables

In the presence of latent variables, the kernel embedding $\mathcal{C}_{X_{1:d}}$ will be low rank. For example, the two observed variables $X_1$ and $X_2$ in the example in Figure 1 are conditionally independent given the latent cluster indicator variable $Z$. That is, the joint density factorizes as $p(X_1, X_2) = \sum_z p(z)\,p(X_1|z)\,p(X_2|z)$ (see Figure 2(a) for the graphical model). Throughout the paper, we assume that $z$ is discrete and takes $r$ possible values. Then the embedding $\mathcal{C}_{X_1 X_2}$ of $p(X_1, X_2)$ has rank at most $r$. Let $z$ be represented as the standard basis in $\mathbb{R}^r$. Then

$$\mathcal{C}_{X_1 X_2} = \mathbb{E}_Z\!\left[\mathbb{E}_{X_1|Z}[\phi(X_1)] \otimes \mathbb{E}_{X_2|Z}[\phi(X_2)]\right] = \mathcal{C}_{X_1|Z}\; \mathbb{E}_Z[Z \otimes Z]\; \mathcal{C}_{X_2|Z}^\top, \qquad (7)$$

where $\mathbb{E}_Z[Z \otimes Z]$ is an $r \times r$ matrix, hence restricting the rank of $\mathcal{C}_{X_1 X_2}$ to be at most $r$.
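Equation (7) can be checked numerically in a finite-dimensional surrogate (our construction, with made-up conditional embeddings): however large the feature dimension, the factorization caps the rank at $r$.

```python
import numpy as np

# Finite-dimensional check of equation (7): C_{X1 X2} = C_{X1|Z} E[Z Z^T] C_{X2|Z}^T.
# With one-hot Z, E[Z (x) Z] = diag(pi). The joint embedding then has rank at
# most r, no matter how large the feature dimension D is.
rng = np.random.default_rng(3)
r, D = 3, 50
pi = np.array([0.5, 0.3, 0.2])          # p(z)
C_x1_given_z = rng.normal(size=(D, r))  # columns play the role of mu_{X1|z}
C_x2_given_z = rng.normal(size=(D, r))  # columns play the role of mu_{X2|z}

C12 = C_x1_given_z @ np.diag(pi) @ C_x2_given_z.T   # D x D joint embedding

s = np.linalg.svd(C12, compute_uv=False)
numerical_rank = int((s > 1e-8 * s[0]).sum())       # equals r
```

The trailing singular values of `C12` are zero up to floating point, so the numerical rank is exactly $r = 3$ despite the $50 \times 50$ ambient size.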
In our second example, four observed variables are connected via two latent variables $Z_1$ and $Z_2$, each taking $r$ possible values. The conditional independence structure implies that the density $p(X_1, X_2, X_3, X_4)$ factorizes as $\sum_{z_1, z_2} p(X_1|z_1)\,p(X_2|z_1)\,p(z_1,z_2)\,p(X_3|z_2)\,p(X_4|z_2)$ (see Figure 2(b) for the graphical model). Reshaping its kernel embedding $\mathcal{C}_{X_{1:4}}$, we obtain $\mathcal{C}_{X_{1:2};X_{3:4}} = \mathrm{reshape}(\mathcal{C}_{X_{1:4}}, \{X_{1:2}\}, \{X_{3:4}\})$, which factorizes as

$$\mathcal{C}_{X_{1:2};X_{3:4}} = \mathbb{E}_{X_{1:2}|Z_1}[\phi(X_1)\otimes\phi(X_2)]\; \mathbb{E}_{Z_1 Z_2}[Z_1 \otimes Z_2]\; \mathbb{E}_{X_{3:4}|Z_2}[\phi(X_3)\otimes\phi(X_4)]^\top, \qquad (8)$$

where $\mathbb{E}_{Z_1 Z_2}[Z_1 \otimes Z_2]$ is an $r \times r$ matrix. Hence the intrinsic "rank" of the reshaped embedding is only $r$, although the original kernel embedding $\mathcal{C}_{X_{1:4}}$ is a 4th order tensor with infinite dimensions.

In general, for a latent variable model $p(X_1,\ldots,X_d)$ whose conditional independence structure is a tree $\mathcal{T}$, various reshapings of its kernel embedding $\mathcal{C}_{X_{1:d}}$ according to edges in the tree will be low rank. More specifically, each edge in the latent tree corresponds to a pair of latent variables $(Z_s, Z_t)$ (or an observed and a hidden variable $(X_s, Z_t)$), which induces a partition of the observed variables into two groups, $I_1$ and $I_2$. One can imagine splitting the latent tree into two subtrees by cutting the edge: one group of variables resides in the first subtree, and the other group in the second subtree. If we reshape the tensor according to this partitioning, then:

Theorem 1  Assume that all observed variables are leaves in the latent tree structure, and that all latent variables take $r$ possible values. Then $\mathrm{rank}(\mathcal{C}_{I_1;I_2}) \le r$.
Proof  Due to the conditional independence structure induced by the latent tree, $p(X_1,\ldots,X_d) = \sum_{z_s}\sum_{z_t} p(I_1|z_s)\,p(z_s,z_t)\,p(I_2|z_t)$. Then its embedding can be written as

$$\mathcal{C}_{I_1;I_2} = \mathcal{C}_{I_1|Z_s}\; \mathbb{E}_{Z_s Z_t}[Z_s \otimes Z_t]\; \mathcal{C}_{I_2|Z_t}^\top, \qquad (9)$$

where $\mathcal{C}_{I_1|Z_s}$ and $\mathcal{C}_{I_2|Z_t}$ are the conditional embedding operators for $p(I_1|z_s)$ and $p(I_2|z_t)$, respectively. Since $\mathbb{E}_{Z_s Z_t}[Z_s \otimes Z_t]$ is an $r \times r$ matrix, $\mathrm{rank}(\mathcal{C}_{I_1;I_2}) \le r$.

Theorem 1 implies that, given a latent tree model, we obtain a collection of low rank reshapings $\{\mathcal{C}_{I_1;I_2}\}$ of the kernel embedding $\mathcal{C}_{X_{1:d}}$, each corresponding to an edge $(Z_s, Z_t)$ of the tree. We will denote by $\mathcal{H}(\mathcal{T}, r)$ the class of kernel embeddings $\mathcal{C}_{X_{1:d}}$ whose various reshapings according to the latent tree $\mathcal{T}$ have rank at most $r$.¹ We will also write $\mathcal{C}_{X_{1:d}} \in \mathcal{H}(\mathcal{T}, r)$ to indicate such a relation.

In practice, the latent tree model assumption may be misspecified for a joint density $p(X_1,\ldots,X_d)$, and consequently the various reshapings of its kernel embedding $\mathcal{C}_{X_{1:d}}$ are only approximately low rank. In this case, we will instead impose a (potentially misspecified) latent structure $\mathcal{T}$ and a fixed rank $r$ on the data, and obtain an approximate low rank decomposition of the kernel embedding. The goal is to obtain a low rank embedding $\widetilde{\mathcal{C}}_{X_{1:d}} \in \mathcal{H}(\mathcal{T}, r)$, while at the same time ensuring that $\|\widetilde{\mathcal{C}}_{X_{1:d}} - \mathcal{C}_{X_{1:d}}\|_\star$ is small. In the following, we will present such a decomposition algorithm.
5  Low Rank Decomposition of Kernel Embeddings

For simplicity of exposition, we will focus on the case where the latent tree structure $\mathcal{T}$ has a caterpillar shape (Figure 2(c)). This decomposition can be viewed as a kernel generalization of the hierarchical tensor decomposition in [11, 12, 7]. The decomposition proceeds by reshaping the kernel embedding $\mathcal{C}_{X_{1:d}}$ according to the first edge $(Z_1, Z_2)$, resulting in $\mathcal{A}_1 := \mathcal{C}_{X_1;X_{2:d}}$. We then perform a rank $r$ approximation for it, $\mathcal{A}_1 \approx U_r S_r V_r^\top$. This leads to the first intermediate tensor $\mathcal{G}_1 = U_r$, and we reshape $S_r V_r^\top$ and recursively decompose it. We note that Algorithm 1 contains only pseudocode, and is not implementable in practice, since the kernel embedding to decompose can have infinite dimensions. We will design a practical kernel algorithm in the next section.

Algorithm 1  Low Rank Decomposition of Kernel Embeddings
In: a kernel embedding $\mathcal{C}_{X_{1:d}}$, the caterpillar tree $\mathcal{T}$, and the desired rank $r$
Out: a low rank embedding $\widetilde{\mathcal{C}}_{X_{1:d}} \in \mathcal{H}(\mathcal{T}, r)$, as intermediate tensors $\{\mathcal{G}_1,\ldots,\mathcal{G}_d\}$
1: $\mathcal{A}_1 = \mathrm{reshape}(\mathcal{C}_{X_{1:d}}, \{X_1\}, \{X_{2:d}\})$ according to tree $\mathcal{T}$.
2: $\mathcal{A}_1 \approx U_r S_r V_r^\top$, approximating $\mathcal{A}_1$ using its $r$ leading singular vectors.
3: $\mathcal{G}_1 = U_r$, and $\mathcal{B}_1 = S_r V_r^\top$. $\mathcal{G}_1$ can be viewed as a model with two variables, $X_1$ and $Z_1$; and $\mathcal{B}_1$ as a new caterpillar tree model $\mathcal{T}_1$ with variable $X_1$ removed from $\mathcal{T}$.
4: for $j = 2, \ldots, d-1$ do
5:   $\mathcal{A}_j = \mathrm{reshape}(\mathcal{B}_{j-1}, \{Z_{j-1}, X_j\}, \{X_{j+1:d}\})$ according to tree $\mathcal{T}_{j-1}$.
6:   $\mathcal{A}_j \approx U_r S_r V_r^\top$, approximating $\mathcal{A}_j$ using its $r$ leading singular vectors.
7:   $\mathcal{G}_j = \mathrm{reshape}(U_r, \{Z_{j-1}\}, \{X_j\}, \{Z_j\})$, and $\mathcal{B}_j = S_r V_r^\top$. $\mathcal{G}_j$ can be viewed as a model with three variables, $X_j$, $Z_j$ and $Z_{j-1}$; and $\mathcal{B}_j$ as a new caterpillar tree model $\mathcal{T}_j$ with variables $Z_{j-1}$ and $X_j$ removed from $\mathcal{T}_{j-1}$.
8: end for
9: $\mathcal{G}_d = \mathcal{B}_{d-1}$
Once we finish the decomposition, we obtain the low rank representation of the kernel embedding as a set of intermediate tensors $\{\mathcal{G}_1, \ldots, \mathcal{G}_d\}$. In particular, we can think of $\mathcal{G}_1$ as a second order tensor with dimension $\infty \times r$, $\mathcal{G}_d$ as a second order tensor with dimension $r \times \infty$, and $\mathcal{G}_j$ for $2 \le j \le d-1$ as a third order tensor with dimension $r \times \infty \times r$. We can then apply the low rank kernel embedding $\widetilde{\mathcal{C}}_{X_{1:d}}$ to a set of elements $\{f_i \in \mathcal{F}\}_{i=1}^d$ as follows:

$$\widetilde{\mathcal{C}}_{X_{1:d}} \bullet_1 f_1 \bullet_2 \cdots \bullet_d f_d = (\mathcal{G}_1 \bullet_1 f_1)^\top (\mathcal{G}_2 \bullet_2 f_2) \cdots (\mathcal{G}_{d-1} \bullet_2 f_{d-1})(\mathcal{G}_d \bullet_2 f_d).$$

Based on the above decomposition, one can obtain a low rank density estimate by $\widetilde{p}(X_1, \ldots, X_d) = \widetilde{\mathcal{C}}_{X_{1:d}} \bullet_1 \phi(X_1) \bullet_2 \cdots \bullet_d \phi(X_d)$. We can also compute the difference between $\widetilde{\mathcal{C}}_{X_{1:d}}$ and the operator $\mathcal{C}_{X_{1:d}}$ using the generalized Frobenius norm $\|\widetilde{\mathcal{C}}_{X_{1:d}} - \mathcal{C}_{X_{1:d}}\|_\star$.
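A finite-dimensional analogue of Algorithm 1 (a tensor-train style sweep of truncated SVDs; dimensions and data below are made up) illustrates the chain of intermediate tensors and shows that the decomposition is exact when the tensor really lies in $\mathcal{H}(\mathcal{T}, r)$:

```python
import numpy as np

def low_rank_decompose(C, r):
    """Sweep of reshapes and rank-r truncated SVDs, mirroring Algorithm 1
    for a finite-dimensional tensor C (caterpillar structure)."""
    d, dims, cores = C.ndim, C.shape, []
    U, s, Vt = np.linalg.svd(C.reshape(dims[0], -1), full_matrices=False)
    cores.append(U[:, :r])                                    # G_1
    B = (s[:r, None] * Vt[:r]).reshape(r * dims[1], -1)
    for j in range(1, d - 1):
        U, s, Vt = np.linalg.svd(B, full_matrices=False)
        k = min(r, U.shape[1])
        cores.append(U[:, :k].reshape(-1, dims[j], k))        # G_j
        if j < d - 2:
            B = (s[:k, None] * Vt[:k]).reshape(k * dims[j + 1], -1)
        else:
            B = s[:k, None] * Vt[:k]
    cores.append(B)                                           # G_d
    return cores

def reconstruct(cores):
    """Contract the chain (G_1 ... G_d) back into a full tensor."""
    T = cores[0]
    for G in cores[1:-1]:
        T = np.tensordot(T, G, axes=([T.ndim - 1], [0]))
    return np.tensordot(T, cores[-1], axes=([T.ndim - 1], [0]))

# Build a tensor that is exactly in H(T, r) for the caterpillar tree, r = 2.
rng = np.random.default_rng(4)
g1 = rng.normal(size=(5, 2)); g2 = rng.normal(size=(2, 5, 2)); g3 = rng.normal(size=(2, 5))
C = np.einsum('ia,ajb,bk->ijk', g1, g2, g3)

cores = low_rank_decompose(C, r=2)
err = np.linalg.norm(C - reconstruct(cores)) / np.linalg.norm(C)
```

Because `C` has exact rank 2 along every caterpillar reshaping, each truncation is lossless and the relative reconstruction error is at floating-point level.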
6  Kernel Algorithm

In practice, we are only provided with a finite number of samples $\{(x_1^i, \ldots, x_d^i)\}_{i=1}^n$ drawn i.i.d. from $p(X_1, \ldots, X_d)$, and we want to obtain an empirical low rank decomposition of the kernel embedding. In this case, we will perform a low rank decomposition of the empirical kernel embedding $\bar{\mathcal{C}}_{X_{1:d}} = \frac{1}{n}\sum_{i=1}^n \otimes_{j=1}^d \phi(x_j^i)$. Although the empirical kernel embedding still has infinite dimensions, we will show that the decomposition can be carried out using just the kernel matrices. Let us denote the kernel matrix for each dimension $j \in \{1,\ldots,d\}$ of the data by $K_j$; its $(i,i')$-th entry can be computed as $K_j^{ii'} = k(x_j^i, x_j^{i'})$. Alternatively, one can think of implicitly forming the feature matrix $\Phi_j = (\phi(x_j^1), \ldots, \phi(x_j^n))$, and the corresponding kernel matrix is $K_j = \Phi_j^\top \Phi_j$. Furthermore, we denote the tensor feature matrix formed from dimensions $j+1$ to $d$ of the data as $\Psi_j = (\otimes_{j'=j+1}^d \phi(x_{j'}^1), \ldots, \otimes_{j'=j+1}^d \phi(x_{j'}^n))$. The corresponding kernel matrix is $L_j = \Psi_j^\top \Psi_j$, with the $(i,i')$-th entry of $L_j$ defined as $L_j^{ii'} = \prod_{j'=j+1}^d k(x_{j'}^i, x_{j'}^{i'})$.

¹ One can readily generalize this notation to decompositions where different reshapings have different ranks.
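These kernel matrices are straightforward to compute; the entrywise product structure of $L_j$ means they can be accumulated from the last dimension backwards. A small sketch with made-up 1-D data (0-indexed, so `K[j]` holds the Gram matrix of dimension $j+1$ in the paper's numbering):

```python
import numpy as np

# K[j] is the Gram matrix of data dimension j; L[j] multiplies the Gram
# matrices of the later dimensions entrywise, matching
# L_j^{ii'} = prod_{j' > j} k(x_{j'}^i, x_{j'}^{i'}).
rng = np.random.default_rng(7)
n, d = 30, 4
X = rng.normal(size=(n, d))
k = lambda a, b: np.exp(-(a[:, None] - b[None, :])**2 / 2.0)  # Gaussian kernel, 1-D

K = [k(X[:, j], X[:, j]) for j in range(d)]
L = [None] * d
L[d - 1] = np.ones((n, n))            # L_d = 1 1^T (no later dimensions)
for j in range(d - 2, -1, -1):        # L_j = L_{j+1} o K_{j+1}
    L[j] = L[j + 1] * K[j + 1]
```

The backward recursion is exactly the loop in lines 1–4 of Algorithm 3 below, and costs $O(d n^2)$ overall.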
Steps 1–3 in Algorithm 1.  The key building block of the algorithm is a kernel singular value decomposition (Algorithm 2), which we will explain in more detail using the example in step 2 of Algorithm 1. Using the implicitly defined feature matrices, $\mathcal{A}_1$ can be expressed as $\mathcal{A}_1 = \frac{1}{n}\Phi_1\Psi_1^\top$. For the low rank approximation $\mathcal{A}_1 \approx U_r S_r V_r^\top$ using singular value decomposition, the leading $r$ singular vectors $U_r = (u_1, \ldots, u_r)$ will lie in the span of $\Phi_1$, i.e., $U_r = \Phi_1(\beta_1, \ldots, \beta_r)$ where $\beta \in \mathbb{R}^n$. Then we can transform the singular value decomposition problem for an infinite dimensional matrix into a generalized eigenvalue problem involving kernel matrices:

$$\mathcal{A}_1\mathcal{A}_1^\top u = \lambda u \;\Leftrightarrow\; \frac{1}{n^2}\,\Phi_1\Psi_1^\top\Psi_1\Phi_1^\top\Phi_1\beta = \lambda\,\Phi_1\beta \;\Leftrightarrow\; \frac{1}{n^2}\,K_1 L_1 K_1\beta = \lambda\,K_1\beta.$$

Let the Cholesky decomposition of $K_1$ be $R^\top R$; then the generalized eigenvalue problem can be solved by redefining $\widetilde{\beta} = R\beta$ and solving an ordinary eigenvalue problem,

$$\frac{1}{n^2}\,R L_1 R^\top\,\widetilde{\beta} = \lambda\,\widetilde{\beta}, \quad\text{and obtain } \beta = R^{\dagger}\widetilde{\beta}. \qquad (10)$$

The resulting singular vectors satisfy $u_l^\top u_{l'} = \beta_l^\top \Phi_1^\top\Phi_1 \beta_{l'} = \beta_l^\top K_1 \beta_{l'} = \widetilde{\beta}_l^\top\widetilde{\beta}_{l'} = \delta_{ll'}$. Then we can obtain $\mathcal{B}_1 := S_r V_r^\top = U_r^\top \mathcal{A}_1$ by projecting the columns of $\mathcal{A}_1$ onto the singular vectors $U_r$:

$$\mathcal{B}_1 = (\beta_1,\ldots,\beta_r)^\top \Phi_1^\top \cdot \frac{1}{n}\Phi_1\Psi_1^\top = \frac{1}{n}(\beta_1,\ldots,\beta_r)^\top K_1 \Psi_1^\top =: \frac{1}{n}(\theta^1,\ldots,\theta^n)\Psi_1^\top, \qquad (11)$$

where $\theta^i \in \mathbb{R}^r$ can be treated as the reduced $r$-dimensional feature representation for each feature mapped data point $\phi(x_1^i)$. Then we have the first intermediate tensor $\mathcal{G}_1 = U_r = \Phi_1(\beta_1,\ldots,\beta_r) =: \Phi_1(\omega^1,\ldots,\omega^n)^\top$, where $\omega^i \in \mathbb{R}^r$. The kernel singular value decomposition can then be carried out recursively on the reshaped tensor $\mathcal{B}_1$.
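The Cholesky-plus-eigendecomposition route of equation (10) is easy to implement. A NumPy sketch of these steps on made-up data (the small ridge on $K_1$ is our addition, to keep the Cholesky factor well conditioned; with it, $R$ is invertible and $R^{\dagger}$ becomes an ordinary triangular solve):

```python
import numpy as np

def kernel_svd(K, L, r):
    """Steps of the kernel SVD: with K = R^T R (Cholesky), the generalized
    eigenproblem (1/n^2) K L K beta = lam K beta becomes the ordinary symmetric
    eigenproblem (1/n^2) R L R^T btil = lam btil, with beta = R^{-1} btil."""
    n = K.shape[0]
    R = np.linalg.cholesky(K).T                    # upper triangular, K = R^T R
    lam, Btil = np.linalg.eigh(R @ L @ R.T / n**2) # ascending eigenvalues
    Btil = Btil[:, ::-1][:, :r]                    # keep the r leading eigenvectors
    return np.linalg.solve(R, Btil)                # beta, an n x r matrix

rng = np.random.default_rng(5)
x, y = rng.normal(size=(80, 1)), rng.normal(size=(80, 1))
g = lambda a, b: np.exp(-(a - b.T)**2 / 2.0)       # Gaussian kernel, 1-D inputs
K1 = g(x, x) + 1e-6 * np.eye(80)                   # ridge: our numerical safeguard
beta = kernel_svd(K1, g(y, y), r=3)

# The singular vectors u_l = Phi_1 beta_l are orthonormal:
# beta_l^T K1 beta_l' = btil_l^T btil_l' = delta_{ll'}.
gram = beta.T @ K1 @ beta
```

The check `gram ≈ I` is exactly the orthonormality identity $u_l^\top u_{l'} = \beta_l^\top K_1 \beta_{l'} = \delta_{ll'}$ derived above.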
Steps 5–7 in Algorithm 1.  When $j = 2$, we first reshape $\mathcal{B}_1 = S_r V_r^\top$ to obtain $\mathcal{A}_2 = \frac{1}{n}\widetilde{\Phi}_2\Psi_2^\top$, where $\widetilde{\Phi}_2 = (\theta^1 \otimes \phi(x_2^1), \ldots, \theta^n \otimes \phi(x_2^n))$. Then we can carry out a similar singular value decomposition as before, and obtain $U_r = \widetilde{\Phi}_2(\beta_1,\ldots,\beta_r) =: \widetilde{\Phi}_2(\omega^1,\ldots,\omega^n)^\top$. Then we have the second operator $\mathcal{G}_2 = \sum_{i=1}^n \theta^i \otimes \phi(x_2^i) \otimes \omega^i$. Last, we define $\mathcal{B}_2 := S_r V_r^\top = U_r^\top \mathcal{A}_2$ as

$$\mathcal{B}_2 = \frac{1}{n}(\beta_1,\ldots,\beta_r)^\top \widetilde{\Phi}_2^\top\widetilde{\Phi}_2\Psi_2^\top = \frac{1}{n}(\beta_1,\ldots,\beta_r)^\top(\Theta \circ K_2)\Psi_2^\top =: \frac{1}{n}(\theta^1,\ldots,\theta^n)\Psi_2^\top, \qquad (12)$$

where $\Theta$ is the $n \times n$ matrix with entries $\Theta^{ii'} = \langle\theta^i, \theta^{i'}\rangle$ and $\circ$ denotes the elementwise (Hadamard) product, so that $\widetilde{\Phi}_2^\top\widetilde{\Phi}_2 = \Theta \circ K_2$; we then carry out the recursive decomposition further.

The result of the algorithm is an empirical low rank kernel embedding, $\widehat{\mathcal{C}}_{X_{1:d}}$, represented as a collection of intermediate tensors $\{\mathcal{G}_1, \ldots, \mathcal{G}_d\}$. The overall algorithm is summarized in Algorithm 3. More details about the derivation can be found in Appendix A.
The application of the set of intermediate tensors $\{\mathcal{G}_1, \ldots, \mathcal{G}_d\}$ to a set of elements $\{f_i \in \mathcal{F}\}$ can be expressed as kernel operations. For instance, we can obtain a density estimate by

$$\hat{p}(x_1,\ldots,x_d) = \widehat{\mathcal{C}}_{X_{1:d}} \bullet_1 \phi(x_1) \bullet_2 \cdots \bullet_d \phi(x_d) = \sum_{z_1,\ldots,z_{d-1}} g_1(x_1,z_1)\, g_2(z_1,x_2,z_2) \cdots g_d(z_{d-1},x_d),$$

where (see Appendix A for more details)

$$g_1(x_1,z_1) = \mathcal{G}_1 \bullet_1 \phi(x_1) \bullet_2 z_1 = \sum_{i=1}^n (z_1^\top \omega^i)\, k(x_1^i, x_1), \qquad (13)$$

$$g_j(z_{j-1},x_j,z_j) = \mathcal{G}_j \bullet_1 z_{j-1} \bullet_2 \phi(x_j) \bullet_3 z_j = \sum_{i=1}^n (z_{j-1}^\top \theta^i)\, k(x_j^i, x_j)\, (z_j^\top \omega^i), \qquad (14)$$

$$g_d(z_{d-1},x_d) = \mathcal{G}_d \bullet_1 z_{d-1} \bullet_2 \phi(x_d) = \sum_{i=1}^n (z_{d-1}^\top \theta^i)\, k(x_d^i, x_d). \qquad (15)$$

In the above formulas, each term is a weighted combination of kernel functions, where the weighting is determined by the kernel singular value decomposition and the values of the latent variables $\{z_j\}$.
Algorithm 2  KernelSVD($K$, $L$, $r$)
Out: a collection of vectors $(\omega^1, \ldots, \omega^n)$
1: Perform the Cholesky decomposition $K = R^\top R$
2: Solve the eigenvalue problem $\frac{1}{n^2} R L R^\top \widetilde{\beta} = \lambda \widetilde{\beta}$, and keep the $r$ leading eigenvectors $(\widetilde{\beta}_1, \ldots, \widetilde{\beta}_r)$
3: Compute $\beta_1 = R^{\dagger}\widetilde{\beta}_1, \ldots, \beta_r = R^{\dagger}\widetilde{\beta}_r$, and reorganize $(\omega^1, \ldots, \omega^n)^\top = (\beta_1, \ldots, \beta_r)$

Algorithm 3  Kernel Low Rank Decomposition of the Empirical Embedding $\bar{\mathcal{C}}_{X_{1:d}}$
In: a sample $\{(x_1^i, \ldots, x_d^i)\}_{i=1}^n$, desired rank $r$, a query point $(x_1, \ldots, x_d)$
Out: a low rank embedding $\widehat{\mathcal{C}}_{X_{1:d}} \in \mathcal{H}(\mathcal{T}, r)$, as intermediate operators $\{\mathcal{G}_1, \ldots, \mathcal{G}_d\}$
1: $L_d = \mathbf{1}\mathbf{1}^\top$
2: for $j = d, d-1, \ldots, 1$ do
3:   Compute the matrix $K_j$ with $K_j^{ii'} = k(x_j^i, x_j^{i'})$; furthermore, if $j < d$, then $L_j = L_{j+1} \circ K_{j+1}$
4: end for
5: $(\omega^1, \ldots, \omega^n) = \mathrm{KernelSVD}(K_1, L_1, r)$
6: $\mathcal{G}_1 = \Phi_1(\omega^1, \ldots, \omega^n)^\top$, and compute $(\theta^1, \ldots, \theta^n) = (\omega^1, \ldots, \omega^n)K_1$
7: for $j = 2, \ldots, d-1$ do
8:   $\Theta = (\theta^1, \ldots, \theta^n)^\top(\theta^1, \ldots, \theta^n)$, and compute $(\omega^1, \ldots, \omega^n) = \mathrm{KernelSVD}(K_j \circ \Theta, L_j, r)$
9:   $\mathcal{G}_j = \sum_{i=1}^n \theta^i \otimes \phi(x_j^i) \otimes \omega^i$, and compute $(\theta^1, \ldots, \theta^n) = (\omega^1, \ldots, \omega^n)(K_j \circ \Theta)$
10: end for
11: $\mathcal{G}_d = (\theta^1, \ldots, \theta^n)\Phi_d^\top$

7  Performance Guarantees

As we mentioned in the introduction, the latent structure imposed in the low rank decomposition of kernel embeddings may be misspecified, and the decomposition of empirical embeddings may suffer from sampling error. In this section, we provide finite guarantees for Algorithm 3 even when the latent structures are misspecified. More specifically, we will bound, in terms of the generalized Frobenius norm $\|\mathcal{C}_{X_{1:d}} - \widehat{\mathcal{C}}_{X_{1:d}}\|_\star$, the difference between the true kernel embedding and the low rank kernel embedding estimated from a set of $n$ i.i.d. samples $\{(x_1^i, \ldots, x_d^i)\}_{i=1}^n$. First we observe that the difference can be decomposed into two terms:

$$\|\mathcal{C}_{X_{1:d}} - \widehat{\mathcal{C}}_{X_{1:d}}\|_\star \;\le\; \underbrace{\|\mathcal{C}_{X_{1:d}} - \widetilde{\mathcal{C}}_{X_{1:d}}\|_\star}_{E_1:\ \text{model error}} \;+\; \underbrace{\|\widetilde{\mathcal{C}}_{X_{1:d}} - \widehat{\mathcal{C}}_{X_{1:d}}\|_\star}_{E_2:\ \text{estimation error}} \qquad (16)$$

where the first term is due to the fact that the latent structures may be misspecified, while the second term is due to estimation from a finite number of data points. We will bound these two sources of error separately (the proof is deferred to Appendix B).
Theorem 2  Suppose each reshaping $\mathcal{C}_{I_1;I_2}$ of $\mathcal{C}_{X_{1:d}}$ according to an edge in the latent tree structure has a rank $r$ approximation $U_r S_r V_r^\top$ with error $\|\mathcal{C}_{I_1;I_2} - U_r S_r V_r^\top\|_\star \le \epsilon$. Then the low rank decomposition $\widetilde{\mathcal{C}}_{X_{1:d}}$ from Algorithm 1 satisfies $\|\mathcal{C}_{X_{1:d}} - \widetilde{\mathcal{C}}_{X_{1:d}}\|_\star \le \sqrt{d-1}\,\epsilon$.

Although previous works [5, 6] have also used hierarchical decompositions for kernel embeddings, those decompositions make the strong assumption that the latent tree models are correctly specified. When the models are misspecified, these algorithms have no guarantees whatsoever, and may fail drastically, as we show in later experiments. In contrast, the decomposition we propose here is robust in the sense that, even when the latent tree structure is misspecified, we can still provide an approximation guarantee for the algorithm. Furthermore, when the latent tree structure is correctly specified and the rank $r$ is also correct, then $\mathcal{C}_{I_1;I_2}$ has rank $r$, hence $\epsilon = 0$, and our decomposition algorithm does not incur any modeling error.
Next, we provide a bound for the estimation error. The estimation error arises from decomposing the empirical estimate $\bar{\mathcal{C}}_{X_{1:d}}$ of the kernel embedding, and the error can accumulate as we combine the intermediate tensors $\{\mathcal{G}_1, \ldots, \mathcal{G}_d\}$ to form the final low rank kernel embedding. More specifically, we have the following bound (the proof is deferred to Appendix C).

Theorem 3  Suppose the $r$-th singular value of each reshaping $\mathcal{C}_{I_1;I_2}$ of $\mathcal{C}_{X_{1:d}}$ according to an edge in the latent tree structure is lower bounded by $\lambda$. Then, with probability at least $1 - \delta$,

$$\|\widetilde{\mathcal{C}}_{X_{1:d}} - \widehat{\mathcal{C}}_{X_{1:d}}\|_\star \;\le\; \frac{(1+\lambda)^{d-2}}{\lambda^{d-2}}\,\|\mathcal{C}_{X_{1:d}} - \bar{\mathcal{C}}_{X_{1:d}}\|_\star \;\le\; \frac{(1+\lambda)^{d-2}}{\lambda^{d-2}}\cdot\frac{c}{\sqrt{n}},$$

with some constant $c$ associated with the kernel and the probability $\delta$.

From the above theorem, we can see that the smaller the $r$-th singular value, the more difficult it is to estimate the low rank kernel embedding. Although in the bound the error grows exponentially in $1/\lambda^{d-2}$, in our experiments we did not observe such exponential degradation of performance, even in relatively high dimensional datasets.
8  Experiments

Besides the synthetic dataset shown in Figure 1, where low rank kernel embeddings can lead to significant improvement in estimating the density, we also experimented with real-world datasets from the UCI data repository. We take 11 datasets with varying dimensions and numbers of data points, whose attributes are continuous-valued. We whiten the data and compare low rank kernel embeddings (Low Rank) obtained from Algorithm 3 to three other alternatives for continuous density estimation, namely a mixture of Gaussians with full covariance matrices, the ordinary kernel density estimator (KDE), and the kernel spectral algorithm for latent trees (Spectral) [6]. We use the Gaussian kernel $k(x, x') = \frac{1}{\sqrt{2\pi}s}\exp(-\|x - x'\|^2/(2s^2))$ for KDE, Spectral and our method (Low Rank). We split each dataset into 10 subsets, and use nested cross-validation based on held-out likelihood to choose hyperparameters: the kernel parameter $s$ for KDE, Spectral and Low Rank ($\{2^{-3}, 2^{-2}, 2^{-1}, 1, 2, 4, 8\}$ times the median pairwise distance), the rank parameter $r$ for Spectral and Low Rank (ranging from 2 to 30), and the number of components in the Gaussian mixture (ranging from 2 to $\frac{\#\text{Samples}}{30}$). For both Spectral and Low Rank, we use the caterpillar tree in Figure 2(c) as the structure of the latent variable model.
From Table 1, we can see that low rank kernel embeddings provide the best or comparable held-out negative log-likelihood across the datasets we experimented with. In some datasets, low rank kernel embeddings lead to drastic improvements over the alternatives: for instance, in the datasets "sonar" and "yeast", the improvement is dramatic. The Spectral approach sometimes performs even worse. This makes sense, since the caterpillar tree supplied to the algorithm may be far from reality, and Spectral is not robust to model misspecification. The Spectral algorithm also caused numerical problems in practice. In contrast, our method (Low Rank) uses the same latent structure, but achieves much more robust results.
Table 1: Negative log-likelihood on held-out data (the lower the better).

Data Set     # Sample   Dim.   Gaussian mixture   KDE             Spectral        Low rank
australian   690        14     17.97 ± 0.26       18.32 ± 0.64    33.50 ± 2.17    15.88 ± 0.11
bupa         345        6      8.17 ± 0.30        8.36 ± 0.17     25.01 ± 0.66    7.57 ± 0.14
german       1000       24     31.14 ± 0.41       30.57 ± 0.15    28.40 ± 11.64   22.89 ± 0.26
heart        270        13     17.72 ± 0.23       18.23 ± 0.18    21.50 ± 2.39    16.95 ± 0.13
ionosphere   351        34     47.60 ± 1.77       43.53 ± 1.25    54.91 ± 1.35    35.84 ± 1.00
pima         768        8      11.78 ± 0.04       10.38 ± 0.19    31.42 ± 2.40    10.07 ± 0.11
parkinsons   195        22     30.13 ± 0.24       30.65 ± 0.66    33.20 ± 0.70    28.19 ± 0.37
sonar        208        60     107.06 ± 1.36      96.17 ± 0.27    89.26 ± 2.75    57.96 ± 2.67
wpbc         198        33     50.75 ± 1.11       49.48 ± 0.64    48.66 ± 2.56    40.78 ± 0.86
wine         178        13     19.59 ± 0.14       19.56 ± 0.56    19.25 ± 0.58    18.67 ± 0.17
yeast        208        79     146.11 ± 5.36      137.15 ± 1.80   76.58 ± 2.24    72.67 ± 4.05
9  Discussion and Conclusion

In this paper, we presented a robust kernel embedding algorithm which can make use of the low rank structure of the data, and provided both theoretical and empirical support for it. However, there are still a number of issues which deserve further research. First, the algorithm requires a sequence of kernel singular value decompositions, which can be computationally intensive for high dimensional and large datasets; developing efficient algorithms that retain theoretical guarantees will be interesting future research. Second, the statistical analysis could be sharpened: for the moment, the analysis does not seem to suggest that the estimator obtained by our algorithm is better than ordinary KDE. Third, it will be interesting empirical work to explore other applications of low rank kernel embeddings, such as kernel two-sample tests, kernel independence tests, and kernel belief propagation.
References

[1] A. J. Smola, A. Gretton, L. Song, and B. Schölkopf. A Hilbert space embedding for distributions. In Proceedings of the International Conference on Algorithmic Learning Theory, volume 4754, pages 13–31. Springer, 2007.
[2] B. Sriperumbudur, A. Gretton, K. Fukumizu, G. Lanckriet, and B. Schölkopf. Injective Hilbert space embeddings of probability measures. In Proc. Annual Conf. Computational Learning Theory, pages 111–122, 2008.
[3] A. Gretton, K. Fukumizu, C.-H. Teo, L. Song, B. Schölkopf, and A. J. Smola. A kernel statistical test of independence. In Advances in Neural Information Processing Systems 20, pages 585–592, Cambridge, MA, 2008. MIT Press.
[4] L. Song, A. Gretton, D. Bickson, Y. Low, and C. Guestrin. Kernel belief propagation. In Proc. Intl. Conference on Artificial Intelligence and Statistics, volume 10 of JMLR Workshop and Conference Proceedings, 2011.
[5] L. Song, B. Boots, S. Siddiqi, G. Gordon, and A. J. Smola. Hilbert space embeddings of hidden Markov models. In International Conference on Machine Learning, 2010.
[6] L. Song, A. Parikh, and E. P. Xing. Kernel embeddings of latent tree graphical models. In Advances in Neural Information Processing Systems, volume 25, 2011.
[7] L. Song, M. Ishteva, H. Park, A. Parikh, and E. Xing. Hierarchical tensor decomposition of latent tree graphical models. In International Conference on Machine Learning (ICML), 2013.
[8] A. Berlinet and C. Thomas-Agnan. Reproducing Kernel Hilbert Spaces in Probability and Statistics. Kluwer, 2004.
[9] L. Song, J. Huang, A. J. Smola, and K. Fukumizu. Hilbert space embeddings of conditional distributions. In Proceedings of the International Conference on Machine Learning, 2009.
[10] T. G. Kolda and B. W. Bader. Tensor decompositions and applications. SIAM Review, 51(3):455–500, 2009.
[11] L. Grasedyck. Hierarchical singular value decomposition of tensors. SIAM Journal on Matrix Analysis and Applications, 31(4):2029–2054, 2010.
[12] I. Oseledets. Tensor-train decomposition. SIAM Journal on Scientific Computing, 33(5):2295–2317, 2011.
[13] L. Rosasco, M. Belkin, and E. De Vito. On learning with integral operators. Journal of Machine Learning Research, 11:905–934, 2010.
Matthew Blaschko
Arthur Gretton
Wojciech Zaremba
?
Gatsby Unit
Center for Visual Computing
Equipe
GALEN
?
University College London
Inria Saclay
Ecole
Centrale Paris
United Kingdom
Ch?atenay-Malabry, France
Ch?atenay-Malabry, France
{woj.zaremba,arthur.gretton}@gmail.com, [email protected]
Abstract
A family of maximum mean discrepancy (MMD) kernel two-sample tests is introduced. Members of the test family are called Block-tests or B-tests, since the test
statistic is an average over MMDs computed on subsets of the samples. The choice
of block size allows control over the tradeoff between test power and computation
time. In this respect, the B-test family combines favorable properties of previously proposed MMD two-sample tests: B-tests are more powerful than a linear
time test where blocks are just pairs of samples, yet they are more computationally efficient than a quadratic time test where a single large block incorporating all
the samples is used to compute a U-statistic. A further important advantage of the
B-tests is their asymptotically Normal null distribution: this is by contrast with
the U-statistic, which is degenerate under the null hypothesis, and for which estimates of the null distribution are computationally demanding. Recent results on
kernel selection for hypothesis testing transfer seamlessly to the B-tests, yielding
a means to optimize test power via kernel choice.
1 Introduction
Given two samples {x_i}_{i=1}^n, where x_i ∼ P i.i.d., and {y_i}_{i=1}^n, where y_i ∼ Q i.i.d., the two-sample problem consists in testing whether to accept or reject the null hypothesis H0 that P = Q, vs. the alternative hypothesis HA that P and Q are different. This problem has recently been addressed using measures of similarity computed in a reproducing kernel Hilbert space (RKHS), which apply in very general settings where P and Q might be distributions over high dimensional data or structured objects. Kernel test statistics include the maximum mean discrepancy [10, 6] (of which the energy distance is an example [18, 2, 22]), which is the distance between expected features of P and Q in the RKHS; the kernel Fisher discriminant [12], which is the distance between expected feature maps normalized by the feature space covariance; and density ratio estimates [24]. When used in testing, it is necessary to determine whether the empirical estimate of the relevant similarity measure is sufficiently large as to give the hypothesis P = Q low probability; i.e., below a user-defined threshold α, denoted the test level. The test power denotes the probability of correctly rejecting the null hypothesis, given that P ≠ Q.
The minimum variance unbiased estimator MMDu of the maximum mean discrepancy, on the basis of n samples observed from each of P and Q, is a U-statistic, costing O(n²) to compute. Unfortunately, this statistic is degenerate under the null hypothesis H0 that P = Q, and its asymptotic distribution takes the form of an infinite weighted sum of independent χ² variables (it is asymptotically Gaussian under the alternative hypothesis HA that P ≠ Q). Two methods for empirically estimating the null distribution in a consistent way have been proposed: the bootstrap [10], and a method requiring an eigendecomposition of the kernel matrices computed on the merged samples from P and Q [7]. Unfortunately, both procedures are computationally demanding: the former costs O(n²), with a large constant (the MMD must be computed repeatedly over random assignments of the pooled data); the latter costs O(n³), but with a smaller constant, hence can in practice be
faster than the bootstrap. Another approach is to approximate the null distribution by a member
of a simpler parametric family (for instance, a Pearson curve approximation), however this has no
consistency guarantees.
More recently, an O(n) unbiased estimate MMDl of the maximum mean discrepancy has been proposed [10, Section 6], which is simply a running average over independent pairs of samples from P
and Q. While this has much greater variance than the U-statistic, it also has a simpler null distribution: being an average over i.i.d. terms, the central limit theorem gives an asymptotically Normal
distribution, under both H0 and HA . It is shown in [9] that this simple asymptotic distribution makes
it easy to optimize the Hodges and Lehmann asymptotic relative efficiency [19] over the family of
kernels that define the statistic: in other words, to choose the kernel which gives the lowest Type II
error (probability of wrongly accepting H0 ) for a given Type I error (probability of wrongly rejecting H0 ). Kernel selection for the U-statistic is a much harder question due to the complex form of
the null distribution, and remains an open problem.
It appears that MMDu and MMDl fall at two extremes of a spectrum: the former has the lowest
variance of any n-sample estimator, and should be used in limited data regimes; the latter is the
estimator requiring the least computation while still looking at each of the samples, and usually
achieves better Type II error than MMDu at a given computational cost, albeit by looking at much
more data (the ?limited time, unlimited data? scenario). A major reason MMDl is faster is that its
null distribution is straightforward to compute, since it is Gaussian and its variance can be calculated
at the same cost as the test statistic. A reasonable next step would be to find a compromise between
these two extremes: to construct a statistic with a lower variance than MMDl , while retaining an
asymptotically Gaussian null distribution (hence remaining faster than tests based on MMDu ). We
study a family of such test statistics, where we split the data into blocks of size B, compute the
quadratic-time MMDu on each block, and then average the resulting statistics. We call the resulting
tests B-tests. As long as we choose the size B of blocks such that n/B → ∞, we are still guaranteed
asymptotic Normality by the central limit theorem, and the null distribution can be computed at the
same cost as the test statistic. For a given sample size n, however, the power of the test can increase
dramatically over the MMDl test, even for moderate block sizes B, making much better use of the
available data with only a small increase in computation.
The block averaging scheme was originally proposed in [13], as an instance of a two-stage U-statistic, to be applied when the degree of degeneracy of the U-statistic is indeterminate. Differences
with respect to our method are that Ho and Shieh compute the block statistics by sampling with
replacement [13, (b) p. 863], and propose to obtain the variance of the test statistic via Monte
Carlo, jackknife, or bootstrap techniques, whereas we use closed form expressions. Ho and Shieh
further suggest an alternative two-stage U-statistic in the event that the degree of degeneracy is
known; we return to this point in the discussion. While we confine ourselves to the MMD in this
paper, we emphasize that the block approach applies to a much broader variety of test situations
where the null distribution cannot easily be computed, including the energy distance and distance
covariance [18, 2, 22] and Fisher statistic [12] in the case of two-sample testing, and the HilbertSchmidt Independence Criterion [8] and distance covariance [23] for independence testing. Finally,
the kernel learning approach of [9] applies straightforwardly, allowing us to maximize test power
over a given kernel family. Code is available at http://github.com/wojzaremba/btest.
2 Theory
In this section we describe the mathematical foundations of the B-test. We begin with a brief review
of kernel methods, and of the maximum mean discrepancy. We then present our block-based average
MMD statistic, and derive its distribution under the H0 (P = Q) and HA (P ≠ Q) hypotheses. The
central idea employed in the construction of the B-test is to generate a low variance MMD estimate
by averaging multiple low variance kernel statistics computed over blocks of samples. We show
simple sufficient conditions on the block size for consistency of the estimator. Furthermore, we
analyze the properties of the finite sample estimate, and propose a consistent strategy for setting the
block size as a function of the number of samples.
2.1 Definition and asymptotics of the block-MMD
Let F_k be an RKHS defined on a topological space X with reproducing kernel k, and P a Borel probability measure on X. The mean embedding of P in F_k, written μ_k(P) ∈ F_k, is defined such
[Figure 1 appears here: histograms of the empirical H0 and HA distributions, with the approximated 5% quantile of H0 marked. Panel (a): B = 2, the setting corresponding to the MMDl statistic [10]. Panel (b): B = 250.]
Figure 1: Empirical distributions under H0 and HA for different regimes of B for the music experiment (Section 3.2). In both plots, the number of samples is fixed at 500. As we vary B, we trade off the quality of the finite sample Gaussian approximation to the null distribution, as in Theorem 2.3, with the variances of the H0 and HA distributions, as outlined in Section 2.1. In (b) the distribution under H0 does not resemble a Gaussian (it does not pass a level 0.05 Kolmogorov-Smirnov (KS) normality test [16, 20]), and a Gaussian approximation results in a conservative test threshold (vertical green line). The remaining empirical distributions all pass a KS normality test.
that E_{x∼P} f(x) = ⟨f, μ_k(P)⟩_{F_k} for all f ∈ F_k, and exists for all Borel probability measures when k is bounded and continuous [3, 10]. The maximum mean discrepancy (MMD) between a Borel probability measure P and a second Borel probability measure Q is the squared RKHS distance between their respective mean embeddings,

$$\eta_k(P,Q) = \|\mu_k(P) - \mu_k(Q)\|_{\mathcal{F}_k}^2 = \mathbb{E}_{xx'}\,k(x,x') + \mathbb{E}_{yy'}\,k(y,y') - 2\,\mathbb{E}_{xy}\,k(x,y), \qquad (1)$$

where x' denotes an independent copy of x [11]. Introducing the notation z = (x, y), we write

$$\eta_k(P,Q) = \mathbb{E}_{zz'}\,h(z,z'), \qquad h(z,z') = k(x,x') + k(y,y') - k(x,y') - k(x',y). \qquad (2)$$

When the kernel k is characteristic, then η_k(P, Q) = 0 iff P = Q [21]. Clearly, the minimum variance unbiased estimate MMDu of η_k(P, Q) is a U-statistic.
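The function h in Equation (2) is straightforward to evaluate pointwise. A minimal sketch in Python (our own illustration, not the authors' released code; the Gaussian kernel and the function names are our choices):

```python
import numpy as np

def gaussian_kernel(a, b, sigma=1.0):
    """k(a, b) = exp(-||a - b||^2 / (2 sigma^2))."""
    d = np.asarray(a, dtype=float) - np.asarray(b, dtype=float)
    return float(np.exp(-np.dot(d, d) / (2.0 * sigma ** 2)))

def h(z, z_prime, k=gaussian_kernel):
    """h(z, z') of Equation (2), where z = (x, y) and z' = (x', y')."""
    x, y = z
    xp, yp = z_prime
    return k(x, xp) + k(y, yp) - k(x, yp) - k(xp, y)
```

Note that h is symmetric in its two arguments for any symmetric kernel, and for z = z' it reduces to k(x,x) + k(y,y) − 2k(x,y) ≥ 0.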
By analogy with MMDu, we make use of averages of h(x, y, x', y') to construct our two-sample test. We denote by η̂_k(i) the ith empirical estimate MMDu based on a subsample of size B, where 1 ≤ i ≤ n/B (for notational purposes, we will index samples as though they are presented in a random fixed order). More precisely,

$$\hat\eta_k(i) = \frac{1}{B(B-1)} \sum_{a=(i-1)B+1}^{iB} \;\sum_{\substack{b=(i-1)B+1 \\ b \neq a}}^{iB} h(z_a, z_b). \qquad (3)$$
The B-test statistic is an MMD estimate obtained by averaging the η̂_k(i). Each η̂_k(i) under H0 converges to an infinite sum of weighted χ² variables [7]. Although setting B = n would lead to the lowest variance estimate of the MMD, computing sound thresholds for a given p-value is expensive, involving repeated bootstrap sampling [5, 14], or computing the eigenvalues of a Gram matrix [7]. In contrast, we note that η̂_k(i), i = 1, ..., n/B, are i.i.d. variables, and averaging them allows us to apply the central limit theorem in order to estimate p-values from a normal distribution. We denote the average of the η̂_k(i) by η̂_k,

$$\hat\eta_k = \frac{B}{n} \sum_{i=1}^{n/B} \hat\eta_k(i). \qquad (4)$$
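Equations (3) and (4) translate directly into code. The sketch below (our own illustration with a Gaussian kernel; none of these names come from the paper's released implementation) computes the per-block estimates and their average η̂_k:

```python
import numpy as np

def _gram(A, C, sigma):
    """Gaussian kernel Gram matrix between the rows of A and C."""
    d2 = ((A[:, None, :] - C[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def block_mmd_u(X, Y, sigma=1.0):
    """Unbiased MMD estimate of Equation (3) on one block of paired samples."""
    B = X.shape[0]
    Kxx, Kyy, Kxy = _gram(X, X, sigma), _gram(Y, Y, sigma), _gram(X, Y, sigma)
    # Sum of h(z_a, z_b) over all a != b, normalized by B(B - 1).
    s = (Kxx.sum() - np.trace(Kxx)) + (Kyy.sum() - np.trace(Kyy)) \
        - 2.0 * (Kxy.sum() - np.trace(Kxy))
    return s / (B * (B - 1))

def b_test_statistic(X, Y, B, sigma=1.0):
    """Block-averaged statistic of Equation (4); also returns the block values."""
    m = X.shape[0] // B  # number of blocks, n / B
    stats = np.array([block_mmd_u(X[i*B:(i+1)*B], Y[i*B:(i+1)*B], sigma)
                      for i in range(m)])
    return float(stats.mean()), stats
```

The per-block values are i.i.d., so their empirical standard deviation also yields an estimate of the standard error of η̂_k at no extra cost.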
We would like to apply the central limit theorem to the variables η̂_k(i), i = 1, ..., n/B. It remains for us to derive the distribution of η̂_k under H0 and under HA. We rely on the result from [11, Theorem 8] for HA. According to our notation, for every i,

Theorem 2.1 Assume 0 < E(h²) < ∞; then under HA, η̂_k converges in distribution to a Gaussian according to

$$B^{\frac12}\big(\hat\eta_k(i) - \mathrm{MMD}^2\big) \xrightarrow{D} \mathcal{N}(0, \sigma_u^2), \qquad (5)$$

where $\sigma_u^2 = 4\left(\mathbb{E}_z\big[(\mathbb{E}_{z'} h(z,z'))^2\big] - \big[\mathbb{E}_{z,z'} h(z,z')\big]^2\right)$.

This in turn implies that

$$\hat\eta_k(i) \xrightarrow{D} \mathcal{N}\big(\mathrm{MMD}^2,\ \sigma_u^2 B^{-1}\big). \qquad (6)$$

For an average of {η̂_k(i)}_{i=1,...,n/B}, the central limit theorem implies that under HA,

$$\hat\eta_k \xrightarrow{D} \mathcal{N}\big(\mathrm{MMD}^2,\ \sigma_u^2 (B \cdot n/B)^{-1}\big) = \mathcal{N}\big(\mathrm{MMD}^2,\ \sigma_u^2\, n^{-1}\big). \qquad (7)$$
This result shows that the distribution of η̂_k under HA is asymptotically independent of the block size B. Turning to the null hypothesis, [11, Theorem 8] additionally implies that under H0, for every i,

Theorem 2.2

$$B\, \hat\eta_k(i) \xrightarrow{D} \sum_{l=1}^{\infty} \lambda_l \big[z_l^2 - 2\big], \qquad (8)$$

where z_l ∼ N(0, 2) i.i.d., and the λ_l are the solutions to the eigenvalue equation

$$\int_{\mathcal{X}} \tilde k(x, x')\, \phi_l(x)\, dp(x) = \lambda_l \phi_l(x'), \qquad (9)$$

and $\tilde k(x_i, x_j) := k(x_i, x_j) - \mathbb{E}_x k(x_i, x) - \mathbb{E}_x k(x, x_j) + \mathbb{E}_{x,x'} k(x, x')$ is the centered RKHS kernel.

As a consequence, under H0, η̂_k(i) has expected variance $2 B^{-2} \sum_{l=1}^\infty \lambda_l^2$. We will denote this variance by CB⁻². The central limit theorem implies that under H0,

$$\hat\eta_k \xrightarrow{D} \mathcal{N}\Big(0,\ C\big(B^2\, n/B\big)^{-1}\Big) = \mathcal{N}\big(0,\ C (nB)^{-1}\big). \qquad (10)$$
The asymptotic distributions of η̂_k under H0 and HA are Gaussian, and consequently it is easy to calculate the distribution quantiles and test thresholds. Asymptotically, it is always beneficial to increase B, as the distributions of η̂_k under H0 and HA will be better separated. For consistency, it is sufficient to ensure that n/B → ∞.
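The Gaussian limits in Equations (7) and (10) are easy to probe empirically. A small Monte Carlo sketch (our own illustration, not from the paper; the sampling setup and constants are arbitrary) draws repeated samples under H0 and checks that the replicated values of η̂_k are centred at zero with small spread:

```python
import numpy as np

def b_stat(X, Y, B, sigma=1.0):
    """Block-averaged MMD_u of Equations (3)-(4) with a Gaussian kernel."""
    def gram(A, C):
        d2 = ((A[:, None, :] - C[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2.0 * sigma ** 2))
    vals = []
    for i in range(X.shape[0] // B):
        Xb, Yb = X[i*B:(i+1)*B], Y[i*B:(i+1)*B]
        Kxx, Kyy, Kxy = gram(Xb, Xb), gram(Yb, Yb), gram(Xb, Yb)
        s = (Kxx.sum() - np.trace(Kxx)) + (Kyy.sum() - np.trace(Kyy)) \
            - 2.0 * (Kxy.sum() - np.trace(Kxy))
        vals.append(s / (B * (B - 1)))
    return float(np.mean(vals))

rng = np.random.default_rng(0)
# P = Q = N(0, 1): every replicate of the statistic is a draw from the null.
null_stats = [b_stat(rng.normal(size=(200, 1)), rng.normal(size=(200, 1)), B=10)
              for _ in range(200)]
```

The replicates form an approximately Gaussian cloud around zero, as Equation (10) predicts.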
A related strategy of averaging over data blocks to deal with large sample sizes has recently been
developed in [15], with the goal of efficiently computing bootstrapped estimates of statistics of
interest (e.g. quantiles or biases). Briefly, the approach splits the data (of size n) into s subsamples
each of size B, computes an estimate of the n-fold bootstrap on each block, and averages these
estimates. The difference with respect to our approach is that we use the asymptotic distribution
of the average over block statistics to determine a threshold for a hypothesis test, whereas [15] is
concerned with proving the consistency of a statistic obtained by averaging over bootstrap estimates
on blocks.
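In practice, then, a complete level-α B-test costs no more than the statistic itself. One simple way to implement it (our own sketch, not the authors' released code) is to standardize η̂_k by the empirical standard error of the i.i.d. block statistics and compare against a Gaussian quantile; note this uses the data-driven block variance in place of the exact null variance C(nB)⁻¹ of Equation (10):

```python
import math
import numpy as np

def b_test(X, Y, B, alpha=0.05, sigma=1.0):
    """One-sided level-alpha B-test of H0: P = Q on paired samples X, Y (n x d)."""
    m = X.shape[0] // B                        # number of blocks, n / B
    def gram(A, C):
        d2 = ((A[:, None, :] - C[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2.0 * sigma ** 2))
    stats = np.empty(m)
    for i in range(m):                         # MMD_u on each block, Equation (3)
        Xb, Yb = X[i*B:(i+1)*B], Y[i*B:(i+1)*B]
        Kxx, Kyy, Kxy = gram(Xb, Xb), gram(Yb, Yb), gram(Xb, Yb)
        s = (Kxx.sum() - np.trace(Kxx)) + (Kyy.sum() - np.trace(Kyy)) \
            - 2.0 * (Kxy.sum() - np.trace(Kxy))
        stats[i] = s / (B * (B - 1))
    eta = stats.mean()                         # Equation (4)
    se = stats.std(ddof=1) / math.sqrt(m)      # standard error of the average
    t = eta / se
    p_value = 0.5 * math.erfc(t / math.sqrt(2.0))  # 1 - Phi(t)
    return {"statistic": eta, "p_value": p_value, "reject": bool(p_value < alpha)}
```

Under H0 the standardized statistic is asymptotically N(0, 1) by the central limit theorem, so large positive values of t are evidence against P = Q.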
2.2 Convergence of Moments
In this section, we analyze the convergence of the moments of the B-test statistic, and comment on
potential sources of bias.
The central limit theorem implies that the empirical mean of {η̂_k(i)}_{i=1,...,n/B} converges to E(η̂_k(i)). Moreover, it states that the variance of {η̂_k(i)}_{i=1,...,n/B} converges to E(η̂_k(i)²) − [E(η̂_k(i))]². Finally, all remaining moments tend to zero, where the rate of convergence for the jth moment is of the order $\left(\frac{n}{B}\right)^{-\frac{j+1}{2}}$ [1]. This indicates that the skewness dominates the difference of the distribution from a Gaussian.
Under both H0 and HA, thresholds computed from normal distribution tables are asymptotically unbiased. For finite sample sizes, however, the bias under H0 can be more severe. From Equation (8) we have that under H0, the summands η̂_k(i) converge in distribution to infinite weighted sums of χ² distributions. Every unweighted term of this infinite sum has the distribution of a squared N(0, 2) variable, which has finite skewness equal to 8. The skewness for the entire sum is finite and positive,

$$C = \sum_{l=1}^{\infty} 8 \lambda_l^3, \qquad (11)$$

as λ_l ≥ 0 for all l due to the positive definiteness of the kernel k. The skew of the mean of the η̂_k(i) converges to 0 and is positively biased. At smaller sample sizes, test thresholds obtained from the standard Normal table may therefore be inaccurate, as they do not account for this skew. In our experiments, this bias caused the tests to be overly conservative, with lower Type I error than the design level required (Figures 2 and 5).
2.3 Finite Sample Case

In the finite sample case, we apply the Berry-Esséen theorem, which gives conservative bounds on the ℓ∞ convergence of a series of finite sample random variables to a Gaussian distribution [4].

Theorem 2.3 Let X₁, X₂, ..., Xₙ be i.i.d. variables with E(X₁) = 0, E(X₁²) = σ² > 0, and E(|X₁|³) = ρ < ∞. Let Fₙ be the cumulative distribution of $\frac{\sum_{i=1}^n X_i}{\sigma\sqrt{n}}$, and let Φ denote the standard normal distribution. Then for every x,

$$|F_n(x) - \Phi(x)| \leq C \rho\, \sigma^{-3} n^{-1/2}, \qquad (12)$$

where C < 1.

This result allows us to ensure fast point-wise convergence of the B-test. We have that ρ(η̂_k) = O(1), i.e., it is dependent only on the underlying distributions of the samples and not on the sample size. The number of i.i.d. samples is nB⁻¹. Based on Theorem 2.3, the point-wise error can be upper bounded by $O\!\left(\frac{\sqrt{B}}{\sqrt{n}}\right)$ under HA. Under H0, the error can be bounded by $O\!\left(\frac{B^{3.5}}{\sqrt{n}}\right)$.
While the asymptotic results indicate that convergence to an optimal predictor is fastest for larger B, the finite sample results support decreasing the size of B in order to have a sufficient number of samples for application of the central limit theorem. As long as B → ∞ and n/B → ∞, the assumptions of the B-test are fulfilled.
By varying B, we make a fundamental tradeoff in the construction of our two sample test. When B
is small, we have many samples, hence the null distribution is close to the asymptotic limit provided
by the central limit theorem, and the Type I error is estimated accurately. The disadvantage of a
small B is a lower test power for a given sample size. Conversely, if we increase B, we will have
a lower variance empirical distribution for H0 , hence higher test power, but we may have a poor
estimate of the number of Type I errors (Figure 1). A sensible family of heuristics therefore is to set
B = [n? ]
(13)
for some 0 < ? < 1, where we round to the nearest integer. In this setting the number of samples
(1??)
available for application of the central
]. For given ? computational
limit theorem will be [n
1+?
complexity of the B-test is O n
. We note that any value of ? ? (0, 1) yields a consistent
1
estimator.
We have chosen ? = 2 in the experimental results section, with resulting complexity
1.5
O n
: we emphasize that this is a heuristic, and just one choice that fulfils our assumptions.
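The heuristic in Equation (13), with the paper's choice γ = 1/2, is a one-liner (the guard ensuring B ≥ 2, needed for a nonempty U-statistic in Equation (3), is our own addition):

```python
def block_size(n, gamma=0.5):
    """B = [n^gamma], rounded to the nearest integer (Equation 13)."""
    return max(2, round(n ** gamma))
```

For example, block_size(1000) returns 32, matching the block size used in the musical experiments of Section 3.2, and leaves 1000 // 32 = 31 blocks for the central limit theorem.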
3 Experiments
We have conducted experiments on challenging synthetic and real datasets in order to empirically
measure (i) sample complexity, (ii) computation time, and (iii) Type I / Type II errors. We evaluate
B-test performance in comparison to the MMDl and MMDu estimators, where for the latter we
compare across different strategies for null distribution quantile estimation.
Method | Kernel parameters | Additional parameters | Minimum number of samples | Computation time (s) | Consistent
B-test | σ = 1 | B = 2 | 26400 | 0.0012 | yes
B-test | σ = 1 | B = 8 | 3850 | 0.0039 | yes
B-test | σ = 1 | B = √n | 886 | 0.0572 | yes
B-test | σ = median | any B | > 60000 | — | yes
B-test | multiple kernels | B = 2 | 37000 | 0.0700 | yes
B-test | multiple kernels | B = 8 | 5400 | 0.1295 | yes
B-test | multiple kernels | B = √(n/2) | 1700 | 0.8332 | yes
Pearson curves | σ = 1 | B = n | 186 | 387.4649 | no
Gamma approximation | σ = 1 | B = n | 183 | 0.2667 | no
Gram matrix spectrum | σ = 1 | B = n | 186 | 407.3447 | yes
Bootstrap | σ = 1 | B = n | 190 | 129.4094 | yes
Pearson curves | σ = median | B = n | > 60000 (2 h per iteration timeout) | — | no
Gamma approximation | σ = median | B = n | > 60000 (2 h per iteration timeout) | — | no
Gram matrix spectrum | σ = median | B = n | > 60000 (2 h per iteration timeout) | — | yes
Bootstrap | σ = median | B = n | > 60000 (2 h per iteration timeout) | — | yes

Table 1: Sample complexity for tests on the distributions described in Figure 3. The fourth column indicates the minimum number of samples necessary to achieve Type I and Type II errors of 5%. The fifth column is the computation time required for 2000 samples, and is not presented for settings that have unsatisfactory sample complexity.
[Figure 2 appears here: three panels plotting empirical vs. expected Type I error against the size of the inner block (2 to 128), with the B = √n heuristic marked in panels (a) and (b) and B = √(n/2) in panel (c).]
Figure 2: Type I errors on the distributions shown in Figure 3 for α = 5%: (a) MMD, single kernel, σ = 1, (b) MMD, single kernel, σ set to the median pairwise distance, and (c) MMD, non-negative linear combination of multiple kernels. The experiment was repeated 30000 times. Error bars are not visible at this scale.
3.1 Synthetic data

Following previous work on kernel hypothesis testing [9], our synthetic distributions are 5 × 5 grids of 2D Gaussians. We specify two distributions, P and Q. For distribution P each Gaussian has identity covariance matrix, while for distribution Q the covariance is non-spherical. Samples drawn from P and Q are presented in Figure 3. These distributions have proved to be very challenging for existing non-parametric two-sample tests [9].
We employed three different kernel selection strategies in the hypothesis test. First, we used a Gaussian kernel with σ = 1, which approximately matches the scale of the variance of each Gaussian in mixture P. While this is a somewhat arbitrary default choice, we selected it as it performs well in practice (given the lengthscale of the data), and we treat it as a baseline. Next, we set σ equal to the median pairwise distance over the training data, which is a standard way to choose the Gaussian kernel bandwidth [17], although it is likewise arbitrary in this context. Finally, we applied a kernel learning strategy, in which the kernel was optimized to maximize the test power for the alternative P ≠ Q [9]. This approach returned a non-negative linear combination of base kernels, where half the data were used in learning the kernel weights (these data were excluded from the testing phase). The base kernels in our experiments were chosen to be Gaussian, with bandwidths in the set σ ∈ {2⁻¹⁵, 2⁻¹⁴, ..., 2¹⁰}. Testing was conducted using the remaining half of the data.

[Figure 3 appears here: samples from (a) distribution P and (b) distribution Q.]
Figure 3: Synthetic data distributions P and Q. Samples belonging to these classes are difficult to distinguish.
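The median heuristic referenced above is simple to state precisely: set σ to the median of the pairwise Euclidean distances between training points. A sketch (our own, following the convention attributed to [17]):

```python
import numpy as np

def median_heuristic_bandwidth(Z):
    """sigma = median pairwise Euclidean distance among the rows of Z (n x d)."""
    d2 = ((Z[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    iu = np.triu_indices_from(d2, k=1)  # distinct pairs only
    return float(np.median(np.sqrt(d2[iu])))
```

As Figure 4 shows, this choice can fail badly when the lengthscale of the difference between P and Q does not match the lengthscale of the main data variation.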
For comparison with the quadratic time U-statistic MMDu [7, 10], we evaluated four null distribution estimates: (i) Pearson curves, (ii) gamma approximation, (iii) Gram matrix spectrum, and (iv) bootstrap. For methods using Pearson curves and the Gram matrix spectrum, we drew 500 samples from the null distribution estimates to obtain the 1 − α quantiles, for a test of level α. For the bootstrap, we fixed the number of shuffles to 1000. We note that Pearson curves and the gamma approximation are not statistically consistent. We considered only the setting with σ = 1 and σ set to the median pairwise distance, as kernel selection is not yet solved for tests using MMDu [9].

In the first experiment we set the Type I error to be 5%, and we recorded the Type II error. We conducted these experiments on 2000 samples over 1000 repetitions, with varying block size B. Figure 4 presents results for different kernel choice strategies, as a function of B. The median heuristic performs extremely poorly in this experiment. As discussed in [9, Section 5], the reason for this failure is that the lengthscale of the difference between the distributions P and Q differs from the lengthscale of the main data variation as captured by the median, which gives too broad a kernel for the data.

[Figure 4 appears here: empirical number of Type II errors vs. the size of the inner block, for B-tests with σ = 1, σ = median, and kernel selection, and for tests estimating MMDu with σ = 1 and σ = median.]
Figure 4: Synthetic experiment: number of Type II errors vs. B, given a fixed probability α of Type I errors. As B grows, the Type II error drops quickly when the kernel is appropriately chosen. The kernel selection method is described in [9], and closely approximates the baseline performance of the well-informed user choice of σ = 1.

In the second experiment, our aim was to compare the empirical sample complexity of the various methods. We again fixed the same Type I error for all methods, but this time we also fixed a Type II error of 5%, increasing the number of samples until the latter error rate was achieved. Column four of Table 1 shows the number of samples required in each setting to achieve these error rates. We additionally compared the computational efficiency of the various methods. The computation time for each method with a fixed sample size of 2000 is presented in column five of Table 1. All experiments were run on a single 2.4 GHz core.

Finally, we evaluated the empirical Type I error for α = 5% and increasing B. Figure 2 displays the empirical Type I error, where we note the location of the γ = 0.5 heuristic in Equation (13). For the user-chosen kernel (σ = 1, Figure 2(a)), the number of Type I errors closely matches the targeted test level. When the median heuristic is used, however, the test is overly conservative, and makes fewer Type I errors than required (Figure 2(b)). This indicates that for this choice of σ, we are not in the asymptotic regime, and our Gaussian null distribution approximation is inaccurate. Kernel selection via the strategy of [9] alleviates this problem (Figure 2(c)). This setting coincides with a block size substantially larger than 2 (MMDl), and therefore achieves lower Type II errors while retaining the targeted Type I error.
3.2 Musical experiments

In this set of experiments, two amplitude modulated Rammstein songs were compared (Sehnsucht vs. Engel, from the album Sehnsucht). Following the experimental setting in [9, Section 5], samples from P and Q were extracts from AM signals of time duration 8.3 × 10⁻³ seconds in the original audio. Feature extraction was identical to [9], except that the amplitude scaling parameter was set to 0.3 instead of 0.5. As the feature vectors had size 1000, we set the block size B = √1000 ≈ 32. Table 2 summarizes the empirical Type I and Type II errors over 1000 repetitions, and the
average computation times. Figure 5 shows the average number of Type I errors as a function of
B: in this case, all kernel selection strategies result in conservative tests (lower Type I error than
required), indicating that more samples are needed to reach the asymptotic regime. Figure 1 shows
the empirical H0 and HA distributions for different B.
4 Discussion
We have presented experimental results both on a difficult synthetic problem, and on real-world data
from amplitude modulated audio recordings. The results show that the B-test has a much better
Method | Kernel parameters | Additional parameters | Type I error | Type II error | Computational time (s)
B-test | σ = 1 | B = 2 | 0.038 | 0.927 | 0.039
B-test | σ = 1 | B = √n | 0.006 | 0.597 | 1.276
B-test | σ = median | B = 2 | 0.043 | 0.786 | 0.047
B-test | σ = median | B = √n | 0.026 | 0 | 1.259
B-test | multiple kernels | B = 2 | 0.0481 | 0.867 | 0.607
B-test | multiple kernels | B = √(n/2) | 0.025 | 0.012 | 18.285
Gram matrix spectrum | σ = 1 | B = 2000 | 0 | 0 | 160.1356
Bootstrap | σ = 1 | B = 2000 | 0.01 | 0 | 121.2570
Gram matrix spectrum | σ = median | B = 2000 | 0 | 0 | 286.8649
Bootstrap | σ = median | B = 2000 | 0.01 | 0 | 122.8297

Table 2: A comparison of consistent tests on the music experiment described in Section 3.2. Here computation time is reported for the test achieving the stated error rates.
[Figure 5 appears here: three panels plotting empirical vs. expected Type I error against the size of the inner block (2 to 128), with the B = √n heuristic marked in panels (a) and (b) and B = √(n/2) in panel (c).]
Figure 5: Empirical Type I error rate for α = 5% on the music data (Section 3.2). (a) A single kernel test with σ = 1, (b) a single kernel test with σ = median, and (c) multiple kernels. Error bars are not visible at this scale. The results broadly follow the trend visible from the synthetic experiments.
sample complexity than MMDl over all tested kernel selection strategies. Moreover, it is an order
of magnitude faster than any test that consistently estimates the null distribution for MMDu (i.e.,
the Gram matrix eigenspectrum and bootstrap estimates): these estimates are impractical at large
sample sizes, due to their computational complexity. Additionally, the B-test remains statistically
consistent, with the best convergence rates achieved for large B. The B-test combines the best
features of MMDl and MMDu based two-sample tests: consistency, high statistical efficiency, and
high computational efficiency.
A number of further interesting experimental trends may be seen in these results. First, we have
observed that the empirical Type I error rate is often conservative, and is less than the 5% targeted
by the threshold based on a Gaussian null distribution assumption (Figures 2 and 5). In spite of this
conservatism, the Type II performance remains strong (Tables 1 and 2), as the gains in statistical
power of the B-tests improve the testing performance (cf. Figure 1). Equation (7) implies that the
size of B does not influence the asymptotic variance under HA , however we observe in Figure 1 that
the empirical variance of HA drops with larger B. This is because, for these P and Q and small B,
the null and alternative distributions have considerable overlap. Hence, given the distributions are
effectively indistinguishable at these sample sizes n, the variance of the alternative distribution as a
function of B behaves more like that of H0 (cf. Equation (10)). This effect will vanish as n grows.
Finally, [13] propose an alternative approach for U-statistic based testing when the degree of degeneracy is known: a new U-statistic (the TU-statistic) is written in terms of products of centred
U-statistics computed on the individual blocks, and a test is formulated using this TU-statistic. Ho
and Shieh show that a TU-statistic based test can be asymptotically more powerful than a test using
a single U-statistic on the whole sample, when the latter is degenerate under H0 , and nondegenerate
under HA . It is of interest to apply this technique to MMD-based two-sample testing.
Acknowledgments We thank Mladen Kolar for helpful discussions. This work is partially funded by ERC
Grant 259112, and by the Royal Academy of Engineering through the Newton Alumni Scheme.
References
[1] Bengt Von Bahr. On the convergence of moments in the central limit theorem. The Annals of Mathematical Statistics, 36(3):808–818, 1965.
[2] L. Baringhaus and C. Franz. On a new multivariate two-sample test. J. Multivariate Anal., 88:190–206, 2004.
[3] A. Berlinet and C. Thomas-Agnan. Reproducing Kernel Hilbert Spaces in Probability and Statistics. Kluwer, 2004.
[4] Andrew C. Berry. The accuracy of the Gaussian approximation to the sum of independent variates. Transactions of the American Mathematical Society, 49(1):122–136, 1941.
[5] B. Efron and R. J. Tibshirani. An Introduction to the Bootstrap. Chapman & Hall, 1993.
[6] M. Fromont, B. Laurent, M. Lerasle, and P. Reynaud-Bouret. Kernels based tests with non-asymptotic bootstrap approaches for two-sample problems. In COLT, 2012.
[7] A. Gretton, K. Fukumizu, Z. Harchaoui, and B. K. Sriperumbudur. A fast, consistent kernel two-sample test. In Advances in Neural Information Processing Systems 22, pages 673–681, 2009.
[8] A. Gretton, K. Fukumizu, C.-H. Teo, L. Song, B. Schölkopf, and A. J. Smola. A kernel statistical test of independence. In Advances in Neural Information Processing Systems 20, pages 585–592, Cambridge, MA, 2008. MIT Press.
[9] A. Gretton, B. Sriperumbudur, D. Sejdinovic, H. Strathmann, S. Balakrishnan, M. Pontil, and K. Fukumizu. Optimal kernel choice for large-scale two-sample tests. In Advances in Neural Information Processing Systems 25, pages 1214–1222, 2012.
[10] Arthur Gretton, Karsten M. Borgwardt, Malte J. Rasch, Bernhard Schölkopf, and Alexander Smola. A kernel two-sample test. J. Mach. Learn. Res., 13:723–773, March 2012.
[11] Arthur Gretton, Karsten M. Borgwardt, Malte J. Rasch, Bernhard Schölkopf, and Alexander J. Smola. A kernel method for the two-sample-problem. In NIPS, pages 513–520, 2006.
[12] Z. Harchaoui, F. Bach, and E. Moulines. Testing for homogeneity with kernel Fisher discriminant analysis. In NIPS, pages 609–616. MIT Press, Cambridge, MA, 2008.
[13] H.-C. Ho and G. Shieh. Two-stage U-statistics for hypothesis testing. Scandinavian Journal of Statistics, 33(4):861–873, 2006.
[14] Norman Lloyd Johnson, Samuel Kotz, and Narayanaswamy Balakrishnan. Continuous Univariate Distributions. Wiley, 2nd edition, 1994.
[15] A. Kleiner, A. Talwalkar, P. Sarkar, and M. I. Jordan. A scalable bootstrap for massive data. Journal of the Royal Statistical Society, Series B, in press.
[16] Andrey N. Kolmogorov. Sulla determinazione empirica di una legge di distribuzione. Giornale dell'Istituto Italiano degli Attuari, 4(1):83–91, 1933.
[17] B. Schölkopf. Support Vector Learning. Oldenbourg, München, Germany, 1997.
[18] D. Sejdinovic, A. Gretton, B. Sriperumbudur, and K. Fukumizu. Hypothesis testing using pairwise distances and associated kernels. In ICML, 2012.
[19] R. Serfling. Approximation Theorems of Mathematical Statistics. Wiley, New York, 1980.
[20] Nickolay Smirnov. Table for estimating the goodness of fit of empirical distributions. The Annals of Mathematical Statistics, 19(2):279–281, 1948.
[21] B. Sriperumbudur, A. Gretton, K. Fukumizu, G. Lanckriet, and B. Schölkopf. Hilbert space embeddings and metrics on probability measures. Journal of Machine Learning Research, 11:1517–1561, 2010.
[22] G. Székely and M. Rizzo. Testing for equal distributions in high dimension. InterStat, (5), November 2004.
[23] G. Székely, M. Rizzo, and N. Bakirov. Measuring and testing dependence by correlation of distances. Ann. Stat., 35(6):2769–2794, 2007.
[24] M. Yamada, T. Suzuki, T. Kanamori, H. Hachiya, and M. Sugiyama. Relative density-ratio estimation for robust distribution comparison. Neural Computation, 25(5):1324–1370, 2013.
9
On Flat versus Hierarchical Classification in
Large-Scale Taxonomies
Rohit Babbar, Ioannis Partalas, Eric Gaussier, Massih-Reza Amini
Université Joseph Fourier, Laboratoire d'Informatique de Grenoble
BP 53 - F-38041 Grenoble Cedex 9
[email protected]
Abstract
We study in this paper flat and hierarchical classification strategies in the context
of large-scale taxonomies. To this end, we first propose a multiclass, hierarchical data-dependent bound on the generalization error of classifiers deployed in
large-scale taxonomies. This bound provides an explanation to several empirical
results reported in the literature, related to the performance of flat and hierarchical
classifiers. We then introduce another type of bound targeting the approximation
error of a family of classifiers, and derive from it features used in a meta-classifier
to decide which nodes to prune (or flatten) in a large-scale taxonomy. We finally
illustrate the theoretical developments through several experiments conducted on
two widely used taxonomies.
1 Introduction
Large-scale classification of textual and visual data into a large number of target classes has been
the focus of several studies, from researchers and developers in industry and academia alike. The
target classes in such large-scale scenarios typically have an inherent hierarchical structure, usually
in the form of a rooted tree, as in Directory Mozilla¹, or a directed acyclic graph, with a parent-child relationship. Various classification techniques have been proposed for deploying classifiers
in such large-scale taxonomies, from flat (sometimes referred to as big bang) approaches to fully
hierarchical ones adopting a complete top-down strategy. Several attempts have also been made in
order to develop new classification techniques that integrate, at least partly, the hierarchy into the
objective function being optimized (as [3, 5, 10, 11] among others). These techniques are however
costly in practice and most studies either rely on a flat classifier, or a hierarchical one either deployed
on the original hierarchy or a simplified version of it obtained by pruning some nodes (as [15, 18])².
Hierarchical models for large-scale classification, however, suffer from the fact that they have to make many decisions prior to reaching a final category. This intermediate decision making leads to the error
propagation phenomenon causing a decrease in accuracy. On the other hand, flat classifiers rely on a
single decision including all the final categories, a single decision that is however difficult to make as
it involves many categories, potentially unbalanced. It is thus very difficult to assess which strategy
is best, and there is no consensus, for the time being, on which approach, flat or hierarchical, should
be preferred on a particular category system.
In this paper, we address this problem and introduce new bounds on the generalization errors of
classifiers deployed in large-scale taxonomies. These bounds make explicit the trade-off that both
flat and hierarchical classifiers encounter in large-scale taxonomies and provide an explanation to
¹ www.dmoz.org
² The study in [19] introduces a slightly different simplification, through an embedding of both categories and documents into a common space.
several empirical findings reported in previous studies. To our knowledge, this is the first time that
such bounds are introduced and that an explanation of the behavior of flat and hierarchical classifiers
is based on theoretical grounds. We also propose a well-founded way to select nodes that should be
pruned so as to derive a taxonomy better suited to the classification problem. Contrary to [4] that
reweighs the edges in a taxonomy through a cost sensitive loss function to achieve this goal, we use
here a simple pruning strategy that modifies the taxonomy in an explicit way.
The remainder of the paper is organized as follows: Section 2 introduces the notations used and
presents the generalization error bounds for classification in large-scale taxonomies. It also presents
the meta-classifier we designed to select those nodes that should be pruned in the original taxonomy.
Section 3 illustrates these developments via experiments conducted on several taxonomies extracted
from DMOZ and the International Patent Classification. The experimental results are in line with
results reported in previous studies, as well as with our theoretical developments. Finally, Section 4
concludes this study.
2 Generalization Error Analyses
Let $\mathcal{X} \subseteq \mathbb{R}^d$ be the input space and let $V$ be a finite set of class labels. We further assume that examples are pairs $(x, v)$ drawn according to a fixed but unknown distribution $D$ over $\mathcal{X} \times V$. In the case of hierarchical classification, the hierarchy of classes $H = (V, E)$ is defined in the form of a rooted tree, with a root $\perp$ and a parent relationship $\pi : V \setminus \{\perp\} \to V$, where $\pi(v)$ is the parent of node $v \in V \setminus \{\perp\}$, and $E$ denotes the set of edges with parent-to-child orientation. For each node $v \in V \setminus \{\perp\}$, we further define the set of its sisters $S(v) = \{v' \in V \setminus \{\perp\};\ v \neq v' \wedge \pi(v) = \pi(v')\}$ and its daughters $D(v) = \{v' \in V \setminus \{\perp\};\ \pi(v') = v\}$. The nodes at the intermediary levels of the hierarchy define general class labels, while the specialized nodes at the leaf level, denoted by $Y = \{y \in V : \nexists v \in V, (y, v) \in E\} \subseteq V$, constitute the set of target classes. Finally, for each class $y$ in $Y$ we define the set of its ancestors $\mathcal{P}(y)$ as
$$\mathcal{P}(y) = \{v_1^y, \ldots, v_{k_y}^y;\ v_1^y = \pi(y) \wedge \forall l \in \{1, \ldots, k_y - 1\},\ v_{l+1}^y = \pi(v_l^y) \wedge \pi(v_{k_y}^y) = \perp\}$$
For classifying an example $x$, we consider a top-down classifier making decisions at each level of the hierarchy; this process, sometimes referred to as the Pachinko machine, selects the best class at each level of the hierarchy and iteratively proceeds down the hierarchy. In the case of flat classification, the hierarchy $H$ is ignored, $Y = V$, and the problem reduces to the classical supervised multiclass classification problem.
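The top-down (Pachinko machine) decoding described above can be sketched in a few lines. This is a minimal illustration, not code from the paper; the tree encoding, the `score` function, and all names are our own assumptions:

```python
def topdown_classify(x, root, daughters, score):
    """Top-down ('Pachinko machine') decoding: starting from the root,
    greedily pick the best-scoring child at each level until a leaf
    category is reached.

    `daughters` maps each node to its list of children (empty for leaves);
    `score(x, v)` is any node-scoring function, e.g. <phi(x), w_v> for
    the kernel-based hypotheses considered below.
    """
    v = root
    while daughters[v]:                       # stop at a leaf of Y
        v = max(daughters[v], key=lambda c: score(x, c))
    return v
```

Note that the classifier makes one decision per level of the hierarchy, which is exactly the source of the error-propagation phenomenon discussed in the introduction.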
2.1 A hierarchical Rademacher data-dependent bound
Our main result is the following theorem, which provides a data-dependent bound on the generalization error of a top-down multiclass hierarchical classifier. We consider here kernel-based hypotheses, with $K : \mathcal{X} \times \mathcal{X} \to \mathbb{R}$ a PDS kernel and $\phi : \mathcal{X} \to \mathbb{H}$ its associated feature mapping function, defined as:
$$\mathcal{F}_B = \{f : (x, v) \in \mathcal{X} \times V \mapsto \langle \phi(x), w_v \rangle \mid W = (w_1, \ldots, w_{|V|}),\ \|W\|_{\mathbb{H}} \le B\}$$
where $W = (w_1, \ldots, w_{|V|})$ is the matrix formed by the $|V|$ weight vectors defining the kernel-based hypotheses, $\langle \cdot, \cdot \rangle$ denotes the dot product, and $\|W\|_{\mathbb{H}} = \big(\sum_{v \in V} \|w_v\|^2\big)^{1/2}$ is the $L_2^{\mathbb{H}}$ group norm of $W$. We further define the following associated function class:
$$\mathcal{G}_{\mathcal{F}_B} = \Big\{g_f : (x, y) \in \mathcal{X} \times Y \mapsto \min_{v \in \mathcal{P}(y)} \big(f(x, v) - \max_{v' \in S(v)} f(x, v')\big) \mid f \in \mathcal{F}_B\Big\}$$
For a given hypothesis $f \in \mathcal{F}_B$, the sign of its associated function $g_f \in \mathcal{G}_{\mathcal{F}_B}$ directly defines a hierarchical classification rule for $f$, as the top-down classification scheme outlined before simply amounts to: assign $x$ to $y$ iff $g_f(x, y) > 0$. The learning problem we address is then to find a hypothesis $f$ from $\mathcal{F}_B$ such that the generalization error of $g_f \in \mathcal{G}_{\mathcal{F}_B}$, $E(g_f) = \mathbb{E}_{(x,y) \sim D}\big[\mathbb{1}_{g_f(x,y) \le 0}\big]$, is minimal ($\mathbb{1}_{g_f(x,y) \le 0}$ is the 0/1 loss, equal to 1 if $g_f(x, y) \le 0$ and 0 otherwise).
The following theorem sheds light on the trade-off between flat and hierarchical classification. The notion of function class capacity used here is the empirical Rademacher complexity [1]. The proof of the theorem is given in the supplementary material.
Theorem 1 Let $S = ((x^{(i)}, y^{(i)}))_{i=1}^{m}$ be a dataset of $m$ examples drawn i.i.d. according to a probability distribution $D$ over $\mathcal{X} \times Y$, and let $A$ be a Lipschitz function with constant $L$ dominating the 0/1 loss; further let $K : \mathcal{X} \times \mathcal{X} \to \mathbb{R}$ be a PDS kernel and let $\phi : \mathcal{X} \to \mathbb{H}$ be the associated feature mapping function. Assume that there exists $R > 0$ such that $K(x, x) \le R^2$ for all $x \in \mathcal{X}$. Then, for all $1 > \delta > 0$, with probability at least $(1 - \delta)$ the following hierarchical multiclass classification generalization bound holds for all $g_f \in \mathcal{G}_{\mathcal{F}_B}$:
$$E(g_f) \le \frac{1}{m} \sum_{i=1}^{m} A\big(g_f(x^{(i)}, y^{(i)})\big) + \frac{8BRL}{m} \sum_{v \in V \setminus Y} |D(v)|(|D(v)| - 1) + 3\sqrt{\frac{\ln(2/\delta)}{2m}} \qquad (1)$$
where $|D(v)|$ denotes the number of daughters of node $v$.
For flat multiclass classification, we recover the bounds of [12] by considering a hierarchy containing a root node with as many daughters as there are categories. Note that the definition of the functions in $\mathcal{G}_{\mathcal{F}_B}$ subsumes the definition of the margin function used for the flat multiclass classification problems in [12], and that the factor $8L$ in the complexity term of the bound, instead of 4 in [12], is due to the fact that we are using an $L$-Lipschitz loss function dominating the 0/1 loss in the empirical Rademacher complexity.
Flat vs hierarchical classification on large-scale taxonomies. The generalization error is controlled in inequality (1) by a trade-off between the empirical error and the Rademacher complexity of the class of classifiers. The Rademacher complexity term favors hierarchical classifiers over flat ones, as any split of a set of categories of size $n$ into $k$ parts $n_1, \ldots, n_k$ (with $\sum_{i=1}^{k} n_i = n$) is such that $\sum_{i=1}^{k} n_i^2 \le n^2$. On the other hand, the empirical error term is likely to favor flat classifiers over hierarchical ones, as the latter rely on a series of decisions (as many as the length of the path from the root to the chosen category in $Y$) and are thus more likely to make mistakes. This fact is often referred to as the propagation error problem in hierarchical classification.
though the decision to be made is harder). When the classification problem in Y is highly unbalanced, then the decision that a flat classifier has to make is difficult; hierarchical classifiers still have
to make several decisions, but the imbalance problem is less severe on each of them. So, in this
case, even though the empirical error of hierarchical classifiers may be higher than the one of flat
ones, the difference can be counterbalanced by the Rademacher complexity term, and the bound in
Theorem 1 suggests that hierarchical classifiers should be preferred over flat ones.
On the other hand, when the data is well balanced, the Rademacher complexity term may not be
sufficient to overcome the difference in empirical errors due to the propagation error in hierarchical
classifiers; in this case, Theorem 1 suggests that flat classifiers should be preferred to hierarchical
ones. These results have been empirically observed in different studies on classification in large-scale taxonomies and are further discussed in Section 3.
Similarly, one way to improve the accuracy of classifiers deployed in large-scale taxonomies is to
modify the taxonomy by pruning (sets of) nodes [18]. By doing so, one is flattening part of the
taxonomy and is once again trading off the two terms in inequality (1): pruning nodes reduces the number of decisions made by the hierarchical classifier while maintaining a reasonable
Rademacher complexity. Even though it can explain several empirical results obtained so far, the
bound displayed in Theorem 1 does not provide a practical way to decide on whether to prune a
node or not, as it would involve the training of many classifiers which is impractical with large-scale
taxonomies. We thus turn towards another bound in the next section that will help us design a direct
and simple strategy to prune nodes in a taxonomy.
2.2 Asymptotic approximation error bounds
We now propose an asymptotic approximation error bound for a multiclass logistic regression (MLR)
classifier. We first consider the flat, multiclass case (V = Y), and then show how the bounds can
be combined in a typical top-down cascade, leading to the identification of important features that
control the variation of these bounds.
3
Considering a pivot class $y^\star \in Y$, a MLR classifier, with parameters $\beta = \{\beta_0^y, \beta_j^y;\ y \in Y \setminus \{y^\star\}, j \in \{1, \ldots, d\}\}$, models the class posterior probabilities via a linear function in $x = (x_j)_{j=1}^{d}$ (see for example [13], p. 96):
$$P(y|x; \beta)_{y \neq y^\star} = \frac{\exp\big(\beta_0^y + \sum_{j=1}^{d} \beta_j^y x_j\big)}{1 + \sum_{y' \in Y, y' \neq y^\star} \exp\big(\beta_0^{y'} + \sum_{j=1}^{d} \beta_j^{y'} x_j\big)}$$
$$P(y^\star|x; \beta) = \frac{1}{1 + \sum_{y' \in Y, y' \neq y^\star} \exp\big(\beta_0^{y'} + \sum_{j=1}^{d} \beta_j^{y'} x_j\big)}$$
The parameters $\beta$ are usually fit by maximum likelihood over a training set $S$ of size $m$ (the estimates are denoted by $\hat\beta_m$ in the following), and the decision rule for this classifier consists in choosing the class with the highest class posterior probability:
$$h_m(x) = \operatorname*{argmax}_{y \in Y} P(y|x, \hat\beta_m) \qquad (2)$$
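The MLR posteriors and decision rule (2) can be computed as follows (an illustrative sketch with our own variable names; the pivot class gets a fixed linear score of 0, matching the equations above):

```python
import numpy as np

def mlr_posteriors(x, beta0, beta):
    """Class posteriors of a multiclass logistic regression with a pivot
    class y*: non-pivot class y gets linear score beta0[y] + beta[y] . x,
    the pivot gets score 0, and probabilities are the normalized
    exponentials of these scores, as in the two equations above.
    """
    scores = beta0 + beta @ x          # one score per non-pivot class
    scores = np.append(scores, 0.0)    # pivot class y* has score 0
    scores -= scores.max()             # shift for numerical stability
    p = np.exp(scores)
    return p / p.sum()

def h_m(x, beta0, beta):
    """Decision rule (2): pick the class with the highest posterior
    (the last index plays the role of the pivot class y*)."""
    return int(np.argmax(mlr_posteriors(x, beta0, beta)))
```

The stability shift by the maximum score does not change the posteriors, since it cancels in the ratio.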
The following lemma states to which extent the posterior probabilities with maximum likelihood estimates $\hat\beta_m$ may deviate from their asymptotic values, obtained with maximum likelihood estimates when the training size $m$ tends to infinity (denoted by $\hat\beta_\infty$).
Lemma 1 Let $S$ be a training set of size $m$ and let $\hat\beta_m$ be the maximum likelihood estimates of the MLR classifier over $S$. Further, let $\hat\beta_\infty$ be the maximum likelihood estimates of the parameters of MLR when $m$ tends to infinity. For all examples $x$, let $R > 0$ be the bound such that $\forall y \in Y \setminus \{y^\star\}$, $\exp\big(\beta_0^y + \sum_{j=1}^{d} \beta_j^y x_j\big) < R$; then for all $1 > \delta > 0$, with probability at least $(1 - \delta)$ we have:
$$\forall y \in Y,\quad \big|P(y|x, \hat\beta_m) - P(y|x, \hat\beta_\infty)\big| < d\,\sqrt{\frac{R\,|Y|\,\sigma_0}{\delta\, m}}$$
where $\sigma_0 = \max_{j,y} \sigma_j^y$ and the $(\sigma_j^y)_{y,j}$ represent the components of the inverse (diagonal) Fisher information matrix at $\hat\beta_\infty$.
Proof (sketch) By denoting the sets of parameters $\hat\beta_m = \{\hat\beta_j^y;\ j \in \{0, \ldots, d\},\ y \in Y \setminus \{y^\star\}\}$ and $\hat\beta_\infty = \{\beta_j^y;\ j \in \{0, \ldots, d\},\ y \in Y \setminus \{y^\star\}\}$, and using the independence assumption and the asymptotic normality of maximum likelihood estimates (see for example [17], p. 421), we have, for $0 \le j \le d$ and $\forall y \in Y \setminus \{y^\star\}$: $\sqrt{m}\,(\hat\beta_j^y - \beta_j^y) \rightsquigarrow \mathcal{N}(0, \sigma_j^y)$, where the $(\sigma_j^y)_{y,j}$ represent the components of the inverse (diagonal) Fisher information matrix at $\hat\beta_\infty$. Let $\sigma_0 = \max_{j,y} \sigma_j^y$. Then, using Chebyshev's inequality, for $0 \le j \le d$ and $\forall y \in Y \setminus \{y^\star\}$, we have with probability at least $1 - \sigma_0/\epsilon^2$: $|\hat\beta_j^y - \beta_j^y| < \frac{\epsilon}{\sqrt{m}}$. Further, $\forall x$ and $\forall y \in Y \setminus \{y^\star\}$, $\exp\big(\beta_0^y + \sum_{j=1}^{d} \beta_j^y x_j\big) < R$; using a Taylor development of the functions $\exp(x + \epsilon)$ and $(1 + x + \epsilon)^{-1}$ and the union bound, one obtains that, $\forall \epsilon > 0$ and $y \in Y$, with probability at least $1 - \frac{|Y|\sigma_0}{\epsilon^2}$: $\big|P(y|x, \hat\beta_m) - P(y|x, \hat\beta_\infty)\big| < d\,\sqrt{R}\,\frac{\epsilon}{\sqrt{m}}$. Setting $\frac{|Y|\sigma_0}{\epsilon^2}$ to $\delta$ and solving for $\epsilon$ gives the result.
Lemma 1 suggests that the predicted and asymptotic posterior probabilities are close to each other, as the quantities they are based on are close to each other. Thus, provided that the asymptotic posterior probabilities of the best two classes, for any given $x$, are not too close to each other, the generalization error of the MLR classifier and the one of its asymptotic version should be similar. Theorem 2 below states such a relationship, using the following function that measures the confusion between the best two classes for the asymptotic MLR classifier, defined as:
$$h_\infty(x) = \operatorname*{argmax}_{y \in Y} P(y|x, \hat\beta_\infty) \qquad (3)$$
For any given $x \in \mathcal{X}$, the confusion between the best two classes is defined as follows.
Definition 1 Let $f_\infty^1(x) = \max_{y \in Y} P(y|x, \hat\beta_\infty)$ be the best class posterior probability for $x$ by the asymptotic MLR classifier, and let $f_\infty^2(x) = \max_{y \in Y \setminus h_\infty(x)} P(y|x, \hat\beta_\infty)$ be the second-best class posterior probability for $x$. We define the confusion of the asymptotic MLR classifier for a category set $Y$ as:
$$G_Y(\tau) = P_{(x,y) \sim D}\big(|f_\infty^1(x) - f_\infty^2(x)| < 2\tau\big)$$
for a given $\tau > 0$.
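Given a matrix of (estimated) posteriors, the confusion $G_Y(\tau)$ of Definition 1 can be estimated empirically as the fraction of examples whose top-two posterior gap is below $2\tau$. This is our own Monte-Carlo estimator, not one defined in the paper:

```python
import numpy as np

def empirical_confusion(P, tau):
    """Empirical estimate of G_Y(tau): the fraction of examples whose two
    largest class posteriors differ by less than 2*tau.

    P is an (n_examples, n_classes) array of posterior probabilities.
    """
    top2 = np.sort(P, axis=1)[:, -2:]      # columns: second-best, best
    gaps = top2[:, 1] - top2[:, 0]         # f^1(x) - f^2(x) per example
    return float(np.mean(gaps < 2 * tau))
```

As expected, the estimate is non-decreasing in `tau`: a larger tolerance counts more examples as confusable.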
The following theorem states a relationship between the generalization error of a trained MLR classifier and its asymptotic version.
Theorem 2 For a multi-class classification problem in a $d$-dimensional feature space with a training set of size $m$, $\{x^{(i)}, y^{(i)}\}_{i=1}^{m}$, $x^{(i)} \in \mathcal{X}$, $y^{(i)} \in Y$, sampled i.i.d. from a probability distribution $D$, let $h_m$ and $h_\infty$ denote the multiclass logistic regression classifiers learned from a training set of finite size $m$ and its asymptotic version respectively, and let $E(h_m)$ and $E(h_\infty)$ be their generalization errors. Then, for all $1 > \delta > 0$, with probability at least $(1 - \delta)$ we have:
$$E(h_m) \le E(h_\infty) + G_Y\!\left(d\,\sqrt{\frac{R\,|Y|\,\sigma_0}{\delta\, m}}\right) \qquad (4)$$
where $R$ is a bound on the function $\exp\big(\beta_0^y + \sum_{j=1}^{d} \beta_j^y x_j\big)$, $\forall x \in \mathcal{X}$ and $\forall y \in Y$, and $\sigma_0$ is a constant.
Proof (sketch) The difference $E(h_m) - E(h_\infty)$ is bounded by the probability that the asymptotic MLR classifier $h_\infty$ correctly classifies an example $(x, y) \in \mathcal{X} \times Y$ randomly chosen from $D$, while $h_m$ misclassifies it. Using Lemma 1, for all $\delta \in (0, 1)$, $\forall x \in \mathcal{X}$, $\forall y \in Y$, with probability at least $1 - \delta$, we have:
$$\big|P(y|x, \hat\beta_m) - P(y|x, \hat\beta_\infty)\big| < d\,\sqrt{\frac{R\,|Y|\,\sigma_0}{\delta\, m}}$$
Thus, the decisions made by the trained MLR and its asymptotic version on an example $(x, y)$ differ only if the distance between the posterior probabilities of the two classes predicted by the asymptotic classifier is less than two times the distance between the posterior probabilities obtained with $\hat\beta_m$ and $\hat\beta_\infty$ on that example; the probability of this is exactly $G_Y\!\big(d\sqrt{R|Y|\sigma_0/(\delta m)}\big)$, which upper-bounds $E(h_m) - E(h_\infty)$.
Note that the quantity $\sigma_0$ in Theorem 2 represents the largest value of the inverse (diagonal) Fisher information matrix ([17]). It thus corresponds to the smallest value of the (diagonal) Fisher information matrix, and is related to the smallest amount of information one has on the estimation of each parameter $\hat\beta_j^k$. This smallest amount of information is in turn related to the length (in number of occurrences) of the longest (resp. shortest) class in $Y$, denoted respectively by $n_{\max}$ and $n_{\min}$: the smaller they are, the larger $\sigma_0$ is likely to be.
2.3 A learning-based node pruning strategy
Let us now consider a hierarchy of classes and a top-down classifier making decisions at each level of the hierarchy. A node-based pruning strategy can easily be derived from the approximation bounds above. Indeed, any node $v$ in the hierarchy $H = (V, E)$ is associated with three category sets: its sister categories with the node itself, $S^0(v) = S(v) \cup \{v\}$, its daughter categories, $D(v)$, and the union of its sister and daughter categories, denoted $F(v) = S(v) \cup D(v)$. These three sets of categories are the ones involved before and after the pruning of node $v$.
[Figure: pruning node $v$ replaces the two-level structure over $S(v) \cup \{v\}$ and $D(v)$ by a single level over $F(v) = S(v) \cup D(v)$.]
Let us now denote by $h_m^{S^0_v}$ the MLR classifier learned from the set of sister categories of node $v$ and the node itself, and by $h_m^{D_v}$ a MLR classifier learned from the set of daughter categories of node $v$ ($h_\infty^{S^0_v}$ and $h_\infty^{D_v}$ respectively denote their asymptotic versions). The following theorem is a direct extension of Theorem 2 to this setting.
Theorem 3 With the notations defined above, for MLR classifiers, $\forall \epsilon > 0$, $\forall v \in V \setminus Y$, one has, with probability at least $1 - \left(\frac{R\, d^2\, |S^0(v)|\, \sigma_0^{S^0(v)}}{m_{S^0(v)}\, \epsilon^2} + \frac{R\, d^2\, |D(v)|\, \sigma_0^{D(v)}}{m_{D(v)}\, \epsilon^2}\right)$:
$$E(h_m^{S^0_v}) + E(h_m^{D_v}) \le E(h_\infty^{S^0_v}) + E(h_\infty^{D_v}) + G_{S^0(v)}(\epsilon) + G_{D(v)}(\epsilon)$$
where $\{|Y^\ell|, m_{Y^\ell}, \sigma_0^{Y^\ell};\ Y^\ell \in \{S^0(v), D(v)\}\}$ are constants related to the sets of categories $Y^\ell \in \{S^0(v), D(v)\}$ and involved in the respective bounds stated in Theorem 2. Denoting by $h_m^{F_v}$ the MLR classifier trained on the set $F(v)$ and by $h_\infty^{F_v}$ its asymptotic version, Theorem 3 suggests that one should prune node $v$ if:
$$G_{F(v)}(\epsilon) \le G_{S^0(v)}(\epsilon) + G_{D(v)}(\epsilon) \quad \text{and} \quad \frac{|F(v)|\, \sigma_0^{F(v)}}{m_{F(v)}} \le \frac{|S^0(v)|\, \sigma_0^{S^0(v)}}{m_{S^0(v)}} + \frac{|D(v)|\, \sigma_0^{D(v)}}{m_{D(v)}} \qquad (5)$$
Furthermore, the bounds obtained rely on the union bound and thus are not likely to be exploitable as such in practice. They nevertheless exhibit the factors that play an important role in assessing whether a particular trained classifier in the logistic regression family is close or not to its asymptotic version. Each node $v \in V$ can then be characterized by the factors in the set $\{|Y^\ell|, m_{Y^\ell}, n_{\max}^{Y^\ell}, n_{\min}^{Y^\ell}, G_{Y^\ell}(\cdot) \mid Y^\ell \in \{S^0(v), D(v), F(v)\}\}$, which are involved in the estimation of inequalities (5) above. We propose to estimate the confusion term $G_{Y^\ell}(\cdot)$ with two simple quantities: the average cosine similarity of all the pairs of classes in $Y^\ell$, and the average symmetric Kullback-Leibler divergence between all the pairs in $Y^\ell$ of class-conditional multinomial distributions.
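The two confusion proxies just described can be computed from per-class term-count profiles along the following lines (a sketch under our own modeling choices, e.g. the smoothing constant and the use of summed counts as class profiles, which the paper does not specify):

```python
import numpy as np

def confusion_features(class_profiles, eps=1e-12):
    """Average pairwise cosine similarity and average pairwise symmetric
    KL divergence over all pairs of classes in a category set.

    `class_profiles` is a (k, d) nonnegative array, one row per class
    (e.g. summed term counts of the class's documents).
    """
    X = np.asarray(class_profiles, dtype=float) + eps
    # cosine similarity between L2-normalized rows
    U = X / np.linalg.norm(X, axis=1, keepdims=True)
    C = U @ U.T
    # symmetric KL between row-normalized multinomials
    M = X / X.sum(axis=1, keepdims=True)
    logM = np.log(M)
    kl = (M[:, None, :] * (logM[:, None, :] - logM[None, :, :])).sum(-1)
    skl = kl + kl.T
    iu = np.triu_indices(X.shape[0], 1)    # all unordered class pairs
    return float(C[iu].mean()), float(skl[iu].mean())
```

High average cosine similarity (or low average symmetric KL) indicates classes that are hard to tell apart, i.e. a large confusion term.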
The procedure for collecting training data associates a positive (resp. negative) class to a node if
the pruning of that node leads to a final performance increase (resp. decrease). A meta-classifier is
then trained on these features using a training set from a selected class hierarchy. After the learning
phase, the meta-classifier is applied to each node of a new hierarchy of classes so as to identify
which nodes should be pruned. A simple strategy to adopt is then to prune nodes in sequence: starting from the root node, the algorithm checks which children of a given node v should be pruned by
creating the corresponding meta-instance and feeding the meta-classifier; the child that maximizes
the probability of the positive class is then pruned; as the set of categories has changed, we recalculate which children of v can be pruned, prune the best one (as above) and iterate this process till no
more children of v can be pruned; we then proceed to the children of v and repeat the process.
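The greedy pruning loop described in this paragraph can be sketched as follows (an illustrative implementation with our own data structures and a hypothetical `meta_prob` interface standing in for the trained meta-classifier; the 0.5 decision threshold is our assumption):

```python
def greedy_prune(root, daughters, parent, meta_prob):
    """Greedy top-down pruning: at each node v, repeatedly prune the
    internal child that the meta-classifier deems most likely beneficial
    to prune, until no child qualifies, then proceed to v's children.

    Pruning a child c attaches c's daughters directly to v (c's daughters
    join c's sisters, as in the definition of F(c)).
    `meta_prob(c, daughters)` returns the meta-classifier's probability
    that pruning c helps.
    """
    frontier = [root]
    while frontier:
        v = frontier.pop()
        while True:
            cand = [c for c in daughters[v] if daughters[c]]  # internal kids
            if not cand:
                break
            best = max(cand, key=lambda c: meta_prob(c, daughters))
            if meta_prob(best, daughters) <= 0.5:
                break
            daughters[v].remove(best)          # prune `best`
            daughters[v].extend(daughters[best])
            for c in daughters[best]:
                parent[c] = v
            daughters[best] = []
        frontier.extend(c for c in daughters[v] if daughters[c])
    return daughters
```

The inner loop re-evaluates the candidates after every pruning, mirroring the recalculation step described above.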
3 Discussion
We start our discussion by presenting results on different hierarchical datasets with different characteristics using MLR and SVM classifiers. The datasets we used in these experiments are two large datasets extracted from the International Patent Classification (IPC) dataset³ and the publicly available DMOZ dataset from the second PASCAL large-scale hierarchical text classification challenge (LSHTC2)⁴. Both datasets are multi-class; IPC is single-label and LSHTC2 multi-label, with an average of 1.02 categories per class. We created 4 datasets from LSHTC2 by randomly splitting the first-layer nodes (11 in total) of the original hierarchy into disjoint subsets. The classes for the IPC and LSHTC2 datasets are organized in a hierarchy in which the documents are assigned to the leaf categories only. Table 1 presents the characteristics of the datasets.
³ http://www.wipo.int/classifications/ipc/en/support/
⁴ http://lshtc.iit.demokritos.gr/

Dataset     # Tr.    # Test   # Classes   # Feat.     Depth   CR      Error ratio
LSHTC2-1    25,310   6,441    1,789       145,859     6       0.008   1.24
LSHTC2-2    50,558   13,057   4,787       271,557     6       0.003   1.32
LSHTC2-3    38,725   10,102   3,956       145,354     6       0.004   2.65
LSHTC2-4    27,924   7,026    2,544       123,953     6       0.005   1.8
LSHTC2-5    68,367   17,561   7,212       192,259     6       0.002   2.12
IPC         46,324   28,926   451         1,123,497   4       0.02    12.27

Table 1: Datasets used in our experiments along with their properties: number of training examples, test examples, classes, the size of the feature space, the depth of the hierarchy, the complexity ratio of the hierarchical over the flat case ($\sum_{v \in V \setminus Y} |D(v)|(|D(v)| - 1)\,/\,|Y|(|Y| - 1)$), and the ratio of empirical errors for hierarchical and flat models.

CR denotes the complexity ratio between hierarchical and flat classification, given by the Rademacher complexity term in Theorem 1: $\big(\sum_{v \in V \setminus Y} |D(v)|(|D(v)| - 1)\big) \,/\, \big(|Y|(|Y| - 1)\big)$; the same constants $B$, $R$ and $L$ are used in the two cases. As one can note, this complexity ratio always goes in favor of the hierarchical strategy, although it is 2 to 10 times higher on the IPC dataset compared to LSHTC2-1,2,3,4,5. On the other hand, the ratio of empirical errors (last column of Table 1) obtained with top-down hierarchical classification over flat classification when using SVM
with a linear kernel is this time higher than 1, suggesting the opposite conclusion. The error ratio is furthermore much larger on IPC than on LSHTC2-1,2,3,4,5. The comparison of the complexity and error ratios on all the datasets thus suggests that the flat classification strategy may be
preferred on IPC, whereas the hierarchical one is more likely to be efficient on the LSHTC datasets.
This is indeed the case, as is shown below.
To test our simple node pruning strategy, we learned binary classifiers aiming at deciding whether
to prune a node, based on the node features described in the previous section. The label associated
to each node in this training set is defined as +1 if pruning the node increases the accuracy of the
hierarchical classifier by at least 0.1, and -1 if pruning the node decreases the accuracy by more than
0.1. The threshold at 0.1 is used to avoid too much noise in the training set. The meta-classifier
is then trained to learn a mapping from the vector representation of a node (based on the above
features) and the labels {+1, −1}. We used the first two datasets of LSHTC2 to extract the training
data while LSHTC2-3, 4, 5 and IPC were employed for testing.
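The labeling rule for this meta-training set can be sketched as follows (a minimal illustration with made-up accuracy values, expressed in percentage points):

```python
def pruning_label(acc_original, acc_pruned, threshold=0.1):
    """Meta-label for a candidate node: +1 if pruning it raises accuracy by at
    least `threshold` points, -1 if it lowers accuracy by more than `threshold`;
    nodes in between are dropped as too noisy to serve as training examples."""
    delta = acc_pruned - acc_original
    if delta >= threshold:
        return +1
    if delta < -threshold:
        return -1
    return None  # inside the noise band: not used for meta-training

print(pruning_label(52.8, 53.0))   # 1
print(pruning_label(52.8, 52.6))   # -1
print(pruning_label(52.8, 52.85))  # None
```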
The procedure for collecting training data is repeated for the MLR and SVM classifiers resulting in
three meta-datasets of 119 (19 positive and 100 negative), 89 (34 positive and 55 negative) and 94 (32
positive and 62 negative) examples respectively. For the binary classifiers, we used AdaBoost with
random forest as a base classifier, setting the number of trees to 20, 50 and 50 for the MLR and SVM
classifiers respectively and leaving the other parameters at their default values. Several values have
been tested for the number of trees ({10, 20, 50, 100 and 200}), the depth of the trees ({unrestricted,
5, 10, 15, 30, 60}), as well as the number of iterations in AdaBoost ({10, 20, 30}). The final values
were selected by cross-validation on the training set (LSHTC2-1 and LSHTC2-2) as the ones that
maximized accuracy and minimized false-positive rate in order to prevent degradation of accuracy.
We compare the fully flat classifier (FL) with the fully hierarchical (FH) top-down Pachinko machine, a random pruning (RN) and the proposed pruning method (PR). For the random pruning
we restrict the procedure to the first two levels and perform 4 random prunings (this is the average
number of prunings that are performed in our approach). For each dataset we perform 5 independent runs for the random pruning and we record the best performance. For MLR and SVM, we use
the LibLinear library [8] and apply the L2-regularized versions, setting the penalty parameter C by
cross-validation.
The results on LSHTC2-3,4,5 and IPC are reported in Table 2. On all LSHTC datasets flat classification performs worse than the fully hierarchical top-down classification, for all classifiers. These results are in line with the complexity and empirical error ratios for SVM estimated on the different collections and shown in Table 1, as well as with the results obtained in [14, 7] over the same type of taxonomies. Further, the work by [14] demonstrated that class hierarchies on LSHTC datasets suffer from the rare categories problem, i.e., 80% of the target categories in such hierarchies have less than
5 documents assigned to them.
As a result, flat methods on such datasets face unbalanced classification problems which results in
smaller error ratios; hierarchical classification should be preferred in this case. On the other hand,
for hierarchies such as the one of IPC, which are relatively well balanced and do not suffer from
the rare categories phenomenon, flat classification performs at par or even better than hierarchical
        LSHTC2-3          LSHTC2-4          LSHTC2-5          IPC
        MLR      SVM      MLR      SVM      MLR      SVM      MLR      SVM
FL      0.528    0.535    0.497    0.501    0.542    0.547    0.546    0.446
RN      0.493**  0.517**  0.478**  0.484**  0.532**  0.536*   0.547*   0.458**
FH      0.484**  0.498**  0.473**  0.476*   0.526*   0.527    0.552*   0.465**
PR      0.480    0.493    0.469    0.472    0.522    0.523    0.544    0.450

Table 2: Error results across all datasets. Bold typeface is used for the best results. Statistical significance (using the micro sign test (s-test) as proposed in [20]) is denoted with * for p-value < 0.05 and with ** for p-value < 0.01.
classification. This is in agreement with the conclusions obtained in recent studies, such as [2, 9, 16, 6], in which the datasets considered do not have rare categories and are better balanced.
The proposed hierarchy pruning strategy aims to adapt the given taxonomy structure for better classification while maintaining the ancestor-descendant relationship between a given pair of nodes. As
shown in Table 2, this simple learning based pruning strategy leads to statistically significant better
results for all three classifiers compared to both the original taxonomy and a randomly pruned one.
A similar result is reported in [18] through a pruning of an entire layer of the hierarchy, which can be
seen as a generalization, even though empirical in nature, of the pruning strategy retained here. Another interesting approach to modify the original taxonomy is presented in [21]. In this study, three
other elementary modification operations are considered, again with an increase of performance.
4 Conclusion
We have studied in this paper flat and hierarchical classification strategies in the context of large-scale taxonomies, through error generalization bounds of multiclass, hierarchical classifiers. The
first theorem we have introduced provides an explanation to several empirical results related to the
performance of such classifiers. We have also introduced a well-founded way to simplify a taxonomy
by selectively pruning some of its nodes, through a meta-classifier. The features retained in this
meta-classifier derive from the error generalization bounds we have proposed. The experimental
results reported here (as well as in other papers) are in line with our theoretical developments and
justify the pruning strategy adopted.
This is the first time, to our knowledge, that a data-dependent error generalization bound is proposed for multiclass, hierarchical classifiers and that a theoretical explanation is provided for the
performance of flat and hierarchical classification strategies in large-scale taxonomies. In particular,
there is, up to now, no consensus on which classification scheme, flat or hierarchical, to use on a
particular category system. One of our main conclusions is that top-down hierarchical classifiers
are well suited to unbalanced, large-scale taxonomies, whereas flat ones should be preferred for
well-balanced taxonomies.
Lastly, our theoretical development also suggests possibilities to grow a hierarchy of classes from
a (large) set of categories, as has been done in several studies (e.g. [2]). We plan to explore this in
future work.
5 Acknowledgments
This work was supported in part by the ANR project Class-Y, the Mastodons project Garguantua, the
LabEx PERSYVAL-Lab ANR-11-LABX-0025 and the European project BioASQ (grant agreement
no. 318652).
References
[1] P. L. Bartlett and S. Mendelson. Rademacher and Gaussian complexities: Risk bounds and structural results. Journal of Machine Learning Research, 3:463–482, 2002.
[2] S. Bengio, J. Weston, and D. Grangier. Label embedding trees for large multi-class tasks. In Advances in Neural Information Processing Systems 23, pages 163–171, 2010.
[3] L. Cai and T. Hofmann. Hierarchical document categorization with support vector machines. In Proceedings of the 13th ACM International Conference on Information and Knowledge Management (CIKM), pages 78–87. ACM, 2004.
[4] O. Dekel. Distribution-calibrated hierarchical classification. In Advances in Neural Information Processing Systems 22, pages 450–458, 2009.
[5] O. Dekel, J. Keshet, and Y. Singer. Large margin hierarchical classification. In Proceedings of the 21st International Conference on Machine Learning, pages 27–35, 2004.
[6] J. Deng, S. Satheesh, A. C. Berg, and F.-F. Li. Fast and balanced: Efficient label tree learning for large scale object recognition. In Advances in Neural Information Processing Systems 24, pages 567–575, 2011.
[7] S. Dumais and H. Chen. Hierarchical classification of web content. In Proceedings of the 23rd Annual International ACM SIGIR Conference, pages 256–263, 2000.
[8] R.-E. Fan, K.-W. Chang, C.-J. Hsieh, X.-R. Wang, and C.-J. Lin. LIBLINEAR: A library for large linear classification. Journal of Machine Learning Research, 9:1871–1874, 2008.
[9] T. Gao and D. Koller. Discriminative learning of relaxed hierarchy for large-scale visual recognition. In IEEE International Conference on Computer Vision (ICCV), pages 2072–2079, 2011.
[10] S. Gopal, Y. Yang, and A. Niculescu-Mizil. Regularization framework for large scale hierarchical classification. In Large Scale Hierarchical Classification, ECML/PKDD Discovery Challenge Workshop, 2012.
[11] S. Gopal, Y. Yang, B. Bai, and A. Niculescu-Mizil. Bayesian models for large-scale hierarchical classification. In Advances in Neural Information Processing Systems 25, 2012.
[12] Y. Guermeur. Sample complexity of classifiers taking values in R^Q, application to multi-class SVMs. Communications in Statistics - Theory and Methods, 39, 2010.
[13] T. Hastie, R. Tibshirani, and J. Friedman. The Elements of Statistical Learning. Springer New York Inc., 2001.
[14] T.-Y. Liu, Y. Yang, H. Wan, H.-J. Zeng, Z. Chen, and W.-Y. Ma. Support vector machines classification with a very large-scale taxonomy. SIGKDD, 2005.
[15] H. Malik. Improving hierarchical SVMs by hierarchy flattening and lazy classification. In 1st Pascal Workshop on Large Scale Hierarchical Classification, 2009.
[16] F. Perronnin, Z. Akata, Z. Harchaoui, and C. Schmid. Towards good practice in large-scale learning for image classification. In Computer Vision and Pattern Recognition, pages 3482–3489, 2012.
[17] M. Schervish. Theory of Statistics. Springer Series in Statistics. Springer New York Inc., 1995.
[18] X. Wang and B.-L. Lu. Flatten hierarchies for large-scale hierarchical text categorization. In 5th International Conference on Digital Information Management, pages 139–144, 2010.
[19] K. Q. Weinberger and O. Chapelle. Large margin taxonomy embedding for document categorization. In Advances in Neural Information Processing Systems 21, pages 1737–1744, 2008.
[20] Y. Yang and X. Liu. A re-examination of text categorization methods. In Proceedings of the 22nd Annual International ACM SIGIR Conference, pages 42–49. ACM, 1999.
[21] J. Zhang, L. Tang, and H. Liu. Automatically adjusting content taxonomies for hierarchical classification. In Proceedings of the 4th Workshop on Text Mining, 2006.
Robust Bloom Filters for Large Multilabel Classification Tasks
Moustapha Cissé
LIP6, UPMC
Sorbonne Université
Paris, France
[email protected]

Nicolas Usunier
UT Compiègne, CNRS
Heudiasyc UMR 7253
Compiègne, France
[email protected]

Thierry Artières, Patrick Gallinari
LIP6, UPMC
Sorbonne Université
Paris, France
[email protected]
Abstract
This paper presents an approach to multilabel classification (MLC) with a large
number of labels. Our approach is a reduction to binary classification in which
label sets are represented by low dimensional binary vectors. This representation
follows the principle of Bloom filters, a space-efficient data structure originally
designed for approximate membership testing. We show that a naive application
of Bloom filters in MLC is not robust to individual binary classifiers' errors. We
then present an approach that exploits a specific feature of real-world datasets
when the number of labels is large: many labels (almost) never appear together.
Our approach is provably robust, has sublinear training and inference complexity
with respect to the number of labels, and compares favorably to state-of-the-art
algorithms on two large scale multilabel datasets.
1 Introduction
Multilabel classification (MLC) is a classification task where each input may be associated to several
class labels, and the goal is to predict the label set given the input. This label set may, for instance,
correspond to the different topics covered by a text document, or to the different objects that appear
in an image. The standard approach to MLC is the one-vs-all reduction, also called Binary Relevance (BR) [16], in which one binary classifier is trained for each label to predict whether the label
should be predicted for that input. While BR remains the standard baseline for MLC problems, a lot
of attention has recently been given to improve on it. The first main issue that has been addressed
is to improve prediction performances at the expense of computational complexity by learning correlations between labels [5], [8], [9] or considering MLC as an unstructured classification problem
over label sets in order to optimize the subset 0/1 loss (a loss of 1 is incurred as soon as the method
gets one label wrong) [16]. The second issue is to design methods that scale to a large number of
labels (e.g. thousands or more), potentially at the expense of prediction performances, by learning
compressed representations of label sets with lossy compression schemes that are efficient when label sets have small cardinality [6]. We propose here a new approach to MLC in this latter line of work. A "MLC dataset" refers here to a dataset with a large number of labels (at least hundreds to thousands), in which the target label sets are smaller than the number of labels by one or several orders of magnitude, which is common in large-scale MLC datasets collected from the Web.
The major difficulty in large-scale MLC problems is that the computational complexity of training
and inference of standard methods is at least linear in the number of labels L. In order to scale better
with L, our approach to MLC is to encode individual labels on K-sparse bit vectors of dimension B,
where B ≪ L, and use a disjunctive encoding of label sets (i.e. bitwise-OR of the codes of the labels
that appear in the label set). Then, we learn one binary classifier for each of the B bits of the coding
vector, similarly to BR (where K = 1 and B = L). By setting K > 1, one can encode individual
labels unambiguously on far less than L bits while keeping the disjunctive encoding unambiguous
for a large number of label sets of small cardinality. Compared to BR, our scheme learns only B
binary classifiers instead of L, while conserving the desirable property that the classifiers can be
trained independently and thus in parallel, making our approach suitable for large-scale problems.
The critical point of our method is a simple scheme to select the K representative bits (i.e. those
set to 1) of each label with two desirable properties. First, the encodings of "relevant" label sets are unambiguous with the disjunctive encoding. Secondly, the decoding step, which recovers a label
set from an encoding vector, is robust to prediction errors in the encoding vector: in particular, we
prove that the number of incorrectly predicted labels is no more than twice the number of incorrectly
predicted bits. Our (label) encoding scheme relies on the existence of mutually exclusive clusters
of labels in real-life MLC datasets, where labels in different clusters (almost) never appear in the
same label set, but labels from the same clusters can. Our encoding scheme makes that B becomes
smaller as more clusters of similar size can be found. In practice, a strict partitioning of the labels
into mutually exclusive clusters does not exist, but it can be fairly well approximated by removing a
few of the most frequent labels, which are then dealt with the standard BR approach, and clustering
the remaining labels based on their co-occurrence matrix. That way, we can control the encoding
dimension B and deal with the computational cost/prediction accuracy tradeoff.
Our approach was inspired and motivated by Bloom filters [2], a well-known space-efficient randomized data structure designed for approximate membership testing. Bloom filters use exactly the
principle of encoding objects (in our case, labels) by K-sparse vectors and encode a set with the
disjunctive encoding of its members. The filter can be queried with one object and the answer is
correct up to a small error probability. The data structure is randomized because the representative
bits of each object are obtained by random hash functions; under uniform probability assumptions
for the encoded set and the queries, the encoding size B of the Bloom filter is close to the information-theoretic limit for the desired error rate. Such "random" Bloom filter encodings are our main
baseline, and we consider our approach as a new design of the hash functions and of the decoding
algorithm to make Bloom filters robust to errors in the encoding vector. Some background on (random) Bloom filters, as well as how to apply them for MLC, is given in the next section. The design
of hash functions and the decoding algorithm are then described in Section 3, where we also discuss
the properties of our method compared to related works of [12, 15, 4]. Finally, in Section 4, we
present experimental results on two benchmark MLC datasets with a large number of classes, which
show that our approach obtains promising performances compared to existing approaches.
2 Bloom Filters for Multilabel Classification
Our approach is a reduction from MLC to binary classification, where the rules of the reduction follow a scheme inspired by the encoding/decoding of sets used in Bloom filters. We first describe the
formal framework to fix the notation and the goal of our approach, and then give some background
on Bloom filters. The main contribution of the paper is described in the next section.
Framework Given a set of labels L of size L, MLC is the problem of learning a prediction function c that, for each possible input x, predicts a subset of L. Throughout the paper, the letter y
is used for label sets, while the letter ℓ is used for individual labels. Learning is carried out on a
training set ((x1 , y1 ), ..., (xn , yn )) of inputs for which the desired label sets are known; we assume
the examples are drawn i.i.d. from the data distribution D.
A reduction from MLC to binary classification relies on an encoding function e : y ⊆ L ↦ (e1(y), ..., eB(y)) ∈ {0,1}^B, which maps subsets of L to bit vectors of size B. Then, each of the B bits is learnt independently by training a sequence of binary classifiers ê = (ê1, ..., êB), where each êj is trained on ((x1, ej(y1)), ..., (xn, ej(yn))). Given a new instance x, the encoding ê(x) is predicted, and the final multilabel classifier c is obtained by decoding ê(x), i.e. ∀x, c(x) = d(ê(x)).
The goal of this paper is to design the encoding and decoding functions so that two conditions are
met. First, the code size B should be small compared to L, in order to improve the computational
cost of training and inference relatively to BR. Second, the reduction should be robust in the sense
that the final performance, measured by the expected Hamming loss HL(c) between the target label sets y and the predictions c(x), is not much larger than HB(ê), the average error of the classifiers we
learn. Using Δ to denote the symmetric difference between sets, HL and HB are defined by:

$$H_L(c) = \mathbb{E}_{(x,y)\sim D}\left[\frac{|c(x)\,\Delta\,y|}{L}\right] \qquad\text{and}\qquad H_B(\hat{e}) = \frac{1}{B}\sum_{j=1}^{B}\mathbb{E}_{(x,y)\sim D}\left[\mathbf{1}\{e_j(y)\neq\hat{e}_j(y)\}\right]. \qquad (1)$$
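The empirical counterpart of the multilabel Hamming loss in Eq. (1) is straightforward to compute; the following sketch uses made-up label sets:

```python
def hamming_loss(predicted_sets, true_sets, num_labels):
    """Empirical H_L: average size of the symmetric difference between the
    predicted and true label sets, normalized by the number of labels L."""
    total = sum(len(pred ^ true) for pred, true in zip(predicted_sets, true_sets))
    return total / (num_labels * len(true_sets))

# One example with L = 4 labels: prediction {a, c}, target {a, b}.
print(hamming_loss([{"a", "c"}], [{"a", "b"}], num_labels=4))  # |{b, c}| / 4 = 0.5
```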
[Figure 1 (rendered in the original paper): a table of the hash values h1(ℓ), h2(ℓ), h3(ℓ) for labels ℓ1, ..., ℓ8, together with the example encodings e({ℓ1}), e({ℓ4}) and e({ℓ1, ℓ3, ℓ4}) = e({ℓ1, ℓ4}), and a decoding example where c(x) = d(ê(x)) = {ℓ3} for the example (x, {ℓ1, ℓ4}).]

Figure 1: Examples of a Bloom filter for a set L = {ℓ1, ..., ℓ8} with 8 elements, using 3 hash functions and 6 bits. (left) The table gives the hash values for each label. (middle-left) For each label, the hash functions give the index of the bits that are set to 1 in the 6-bit boolean vector. The examples of the encodings for {ℓ1} and {ℓ4} are given. (middle-right) Example of a false positive: the representation of the subset {ℓ1, ℓ4} includes all the representative bits of label ℓ3, so that ℓ3 would be decoded erroneously. (right) Example of propagation of errors: a single erroneous bit in the label set encoding, together with a false positive, leads to three label errors in the final prediction.
Bloom Filters Given the set of labels L, a Bloom filter (BF) of size B uses K hash functions from
L to {1, ..., B}, which we denote hk : L → {1, ..., B} for k ∈ {1, ..., K} (in a standard approach, each value hk(ℓ) is chosen uniformly at random in {1, ..., B}). These hash functions define the representative bits (i.e. non-zero bits) of each label: each singleton {ℓ} for ℓ ∈ L is encoded by a bit
vector of size B with at most K non-zero bits, and each hash function gives the index of one of these
nonzero bits in the bit vector. Then, the Bloom filter encodes a subset y ⊆ L by a bit vector of size
B, defined by the bitwise OR of the bit vectors of the elements of y. Given the encoding of a set, the
Bloom filter can be queried to test the membership of any label ℓ; the filter answers positively if all the representative bits of ℓ are set to 1, and negatively otherwise. A negative answer of the Bloom
filter is always correct; however, the bitwise OR of label set encodings leads to the possibility of
false positives, because even though any two labels have different encodings, the representative bits
of one label can be included in the union of the representative bits of two or more other labels.
Figure 1 (left) to (middle-right) give representative examples of the encoding/querying scheme of
Bloom filters and an example of false positive.
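The encoding and querying scheme just described can be sketched as follows; the hash functions are simulated here with a seeded digest rather than true random functions, and all sizes and label names are illustrative:

```python
import hashlib

class LabelBloomFilter:
    """Sketch of a standard Bloom filter over labels (S-BF): each label has K
    representative bits; a label set is encoded as the bitwise OR of its
    members' encodings; a query checks whether all representative bits of the
    queried label are set (no false negatives, possible false positives)."""

    def __init__(self, num_bits, num_hashes):
        self.B, self.K = num_bits, num_hashes

    def _representative_bits(self, label):
        # Deterministic stand-in for K random hash functions h_k(label).
        return [int(hashlib.sha1(f"{k}:{label}".encode()).hexdigest(), 16) % self.B
                for k in range(self.K)]

    def encode(self, label_set):
        code = [0] * self.B
        for label in label_set:
            for b in self._representative_bits(label):
                code[b] = 1
        return code

    def query(self, code, label):
        return all(code[b] for b in self._representative_bits(label))

bf = LabelBloomFilter(num_bits=64, num_hashes=3)
code = bf.encode({"sports", "politics"})
print(bf.query(code, "sports"))  # True: members always answer positively
```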
Bloom Filters for MLC The encoding and decoding schemes of BFs are appealing to define
the encoder e and the decoder d in a reduction of MLC to binary classification (decoding consists
in querying each label), because they are extremely simple and computationally efficient, but also
because, if we assume that B ≪ L and that the random hash functions are perfect, then, given a random subset of size C ≪ L, the false positive rate of a BF encoding this set is in $O\big((1/2)^{B\ln(2)/C}\big)$ for the optimal number of hash functions. This rate is, up to a constant factor, the information theoretic
limit [3]. Indeed, as shown in Section 4 the use of Bloom filters with random hash functions for
MLC (denoted S-BF for Standard BF hereafter) leads to rather good results in practice.
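Under these assumptions, the optimal number of hash functions and the corresponding false-positive rate can be computed directly (B and C below are arbitrary illustrative values):

```python
import math

def bf_false_positive_rate(num_bits, set_size):
    """For a Bloom filter with B bits encoding a random C-element set, the
    optimal number of hash functions is K = (B / C) ln 2, which gives a
    false-positive rate of about (1/2)^K = (1/2)^(B ln(2) / C)."""
    k_opt = (num_bits / set_size) * math.log(2)
    return k_opt, 0.5 ** k_opt

k_opt, fp_rate = bf_false_positive_rate(num_bits=128, set_size=10)
print(round(k_opt, 2), round(fp_rate, 4))  # about 8.87 hashes, rate about 0.0021
```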
Nonetheless, there is much room for improvement with respect to the standard approach above.
First, the distribution of label sets in usual MLC datasets is far from uniform. On the one hand, this
leads to a substantial increase in the error rate of the BF compared to the theoretical calculation, but,
on the other hand, it is an opportunity to make sure that false positive answers only occur in cases
that are detectable from the observed distribution of label sets: if y is a label set and ℓ ∉ y is a false positive given e(y), ℓ can be detected as a false positive if we know that ℓ never (or rarely) appears together with the labels in y. Second and more importantly, the decoding approach of BFs is far from robust to errors in the predicted representation. Indeed, BFs are able to encode subsets on B ≪ L
bits because each bit is representative for several labels. In the context of MLC, the consequence is
that any single bit incorrectly predicted may include in (or exclude from) the predicted label set all
the labels for which it is representative. Figure 1 (right) gives an example of the situation, where
a single error in the predicted encoding, added with a false positive, results in 3 errors in the final
prediction. Our main contribution, which we detail in the next section, is to use the non-uniform
distribution of label sets to design the hash functions and a decoding algorithm to make sure that
any incorrectly predicted bit has a limited impact on the predicted label set.
3 From Label Clustering to Hash Functions and Robust Decoding
We present a new method that we call Robust Bloom Filters (R-BF). It improves over random hash
functions by relying on a structural feature of the label sets in MLC datasets: many labels are never
observed in the same target set, or co-occur with a probability that is small enough to be neglected.
We first formalize the structural feature we use, which is a notion of mutually exclusive clusters of
labels, then we describe the hash functions and the robust decoding algorithm that we propose.
3.1 Label Clustering
The strict formal property on which our approach is based is the following: given P subsets
L1 , ..., LP of L, we say that (L1 , ..., LP ) are mutually exclusive clusters if no target set contains
labels from more than one of the clusters Lp, p = 1..P, or, equivalently, if the following condition holds:
$$\forall p \in \{1,\dots,P\},\quad \mathbb{P}_{y\sim D_Y}\Big(\, y \cap L_p \neq \emptyset \ \text{ and } \ y \cap \bigcup_{p' \neq p} L_{p'} \neq \emptyset \,\Big) = 0. \qquad (2)$$
where D_Y is the marginal distribution over label sets. For the disjunctive encoding of Bloom filters,
this assumption implies that if we design the hash functions such that the false positives for a label
set y belong to a cluster that is mutually exclusive with (at least one) label in y, then the decoding
step can detect and correct it. To that end, it is sufficient to ensure that for each bit of the Bloom filter,
all the labels for which this bit is representative belong to mutually exclusive clusters. This will lead
us to a simple two-step decoding algorithm: cluster identification, then label set prediction within the cluster. In terms of compression ratio B/L, we can directly see that the more mutually exclusive clusters, the
more labels can share a single bit of the Bloom filter. Thus, more (balanced) mutually exclusive
clusters will result in smaller encoding vectors B, making our method more efficient overall.
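Condition (2) has a simple empirical check on a dataset: no observed label set may intersect more than one cluster. A minimal sketch, with made-up labels and clusters:

```python
def mutually_exclusive(label_sets, clusters):
    """Check condition (2) empirically: every observed label set must touch
    labels from at most one of the given clusters."""
    for y in label_sets:
        clusters_touched = sum(1 for cluster in clusters if y & cluster)
        if clusters_touched > 1:
            return False
    return True

clusters = [{"a", "b"}, {"c", "d"}]
print(mutually_exclusive([{"a", "b"}, {"c"}], clusters))  # True
print(mutually_exclusive([{"a", "c"}], clusters))         # False
```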
This notion of mutually exclusive clusters is much stronger than our basic observation that some pairs
of labels rarely or never co-occur with each other, and in practice it may be difficult to find a partition of L into mutually exclusive clusters because the co-occurrence graph of labels is connected.
However, as we shall see in the experiments, after removing the few most central labels (which we
call hubs, and in practice roughly correspond to the most frequent labels), the labels can be clustered
into (almost) mutually exclusive labels using a standard clustering algorithm for weighted graph.
In our approach, the hubs are dealt with outside the Bloom filter, with a standard binary relevance
scheme. The prediction for the remaining labels is then constrained to predict labels from at most
one of the clusters. From the point of view of prediction performance, we lose the possibility of
predicting arbitrary label sets, but gain the possibility of correcting a non-negligible part of the incorrectly predicted bits. As we shall see in the experiments, the trade-off is very favorable. We would
like to note at this point that dealing with the hubs or the most frequent labels with binary relevance
may not particularly be a drawback of our approach: the distribution of label occurrences is
long-tailed, and the first few labels may be sufficiently important to deserve special treatment.
What really needs to be compressed is the large set of labels that occur rarely.
To find the label clustering, we first build the co-occurrence graph and remove the hubs using the
degree centrality measure. The remaining labels are then clustered using Louvain algorithm [1]; to
control the number of clusters, a maximum size is fixed and larger clusters are recursively clustered
until they reach the desired size. Finally, to obtain (almost) balanced clusters, the smallest clusters
are merged. Both the number of hubs and the cluster size are parameters of the algorithm, and, in
Section 4, we show how to choose them before training at negligible computational cost.
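The pipeline above (build the co-occurrence graph, remove hubs by degree centrality, cluster the rest under a size cap) can be sketched as follows. We substitute a simple greedy merge along the heaviest edges for the Louvain algorithm, so this is an illustration of the interface and of the hub/size-cap logic rather than the exact procedure; all names are ours.

```python
from collections import Counter
from itertools import combinations

def cluster_labels(label_sets, n_hubs, max_cluster_size):
    """Hub removal + greedy size-capped clustering (stand-in for Louvain)."""
    cooc = Counter()          # co-occurrence graph stored as weighted edges
    labels = set()
    for y in label_sets:
        labels |= set(y)
        cooc.update(combinations(sorted(y), 2))
    # Hubs: highest-degree labels in the co-occurrence graph.
    degree = Counter({l: 0 for l in labels})
    for a, b in cooc:
        degree[a] += 1
        degree[b] += 1
    hubs = {l for l, _ in degree.most_common(n_hubs)}
    # Union-find over the remaining labels; merge along the heaviest
    # edges while respecting the maximum cluster size.
    parent = {l: l for l in labels - hubs}

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    size = {l: 1 for l in parent}
    for (a, b), _ in cooc.most_common():
        if a in hubs or b in hubs:
            continue
        ra, rb = find(a), find(b)
        if ra != rb and size[ra] + size[rb] <= max_cluster_size:
            parent[rb] = ra
            size[ra] += size[rb]
    groups = {}
    for l in parent:
        groups.setdefault(find(l), set()).add(l)
    return hubs, list(groups.values())

data = [{1, 2}, {1, 2}, {3, 4}, {3, 4, 5}, {1, 5}]
hubs, clusters = cluster_labels(data, n_hubs=1, max_cluster_size=3)
print(hubs)   # {5}: label 5 co-occurs with the most distinct labels
```

After removing the hub, the remaining label sets only ever touch one of the clusters {1, 2} and {3, 4}, which is exactly the structure the encoding exploits.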
3.2
Hash functions and decoding
From now on, we assume that we have access to a partition of L into mutually exclusive clusters (in
practice, this corresponds to the labels that remain after removal of the hubs).
Hash functions Given the parameter K, constructing K-sparse encodings follows two conditions:
1. two labels from the same cluster cannot share any representative bit;
2. two labels from different clusters can share at most K − 1 representative bits.
bit index | representative for labels      bit index | representative for labels
    1     | {1, 2, 3, 4, 5}                    7     | {16, 17, 18, 19, 20}
    2     | {1, 6, 7, 8, 9}                    8     | {16, 21, 22, 23, 24}
    3     | {2, 6, 10, 11, 12}                 9     | {17, 21, 25, 26, 27}
    4     | {3, 7, 10, 13, 14}                10     | {18, 22, 25, 28, 29}
    5     | {4, 8, 11, 13, 15}                11     | {19, 23, 26, 28, 30}
    6     | {5, 9, 12, 14, 15}                12     | {20, 24, 27, 29, 30}

cluster index | labels in cluster
    1 | {1, 16}      6 | {6, 21}     11 | {11, 26}
    2 | {2, 17}      7 | {7, 22}     12 | {12, 27}
    3 | {3, 18}      8 | {8, 23}     13 | {13, 28}
    4 | {4, 19}      9 | {9, 24}     14 | {14, 29}
    5 | {5, 20}     10 | {10, 25}    15 | {15, 30}
Figure 2: Representative bits for 30 labels partitioned into P = 15 mutually exclusive label clusters
of size R = 2, using K = 2 representative bits per label and batches of Q = 6 bits. The table on the
right gives the label clustering. The injective mapping between labels and subsets of bits is defined,
for ℓ ∈ {1, ..., 15}, by mapping ℓ to the ℓ-th subset of size K = 2 of {1, ..., 6} in lexicographic order,
and, for ℓ ∈ {16, ..., 30}, by shifting the bits assigned to ℓ − 15 by 6.
Finding an encoding that satisfies the conditions above is not difficult if we consider, for each label,
the set of its representative bits. In the rest of the paragraph, we say that a bit of the Bloom filter "is
used for the encoding of a label" when this bit may be a representative bit of the label. If the bit "is
not used for the encoding of a label", then it cannot be a representative bit of the label.
Let us consider the P mutually exclusive label clusters, and denote by R the size of the largest
cluster. To satisfy Condition 1., we find an encoding on B = R·Q bits, for Q ≥ K and P ≤ \binom{Q}{K},
as follows. For a given r ∈ {1, ..., R}, the r-th batch of Q successive bits (i.e. the bits of index
(r − 1)Q + 1, (r − 1)Q + 2, ..., rQ) is used only for the encoding of the r-th label of each cluster.
That way, each batch of Q bits is used for the encoding of a single label per cluster (enforcing the
first condition) but can be used for the encoding of P labels overall. For Condition 2., we notice
that, given a batch of Q bits, there are \binom{Q}{K} different subsets of K bits. We then injectively map the (at
most) P labels to the subsets of size K to define the K representative bits of these labels. In the end,
with a Bloom filter of size B = R·Q, we have K-sparse encodings that satisfy the two conditions
above for L ≤ R·\binom{Q}{K} labels partitioned into P ≤ \binom{Q}{K} mutually exclusive clusters of size at most R.
Figure 2 gives an example of such an encoding. In the end, the scheme is most efficient (in terms of
the compression ratio B/L) when the clusters are perfectly balanced and when P is exactly equal
to \binom{Q}{K} for some Q. For instance, for K = 2, which we use in our experiments, if P = Q(Q+1)/2 for
some integer Q, and if the clusters are almost perfectly balanced, then B/L ≈ \sqrt{2/P}. The ratio
becomes more and more favorable as Q increases and as K increases up to Q/2, but the number
of different clusters P must also be large. Thus, the method should be most efficient on datasets
with a very large number of labels, assuming that P increases with L in practice.
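The batch construction can be written down directly. The sketch below (our own illustrative code, not the authors' implementation) picks the smallest Q with \binom{Q}{K} ≥ P, gives the r-th label of every cluster the r-th batch of Q bits, and assigns each cluster a distinct K-subset within its batch:

```python
from itertools import combinations
from math import comb

def build_encoding(clusters, K=2):
    """Return (B, bits): total filter size and, for each label, its
    frozenset of K representative bit indices."""
    P = len(clusters)
    R = max(len(c) for c in clusters)
    Q = K
    while comb(Q, K) < P:      # smallest batch size with enough K-subsets
        Q += 1
    subsets = list(combinations(range(Q), K))
    bits = {}
    for p, cluster in enumerate(clusters):
        for r, label in enumerate(cluster):
            # The r-th label of each cluster lives in the r-th batch of Q bits.
            bits[label] = frozenset(r * Q + b for b in subsets[p])
    return R * Q, bits

# P = 3 clusters of size R = 2 -> Q = 3 (comb(3, 2) = 3), so B = R * Q = 6.
B, bits = build_encoding([[1, 16], [2, 17], [3, 18]], K=2)
print(B)                                    # 6
print(sorted(bits[1]), sorted(bits[16]))    # [0, 1] [3, 4]
```

Condition 1 holds because labels of one cluster sit in disjoint batches, and Condition 2 holds because two distinct K-subsets of the same batch intersect in at most K − 1 bits.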
Decoding and Robustness We now present the decoding algorithm, followed by a theoretical
guarantee that each incorrectly predicted bit in the Bloom filter cannot imply more than 2 incorrectly
predicted labels.
Given an example x and its predicted encoding ê(x), the predicted label set d(ê(x)) is computed
with the following two-step process, in which we say that a bit is "representative of one cluster" if it
is a representative bit of one label in the cluster:

a. (Cluster Identification) For each cluster L_p, compute its cluster score s_p, defined as the
number of its representative bits that are set to 1 in ê(x). Choose L_{p*} for p* ∈ argmax_{p ∈ {1,...,P}} s_p;

b. (Label Set Prediction) For each label ℓ ∈ L_{p*}, let s′_ℓ be the number of representative bits of
ℓ set to 1 in ê(x); add ℓ to d(ê(x)) with probability s′_ℓ/K.
In case of ties in the cluster identification, the tie-breaking rule can be arbitrary. For instance,
in our experiments, we use logistic regression as base learners for binary classifiers, so we have
access to posterior probabilities of being 1 for each bit of the Bloom filter. In case of ties in the
cluster identification, we restrict our attention to the clusters that maximize the cluster score, and we
recompute their cluster scores using the posterior probabilities instead of the binary decision. The
cluster which maximizes the new cluster score is chosen. The choice of a randomized prediction for
the labels prevents a single incorrectly predicted bit from resulting in too many incorrectly predicted labels.
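A deterministic variant of the two-step decoder can be sketched as follows (our own code; the paper's step b is randomized, while here a label is kept only when all K of its representative bits are set), reusing a tiny hand-built encoding:

```python
def decode(e_hat, clusters, bits):
    """Step a: pick the cluster whose representative bits have the most 1s.
    Step b (deterministic variant): keep labels with all K bits set."""
    ones = {i for i, v in enumerate(e_hat) if v}
    scores = [len({b for l in cluster for b in bits[l]} & ones)
              for cluster in clusters]
    best = clusters[scores.index(max(scores))]
    K = len(next(iter(bits.values())))
    return {l for l in best if len(bits[l] & ones) == K}

# Encoding for 3 clusters of 2 labels over B = 6 bits (K = 2).
clusters = [[1, 16], [2, 17], [3, 18]]
bits = {1: frozenset({0, 1}), 16: frozenset({3, 4}),
        2: frozenset({0, 2}), 17: frozenset({3, 5}),
        3: frozenset({1, 2}), 18: frozenset({4, 5})}

print(decode([1, 1, 0, 1, 1, 0], clusters, bits))  # {1, 16}: clean filter
print(decode([0, 1, 0, 1, 1, 0], clusters, bits))  # {16}: bit 0 flipped off
```

In the second call the corrupted bit removes one label from the prediction but does not drag in labels from another cluster, which is the robustness property the decoding is designed for.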
The robustness of the encoding/decoding scheme is proved below:
Theorem 1 Let L be the label set, and let (L1 , ..., LP ) be a partition of L satisfying (2). Assume that
the encoding function satisfies Conditions 1. and 2., and that decoding is performed with the two-step
process a.-b. Then, using the definitions of HL and HB of (1), we have, for a K-sparse encoding,

H^L(d ∘ ê) ≤ (2B/L) · H^B(ê),

where the expectation in H^L is also taken over the randomized predictions.
Sketch of proof Let (x, y) be an example. We compare the expected number of incorrectly predicted
labels H̄^L(y, d(ê(x))) = E[ |d(ê(x)) Δ y| ] (expectation taken over the randomized prediction) and
the number of incorrectly predicted bits H̄^B(ê(x), e(y)) = Σ_{j=1}^{B} 1{ê_j(x) ≠ e_j(y)}. Let us denote by
p̄ the index of the cluster in which y is included, and by p* the index of the cluster chosen in step a. We
consider the two following cases:

p* = p̄: if the cluster is correctly identified, then each incorrectly predicted bit that is representative
for the cluster costs 1/K in H̄^L(y, d(ê(x))). All other bits do not matter. We thus have
H̄^L(y, d(ê(x))) ≤ (1/K) H̄^B(ê(x), e(y)).

p* ≠ p̄: if the cluster is not correctly identified, then H̄^L(y, d(ê(x))) is the sum of (1) the number
of labels that should be predicted but are not (|y|), and (2) the labels that are in the predicted
label set but should not be. To bound the ratio H̄^L(y, d(ê(x))) / H̄^B(ê(x), e(y)), we first notice that there are
at least as many representative bits predicted as 1 for L_{p*} as for L_{p̄}. Since each label
of L_{p̄} shares at most K − 1 representative bits with a label of L_{p*}, there are at least |y|
incorrect bits. Moreover, the maximum contribution to labels predicted in the incorrect
cluster by correctly predicted bits is at most ((K − 1)/K)|y|. Each additional contribution of 1/K
in H̄^L(y, d(ê(x))) comes from a bit that is incorrectly predicted to 1 instead of 0 (and is
representative for L_{p*}). Let us denote by k the number of such contributions. Then, the most
unfavorable ratio H̄^L(y, d(ê(x))) / H̄^B(ê(x), e(y)) is smaller than

$\max_{k \geq 0} \frac{\frac{k}{K} + |y|\left(1 + \frac{K-1}{K}\right)}{\max(|y|, k)} = \frac{\frac{|y|}{K} + |y|\left(1 + \frac{K-1}{K}\right)}{|y|} = 2.$

Taking the expectation over (x, y) completes the proof (the factor B/L comes from normalization factors).
3.3
Comparison to Related Works
The use of correlations between labels has a long history in MLC [11, 8, 14], but correlations
are most often used to improve prediction performances at the expense of computational complexity
through increasingly complex models, rather than to improve computational complexity using strong
negative correlations as we do here.
The most closely related work to ours is that of Hsu et al. [12], where the authors propose an approach based on compressed sensing to obtain low-dimensional encodings of label sets.
Their approach has the advantage of a theoretical guarantee in terms of regret (rather than error, as we provide), without strong structural assumptions on the label sets; the complexity of
learning scales in O(C ln(L)), where C is the number of labels in label sets. For our approach, since
\binom{Q}{Q/2} ≥ 4^{Q/2}/\sqrt{8\pi Q}, it could be possible to obtain a logarithmic rate under the rather strong
assumption that the number of clusters P increases linearly with L. As we shall see in our experiments, however, even with a rather large number of labels (e.g. 1,000), the asymptotic logarithmic
rate is far from being achieved for all methods. In practice, the main drawback of their method is
that they need to know the size of the label set to predict. This is an extremely strong requirement
when classification decisions are needed (less so when only a ranking of the labels is needed),
in contrast to our method, which is inherently designed for classification.
Another related work is that of [4], which is based on SVD for dimensionality reduction rather than
compressed sensing. Their method can exploit correlations between labels and make classification
decisions. However, their approach is purely heuristic, and no theoretical guarantee is given.
Figure 3: (left) Unrecoverable Hamming loss (UHL) due to label clustering of the R-BF as a function
of the code size B on RCV-Industries (similar behavior on the Wikipedia1k dataset). The optimal
curve represents the best UHL over different settings (number of hubs, max cluster size) for a given
code size. (right) Hamming loss vs. code size on RCV-Industries for different methods.
4
Experiments
We performed experiments on two large-scale real-world datasets: RCV-Industries, which is a subset
of the RCV1 dataset [13] that considers the industry categories only (we used the first testing set file
from the RCV1 site instead of the original training set since it is larger), and Wikipedia1k, which is
a subsample of the Wikipedia dataset release of the 2012 large-scale hierarchical text classification
challenge [17]. On both datasets, the labels are originally organized in a hierarchy, but we transformed them into plain MLC datasets by keeping only leaf labels. For RCV-Industries, we obtain
303 labels for 72,334 examples. The average cardinality of label sets is 1.73, with a maximum of
30; 20% of the examples have label sets of cardinality ≥ 2. For Wikipedia1k, we kept the 1,000
most represented leaf labels, which leads to 110,530 examples with an average label set cardinality
of 1.11 (max. 5). 10% of the examples have label sets of cardinality ≥ 2.
We compared our methods, the standard (i.e. with random hash function) BF (S-BF) and the Robust
BF (R-BF) presented in section 3, to binary relevance (BR) and to three MLC algorithms designed
for MLC problems with a large number of labels: a pruned version of BR proposed in [7] (called
BR-Dekel from now on), the compressed sensing approach (CS) of [12] and the principal label
space transformation (PLST) [4]. BR-Dekel consists in removing from the prediction all the labels
whose probability of a true positive (PTP) on the validation set is smaller than the probability of
a false positive (PFP). To control the code size B in BR-Dekel, we rank the labels based on the
ratio PTP/PFP and keep the top B labels. In that case, the inference complexity is similar to BF
models, but the training complexity is still linear in L. For CS, following [4], we used orthogonal
matching pursuit (CS-OMP) for decoding and selected the number of labels to predict in the range
{1, 2, . . . , 30}, on the validation set. For S-BF, the number of (random) hash functions K is also
chosen on the validation set among {1, 2, . . . , 10}. For R-BF, we use K = 2 hash functions.
The code size B can be freely set for all methods except for Robust BF, where different settings of
the maximum cluster size and the number of hubs may lead to the same code size. Since the use
of a label clustering in R-BF leads to unrecoverable errors even if the classifiers perform perfectly
well (because labels of different clusters cannot be predicted together), we chose the max cluster size
among {10, 20, . . . , 50} and the number of hubs (among {0, 10, 20, 30, . . . , 100} for RCV-Industries
and {0, 50, 100, . . . , 300} for Wikipedia1k) that minimize the resulting unrecoverable Hamming loss
(UHL), computed on the train set. Figure 3 (left) shows how the UHL naturally decreases when
the number of hubs increases since then the method becomes closer to BR, but at the same time
the overall code size B increases because it is the sum of the filter's size and the number of hubs.
Nonetheless, we can observe on the figure that the UHL rapidly reaches a very low value, confirming
that the label clustering assumption is reasonable in practice.
All the methods involve training binary classifiers or regression functions. On both datasets, we used
linear functions with L2 regularization (the global regularization factor in PLST and CS-OMP, as
well as the regularization factor of each binary classifier in BF and BR approaches, were chosen on
the validation set among {0, 0.1, . . . , 10^{-5}}), and unit-norm normalized TF-IDF features. We used
the Liblinear [10] implementation of logistic regression as the base binary classifier.
7
Table 1: Test Hamming loss (HL, in %), micro (m-F1) and macro (M-F1) F1-scores. B is code
size. The results of the significance test for a p-value less than 5% are denoted † to indicate the best
performing method using the same B and ‡ to indicate the best performing method overall.

                     RCV-Industries                       Wikipedia1K
Classifier    B     HL       m-F1     M-F1        B     HL        m-F1      M-F1
BR           303    0.200‡   72.43‡   47.82‡    1000    0.0711    55.96     34.7
BR-Dekel     150    0.308    46.98    30.14      250    0.0984    22.18     12.16
             200    0.233    65.78    40.09      500    0.0868    38.33     24.52
S-BF         150    0.223    67.45    40.29      250    0.0742    53.02     31.41
             200    0.217    68.32    40.95      500    0.0734    53.90     32.57
R-BF         150    0.210†   71.31†   43.44      240    0.0728†   55.85     34.65
             200    0.205†   71.86†   44.57      500    0.0705†‡  57.31     36.85
CS-OMP       150    0.246    67.59    45.22†     250    0.0886    57.96†    41.84†
             200    0.245    67.71    45.82†     500    0.0875    58.46†‡   42.52†‡
PLST         150    0.226    68.87    32.36      250    0.0854    42.45     09.53
             200    0.221    70.35    40.78      500    0.0828    45.95     16.73
Results Table 1 gives the test performances of all the methods on both datasets for different code
sizes. We are mostly interested in the Hamming loss, but we also provide the micro and macro
F-measures. The results are averaged over 10 random train/validation/test splits of the datasets,
respectively containing 50%/25%/25% of the data. The standard deviations of the values are negligible (smaller than 10^{-3} times the value of the performance measure). Our BF methods seem
to clearly outperform all other methods, and R-BF yields significant improvements over S-BF. On
Wikipedia1k, with 500 classifiers, the Hamming loss (in %) of S-BF is 0.0734 while it is only 0.0705
for R-BF. This performance is similar to that of BR (0.0711), which uses twice as many classifiers.
The simple pruning strategy BR-Dekel is the worst baseline on both datasets, confirming that considering all classes is necessary on these datasets. CS-OMP reaches a much higher Hamming loss
(about 23% worse than BR on both datasets when using 50% fewer classifiers). CS-OMP achieves the
best performance on the macro-F measure, though. This is because the size of the predicted label
sets is fixed for CS, which increases recall but leads to poor precision. We used OMP as the decoding
procedure for CS since it seemed to perform better than Lasso and Correlation decoding (CD) [12]
(for instance, on RCV-Industries with a code size of 500, OMP achieves a Hamming loss of 0.0875
while the Hamming loss is 0.0894 for Lasso and 0.1005 for CD). PLST improves over CS-OMP,
but its performances are lower than those of S-BF (by about 3.5% on RCV-Industries and 13% on
Wikipedia when using 50% fewer classifiers than BR). The macro F-measure indicates that PLST
likely suffers from class imbalance (only the most frequent labels are predicted), probably because
the label set matrix on which SVD is performed is dominated by the most frequent labels. Figure 3
(right) gives the general picture of the Hamming loss of the methods over a larger range of code sizes.
Overall, R-BF has the best performance except for very small code sizes, for which the UHL becomes
too high.
Runtime analysis Experiments were performed on a computer with 24 Intel Xeon 2.6 GHz CPUs.
For all methods, the overall training time is dominated by the time to train the binary classifiers
or regressors, which depends linearly on the code size. At test time, the time is also dominated by
the classifiers' predictions, and the decoding algorithm of R-BF is the fastest. For instance, on
Wikipedia1k, training one binary classifier takes 12.35s on average, and inference with one classifier
(for the whole test dataset) takes 3.18s. Thus, BR requires about 206 minutes (1000 × 12.35s) for
training and 53 minutes for testing on the whole test set. With B = 500, R-BF requires about half that
time, including the selection of the number of hubs and the max cluster size at training time, which
is small (computing the UHL of one R-BF configuration takes 9.85s, including the label clustering
step, and we try fewer than 50 of them). For the same B, encoding for CS takes 6.24s and the SVD in
PLST takes 81.03s, while decoding takes 24.39s at test time for CS and 7.86s for PLST.
Acknowledgments
This work was partially supported by the French ANR as part of the project Class-Y (ANR-10-BLAN-02) and carried out in the framework of the Labex MS2T (ANR-11-IDEX-0004-02).
References
[1] V. D. Blondel, J.-L. Guillaume, R. Lambiotte, and E. Lefebvre. Fast unfolding of communities in large networks. Journal of Statistical Mechanics: Theory and Experiment, 10, 2008.
[2] B. H. Bloom. Space/time trade-offs in hash coding with allowable errors. Commun. ACM, 13(7):422-426, 1970.
[3] L. Carter, R. Floyd, J. Gill, G. Markowsky, and M. Wegman. Exact and approximate membership testers. In Proceedings of the Tenth Annual ACM Symposium on Theory of Computing, STOC '78, pages 59-65, New York, NY, USA, 1978. ACM.
[4] Y.-N. Chen and H.-T. Lin. Feature-aware label space dimension reduction for multi-label classification. In NIPS, pages 1538-1546, 2012.
[5] W. Cheng and E. Hüllermeier. Combining instance-based learning and logistic regression for multilabel classification. Machine Learning, 76(2-3):211-225, 2009.
[6] K. Christensen, A. Roginsky, and M. Jimeno. A new analysis of the false positive rate of a Bloom filter. Inf. Process. Lett., 110(21):944-949, Oct. 2010.
[7] O. Dekel and O. Shamir. Multiclass-multilabel classification with more classes than examples. Volume 9, pages 137-144, 2010.
[8] K. Dembczynski, W. Cheng, and E. Hüllermeier. Bayes optimal multilabel classification via probabilistic classifier chains. In ICML, pages 279-286, 2010.
[9] K. Dembczynski, W. Waegeman, W. Cheng, and E. Hüllermeier. On label dependence and loss minimization in multi-label classification. Machine Learning, 88(1-2):5-45, 2012.
[10] R.-E. Fan, K.-W. Chang, C.-J. Hsieh, X.-R. Wang, and C.-J. Lin. Liblinear: A library for large linear classification. J. Mach. Learn. Res., 9:1871-1874, June 2008.
[11] B. Hariharan, S. V. N. Vishwanathan, and M. Varma. Large scale max-margin multi-label classification with prior knowledge about densely correlated labels. In Proceedings of the International Conference on Machine Learning, 2010.
[12] D. Hsu, S. Kakade, J. Langford, and T. Zhang. Multi-label prediction via compressed sensing. In NIPS, pages 772-780, 2009.
[13] RCV1. RCV1 dataset, http://www.daviddlewis.com/resources/testcollections/rcv1/.
[14] J. Read, B. Pfahringer, G. Holmes, and E. Frank. Classifier chains for multi-label classification. In Proceedings of the European Conference on Machine Learning and Knowledge Discovery in Databases: Part II, ECML PKDD '09, pages 254-269, Berlin, Heidelberg, 2009. Springer-Verlag.
[15] F. Tai and H.-T. Lin. Multilabel classification with principal label space transformation. Neural Computation, 24(9):2508-2542, 2012.
[16] G. Tsoumakas, I. Katakis, and I. Vlahavas. A review of multi-label classification methods. In Proceedings of the 2nd ADBIS Workshop on Data Mining and Knowledge Discovery (ADMKD 2006), pages 99-109, 2006.
[17] Wikipedia. Wikipedia dataset, http://lshtc.iit.demokritos.gr/.
Gaussian Process Conditional Copulas with
Applications to Financial Time Series
James Robert Lloyd
Engineering Department
University of Cambridge
[email protected]
José Miguel Hernández-Lobato
Engineering Department
University of Cambridge
[email protected]
Daniel Hernández-Lobato
Computer Science Department
Universidad Autónoma de Madrid
[email protected]
Abstract
The estimation of dependencies between multiple variables is a central problem
in the analysis of financial time series. A common approach is to express these
dependencies in terms of a copula function. Typically the copula function is assumed to be constant but this may be inaccurate when there are covariates that
could have a large influence on the dependence structure of the data. To account
for this, a Bayesian framework for the estimation of conditional copulas is proposed. In this framework the parameters of a copula are non-linearly related to
some arbitrary conditioning variables. We evaluate the ability of our method to
predict time-varying dependencies on several equities and currencies and observe
consistent performance gains compared to static copula models and other time-varying copula methods.
1
Introduction
Understanding dependencies within multivariate data is a central problem in the analysis of financial
time series, underpinning common tasks such as portfolio construction and calculation of value-at-risk. Classical methods estimate these dependencies in terms of a covariance matrix (possibly time
varying) which is induced from the data [4, 5, 7, 1]. However, a more general approach is to use
copula functions to model dependencies [6]. Copulas have become popular since they separate
the estimation of marginal distributions from the estimation of the dependence structure, which is
completely determined by the copula.
The use of standard copula methods to estimate dependencies is likely to be inaccurate when the actual dependencies are strongly influenced by other covariates. For example, dependencies can vary with time or be affected by observations of other time series. Standard copula methods cannot handle such conditional dependencies. To address this limitation, we propose a probabilistic framework to estimate conditional copulas. Specifically, we assume parametric copulas whose parameters are specified by unknown non-linear functions of arbitrary conditioning variables. These latent functions are approximated using Gaussian processes (GPs) [17].

GPs have previously been used to model conditional copulas in [12], but that work only applies to copulas specified by a single parameter. We extend this work to accommodate copulas with multiple parameters. This is an important improvement since it allows the use of a richer set of copulas, including Student's t and asymmetric copulas. We demonstrate our method by choosing the conditioning variables to be time and evaluating its ability to estimate time-varying dependencies
Figure 1: Left, Gaussian copula density for τ = 0.3. Middle, Student's t copula density for τ = 0.3 and ν = 1. Right, symmetrized Joe Clayton copula density for τ^U = 0.1 and τ^L = 0.6. The latter copula model is asymmetric along the main diagonal of the unit square.
on several currency and equity time series. Our method achieves consistently superior predictive
performance compared to static copula models and other dynamic copula methods. These include
models that allow their parameters to change with time, e.g. regime switching models [11] and
methods proposing GARCH-style updates to copula parameters [20, 11].
2 Copulas and Conditional Copulas
Copulas provide a powerful framework for the construction of multivariate probabilistic models by
separating the modeling of univariate marginal distributions from the modeling of dependencies
between variables [6]. We focus on bivariate copulas since higher dimensional copulas are typically
constructed using bivariate copulas as building blocks [e.g. 2, 12].
Sklar's theorem [18] states that given two one-dimensional random variables, X and Y, with continuous marginal cumulative distribution functions (cdfs) F_X(X) and F_Y(Y), we can express their joint cdf F_{X,Y} as F_{X,Y}(x, y) = C_{X,Y}[F_X(x), F_Y(y)], where C_{X,Y} is the unique copula for X and Y. Since F_X(X) and F_Y(Y) are marginally uniformly distributed on [0, 1], C_{X,Y} is the cdf of a probability distribution on the unit square [0, 1] × [0, 1] with uniform marginals. Figure 1 shows plots of the copula densities for three parametric copula models: Gaussian, Student's t and the symmetrized Joe Clayton (SJC) copulas. Copula models can be learnt in a two-step process [10]. First, the marginals F_X and F_Y are learnt by fitting univariate models. Second, the data are mapped to the unit square by U = F_X(X), V = F_Y(Y) (i.e. a probability integral transform) and C_{X,Y} is then fit to the transformed data.
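The two-step procedure can be sketched as follows. This is an illustrative outline under our own choices, not the estimators used later in the paper: the marginals are fit with empirical cdfs, and the copula is Gaussian, whose correlation MLE is the Pearson correlation of the Gaussian scores Φ⁻¹(U), Φ⁻¹(V).

```python
import numpy as np
from scipy.stats import norm

def empirical_cdf_transform(x):
    """Step 1: map samples to (0, 1) via their empirical marginal cdf."""
    ranks = np.argsort(np.argsort(x))        # ranks 0..n-1
    return (ranks + 1.0) / (len(x) + 1.0)    # avoid exactly 0 or 1

def fit_gaussian_copula(u, v):
    """Step 2: fit a Gaussian copula to (U, V) on the unit square by
    correlating the Gaussian scores of the transformed data."""
    zu, zv = norm.ppf(u), norm.ppf(v)
    return np.corrcoef(zu, zv)[0, 1]

# Simulate dependent data with two different (non-uniform) marginals.
rng = np.random.default_rng(0)
rho_true = 0.6
z = rng.multivariate_normal([0, 0], [[1, rho_true], [rho_true, 1]], size=5000)
x = np.exp(z[:, 0])          # log-normal marginal
y = 3.0 * z[:, 1] - 1.0      # Gaussian marginal with a different scale

u = empirical_cdf_transform(x)
v = empirical_cdf_transform(y)
rho_hat = fit_gaussian_copula(u, v)
```

Because the copula is invariant to monotone transformations of the marginals, `rho_hat` recovers the dependence despite the very different marginal distributions.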
2.1 Conditional Copulas
When one has access to a covariate vector Z, one may wish to estimate a conditional version of a copula model, i.e.

F_{X,Y|Z}(x, y|z) = C_{X,Y|Z}[F_{X|Z}(x|z), F_{Y|Z}(y|z) | z].    (1)

Here, the same two-step estimation process can be used to estimate F_{X,Y|Z}(x, y|z). The estimation of the marginals F_{X|Z} and F_{Y|Z} can be implemented using standard methods for univariate conditional distribution estimation. However, the estimation of C_{X,Y|Z} is constrained to have uniform marginal distributions; this is a problem that has only been considered recently [12]. We propose a general Bayesian non-parametric framework for the estimation of conditional copulas based on GPs and an alternating expectation propagation (EP) algorithm for efficient approximate inference.
3 Gaussian Process Conditional Copulas
Let D_Z = {z_i}_{i=1}^n and D_{U,V} = {(u_i, v_i)}_{i=1}^n, where (u_i, v_i) is a sample drawn from C_{X,Y|z_i}. We assume that C_{X,Y|Z} is a parametric copula model C_par[u, v | θ_1(z), ..., θ_k(z)] specified by k parameters θ_1, ..., θ_k that may be functions of the conditioning variable z. Let θ_i(z) = σ_i[f_i(z)], where f_i is an arbitrary real function and σ_i is a function that maps the real line to a set Θ_i of valid configurations for θ_i. For example, C_par could be a Student's t copula. In this case, k = 2, θ_1 and θ_2 are the correlation and the degrees of freedom in the Student's t copula, Θ_1 = (−1, 1) and Θ_2 = (0, ∞). One could then choose σ_1(x) = 2Φ(x) − 1, where Φ is the standard Gaussian cdf, and σ_2(x) = exp(x) to satisfy the constraint sets Θ_1 and Θ_2 respectively.
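These mapping functions are easy to state in code; the function names below are our own, and the example mirrors the σ_1, σ_2 choices just described for a Student's t copula.

```python
import numpy as np
from scipy.stats import norm

def sigma_corr(f):
    """sigma_1: maps R onto (-1, 1), the valid correlations of a
    Student's t copula, via the standard Gaussian cdf."""
    return 2.0 * norm.cdf(f) - 1.0

def sigma_nu(f):
    """sigma_2: maps R onto (0, inf); exp keeps the degrees of
    freedom strictly positive."""
    return np.exp(f)

# Any real-valued latent function value f(z) yields valid parameters.
f = np.linspace(-5, 5, 101)
rho = sigma_corr(f)   # strictly increasing, bounded in (-1, 1)
nu = sigma_nu(f)      # strictly positive
```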
Once we have specified the parametric form of C_par and the mapping functions σ_1, ..., σ_k, we need to learn the latent functions f_1, ..., f_k. We perform a Bayesian non-parametric analysis by placing GP priors on these functions and computing their posterior distribution given the observed data.
Let f_i = (f_i(z_1), ..., f_i(z_n))^T. The prior distribution for f_i given D_Z is p(f_i | D_Z) = N(f_i | m_i, K_i), where m_i = (m_i(z_1), ..., m_i(z_n))^T for some mean function m_i(z) and K_i is an n × n covariance matrix generated by the squared exponential covariance function, i.e.

[K_i]_{jk} = Cov[f_i(z_j), f_i(z_k)] = β_i exp(−(z_j − z_k)^T diag(λ_i)(z_j − z_k)) + γ_i,    (2)

where λ_i is a vector of inverse length-scales and β_i, γ_i are amplitude and noise parameters. The posterior distribution for f_1, ..., f_k given D_{U,V} and D_Z is

p(f_1, ..., f_k | D_{U,V}, D_Z) = [∏_{i=1}^n c_par(u_i, v_i | θ_1[f_1(z_i)], ..., θ_k[f_k(z_i)])] [∏_{i=1}^k N(f_i | m_i, K_i)] / p(D_{U,V} | D_Z),    (3)
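As an aside, the squared-exponential covariance of Eq. (2) can be sketched as follows; adding the noise parameter γ only on the diagonal is our reading of that term, and the input values are illustrative.

```python
import numpy as np

def se_kernel(Z, lam, beta, gamma):
    """Squared-exponential covariance of Eq. (2):
    beta * exp(-(z_j - z_k)^T diag(lam) (z_j - z_k)), with noise gamma
    added on the diagonal (our reading of the noise term).
    lam: inverse length-scales; beta: amplitude; gamma: noise."""
    d = Z[:, None, :] - Z[None, :, :]            # (n, n, dim) pairwise differences
    sq = np.einsum('ijd,d,ijd->ij', d, lam, d)   # weighted squared distances
    return beta * np.exp(-sq) + gamma * np.eye(len(Z))

Z = np.linspace(0, 1, 5)[:, None]   # n = 5 one-dimensional inputs
K = se_kernel(Z, lam=np.array([4.0]), beta=1.0, gamma=1e-3)
```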
where c_par is the density of the parametric copula model and p(D_{U,V} | D_Z) is a normalization constant often called the model evidence. Given a particular value of Z denoted by z*, we can make predictions about the conditional distribution of U and V using the standard GP prediction formula

p(u*, v* | z*) = ∫ c_par(u*, v* | θ_1[f_1*], ..., θ_k[f_k*]) p(f* | f_1, ..., f_k, z*, D_Z) p(f_1, ..., f_k | D_{U,V}, D_Z) df_1 ⋯ df_k df*,    (4)

where f* = (f_1*, ..., f_k*)^T, p(f* | f_1, ..., f_k, z*, D_Z) = ∏_{i=1}^k p(f_i* | f_i, z*, D_Z), f_i* = f_i(z*), p(f_i* | f_i, z*, D_Z) = N(f_i* | m_i(z*) + k_i^T K_i^{−1}(f_i − m_i), k_i* − k_i^T K_i^{−1} k_i), k_i* = Cov[f_i(z*), f_i(z*)] and k_i = (Cov[f_i(z*), f_i(z_1)], ..., Cov[f_i(z*), f_i(z_n)])^T. Unfortunately, (3) and (4) cannot be computed analytically, so we approximate them using expectation propagation (EP) [13].
3.1 An Alternating EP Algorithm for Approximate Bayesian Inference
The joint distribution for f_1, ..., f_k and D_{U,V} given D_Z can be written as a product of n + k factors:

p(f_1, ..., f_k, D_{U,V} | D_Z) = [∏_{i=1}^n g_i(f_{1i}, ..., f_{ki})] [∏_{i=1}^k h_i(f_i)],    (5)

where f_{ji} = f_j(z_i), h_i(f_i) = N(f_i | m_i, K_i) and g_i(f_{1i}, ..., f_{ki}) = c_par[u_i, v_i | θ_1[f_{1i}], ..., θ_k[f_{ki}]]. EP approximates each factor g_i with an approximate Gaussian factor g̃_i that may not integrate to one, i.e. g̃_i(f_{1i}, ..., f_{ki}) = s_i ∏_{j=1}^k exp(−(f_{ji} − m̃_{ji})² / [2ṽ_{ji}]), where s_i > 0 and m̃_{ji}, ṽ_{ji} are parameters to be calculated by EP. The other factors h_i already have a Gaussian form so they do not need to be approximated. Since all the g̃_i and h_i are Gaussian, their product is, up to a normalization constant, a multivariate Gaussian distribution q(f_1, ..., f_k) which approximates the exact posterior (3) and factorizes across f_1, ..., f_k. The predictive distribution (4) is approximated by first integrating p(f* | f_1, ..., f_k, z*, D_Z) with respect to q(f_1, ..., f_k). This results in a factorized Gaussian distribution q*(f*) which approximates p(f* | D_{U,V}, D_Z). Finally, (4) is approximated by Monte Carlo, by sampling from q* and then averaging c_par(u*, v* | θ_1[f_1*], ..., θ_k[f_k*]) over the samples.
EP iteratively updates each g̃_i until convergence by first computing the cavity distribution q^{\i} ∝ q/g̃_i and then minimizing the Kullback-Leibler divergence [3] between g_i q^{\i} and g̃_i q^{\i}. This involves updating g̃_i so that the first and second marginal moments of g_i q^{\i} and g̃_i q^{\i} match. However, it is not possible to compute the moments of g_i q^{\i} analytically due to the complicated form of g_i. A solution is to use numerical methods to compute these k-dimensional integrals. However, this typically has an exponentially large computational cost in k which is prohibitive for k > 1. Instead we perform an additional approximation when computing the marginal moments of f_{ji} with respect to g_i q^{\i}. Without loss of generality, assume that we want to compute the expectation of f_{1i} with respect to g_i q^{\i}. We make the following approximation:
∫ f_{1i} g_i(f_{1i}, ..., f_{ki}) q^{\i}(f_{1i}, ..., f_{ki}) df_{1i} ⋯ df_{ki} ≈ C ∫ f_{1i} g_i(f_{1i}, f̄_{2i}, ..., f̄_{ki}) q^{\i}(f_{1i}, f̄_{2i}, ..., f̄_{ki}) df_{1i},    (6)

where f̄_{1i}, ..., f̄_{ki} are the means of f_{1i}, ..., f_{ki} with respect to q^{\i}, and C is a constant that approximates the width of the integrand around its maximum in all dimensions except f_{1i}. In practice all moments are normalized by the 0-th moment, so C can be ignored. The right hand side of (6) is a one-dimensional integral that can be easily computed using numerical techniques. The approximation above is similar to approximating an integral by the product of the maximum value of the integrand and an estimate of its width. However, instead of maximizing g_i(f_{1i}, ..., f_{ki}) q^{\i}(f_{1i}, ..., f_{ki}) with respect to f_{2i}, ..., f_{ki}, we are maximizing q^{\i}. This is a much easier task because q^{\i} is Gaussian and its maximizer is its own mean vector. In practice, g_i(f_{1i}, ..., f_{ki}) is very flat when compared to q^{\i}, and the maximizer of q^{\i} approximates well the maximizer of g_i(f_{1i}, ..., f_{ki}) q^{\i}(f_{1i}, ..., f_{ki}).
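A minimal sketch of such a one-dimensional moment computation, using simple grid quadrature on a Gaussian cavity; this is a stand-in for whatever numerical scheme an implementation would use, not the authors' exact quadrature.

```python
import numpy as np

def tilted_moments(log_g, m, v, n_grid=4001, width=10.0):
    """Zeroth moment, mean and variance of g(f) * N(f | m, v), computed
    by quadrature on a dense grid of +/- width std. devs. around the
    cavity mean m. log_g evaluates log g(f) elementwise."""
    f = m + np.sqrt(v) * np.linspace(-width, width, n_grid)
    dx = f[1] - f[0]
    log_q = -0.5 * (f - m) ** 2 / v - 0.5 * np.log(2 * np.pi * v)
    w = np.exp(log_g(f) + log_q)
    z0 = w.sum() * dx                          # normalizing constant
    mean = (f * w).sum() * dx / z0
    var = ((f - mean) ** 2 * w).sum() * dx / z0
    return z0, mean, var

# Sanity check: a Gaussian "likelihood" factor exp(-(f-1)^2/2) times a
# N(0, 1) cavity gives a Gaussian tilted distribution with mean 0.5 and
# variance 0.5, and normalizer sqrt(1/2) * exp(-1/4).
z0, mean, var = tilted_moments(lambda f: -0.5 * (f - 1.0) ** 2, m=0.0, v=1.0)
```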
Since q factorizes across f_1, ..., f_k (as does each q^{\i}), our implementation of EP decouples into k EP sub-routines among which we alternate; the j-th sub-routine approximates the posterior distribution of f_j using as input the means of q^{\i} generated by the other EP sub-routines. Each sub-routine finds a Gaussian approximation to a set of n one-dimensional factors, one factor per data point. In the j-th EP sub-routine, the i-th factor is given by g_i(f_{1i}, ..., f_{ki}), where each element of {f_{1i}, ..., f_{ki}} \ {f_{ji}} is kept fixed to the current mean of q^{\i}, as estimated by the other EP sub-routines. We iteratively alternate between sub-routines, running each one until convergence before re-running the next one. Convergence is achieved very quickly; we only run each EP sub-routine four times.
The EP sub-routines are implemented using the parallel EP update scheme described in [21]. To speed up GP-related computations, we use the generalized FITC approximation [19, 14]: each n × n covariance matrix K_i is approximated by K'_i = Q_i + diag(K_i − Q_i), where Q_i = K_i^{nn_0} [K_i^{n_0 n_0}]^{−1} [K_i^{nn_0}]^T, K_i^{n_0 n_0} is the n_0 × n_0 covariance matrix generated by evaluating (2) at n_0 ≪ n pseudo-inputs, and K_i^{nn_0} is the n × n_0 matrix with the covariances between training points and pseudo-inputs. The cost of EP is O(k n n_0²). Each time we call the j-th EP sub-routine, we optimize the corresponding kernel hyper-parameters λ_j, β_j and γ_j and the pseudo-inputs by maximizing the EP approximation of the model evidence [17].
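The FITC construction itself is a one-liner; below is a sketch with an illustrative RBF kernel (the kernel, pseudo-input locations and jitter values are our choices). Note that K' matches the full kernel matrix exactly on the diagonal while being low-rank plus diagonal off it.

```python
import numpy as np

def fitc_approximation(K_nn, K_nm, K_mm):
    """Generalized FITC: K' = Q + diag(K - Q), with Q = K_nm K_mm^{-1} K_nm^T.
    K_nn: full n x n kernel; K_nm: n x m cross-kernel to the m pseudo-inputs;
    K_mm: m x m kernel among the pseudo-inputs (m << n gives the speed-up)."""
    Q = K_nm @ np.linalg.solve(K_mm, K_nm.T)
    return Q + np.diag(np.diag(K_nn - Q))

def rbf(A, B, ell=0.15):
    # Illustrative 1-D squared-exponential kernel.
    d2 = (A[:, None] - B[None, :]) ** 2
    return np.exp(-0.5 * d2 / ell ** 2)

z = np.linspace(0, 1, 50)           # n = 50 training inputs
u = np.linspace(0, 1, 8)            # m = 8 pseudo-inputs
K = rbf(z, z) + 1e-8 * np.eye(50)
K_fitc = fitc_approximation(K, rbf(z, u), rbf(u, u) + 1e-6 * np.eye(8))
```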
4 Related Work
The model proposed here is an extension of the conditional copula model of [12]. In the case of bivariate data and a copula based on one parameter the models are identical. We have extended the approximate inference for this model to accommodate copulas with multiple parameters; this was previously computationally infeasible because it required the numerical calculation of multi-dimensional integrals within an inner loop of EP inference. We have also demonstrated that one can use this model to produce excellent predictive results on financial time series by conditioning the copula on time.
4.1 Dynamic Copula Models
In [11] a dynamic copula model is proposed based on a two-state hidden Markov model (HMM) (S_t ∈ {0, 1}) that assumes that the data generating process changes between two regimes of low/high correlation. At any time t the copula density is Student's t with different parameters for the two values of the hidden state S_t. Maximum likelihood estimation of the copula parameters and transition probabilities is performed using an EM algorithm [e.g. 3].

A time-varying correlation (TVC) model based on the Student's t copula is described in [20, 11]. The correlation parameter¹ of a Student's t copula is assumed to satisfy ρ_t = (1 − α − β)ρ̄ + βρ_{t−1} + αξ_{t−1}, where ξ_{t−1} is the empirical correlation of the previous 10 observations and ρ̄, α and β satisfy −1 ≤ ρ̄ ≤ 1, 0 ≤ α, β ≤ 1 and α + β ≤ 1. The number of degrees of freedom ν is assumed to be constant. The previous formula is the GARCH equation for correlation instead of variance. Estimation of ρ̄, α, β and ν is easily performed by maximum likelihood.
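The TVC recursion is straightforward to simulate; the sketch below uses illustrative (not estimated) parameter values. Because the three weights sum to one, each update is a convex combination, so the correlation stays in [−1, 1] whenever the driving empirical correlations do.

```python
import numpy as np

def tvc_correlations(xi, rho_bar=0.3, alpha=0.05, beta=0.9):
    """GARCH-style correlation recursion of the TVC model:
    rho_t = (1 - alpha - beta) * rho_bar + beta * rho_{t-1} + alpha * xi_{t-1},
    where xi_{t-1} is the empirical correlation of the last 10 observations.
    Constraints: -1 <= rho_bar <= 1, 0 <= alpha, beta <= 1, alpha + beta <= 1."""
    assert -1 <= rho_bar <= 1 and 0 <= alpha <= 1 and 0 <= beta <= 1
    assert alpha + beta <= 1
    rho = np.empty(len(xi) + 1)
    rho[0] = rho_bar  # start at the unconditional level
    for t in range(len(xi)):
        rho[t + 1] = (1 - alpha - beta) * rho_bar + beta * rho[t] + alpha * xi[t]
    return rho

rng = np.random.default_rng(1)
xi = np.clip(rng.normal(0.3, 0.2, size=500), -1, 1)  # stand-in empirical correlations
rho = tvc_correlations(xi)
```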
In [15] a dynamic copula based on the SJC copula (DSJCC) is introduced. In this method, the parameters τ^U and τ^L of an SJC copula are assumed to depend on time according to

τ^U(t) = 0.01 + 0.98 σ(ω_U + α_U ξ_{t−1} + β_U τ^U(t − 1)),    (7)
τ^L(t) = 0.01 + 0.98 σ(ω_L + α_L ξ_{t−1} + β_L τ^L(t − 1)),    (8)

where σ[·] is the logistic function, ξ_{t−1} = (1/10) ∑_{j=1}^{10} |u_{t−j} − v_{t−j}|, (u_t, v_t) is a copula sample at time t, and the constants are used to avoid numerical instabilities. These formulae are the GARCH equation for correlations, with an additional logistic function to constrain parameter values. The estimation of ω_U, α_U, β_U, ω_L, α_L and β_L is performed by maximum likelihood.
We go beyond this prior work by allowing copula parameters to depend on arbitrary conditioning variables rather than time alone. Also, the models above either assume Markov independence or GARCH-like updates to copula parameters. These assumptions have been empirically proven to be effective for the estimation of univariate variances, but the consistent performance gains of our proposed method suggest these assumptions are less applicable for the estimation of dependencies.
4.2 Other Dynamic Covariance Models
A direct extension of the GARCH equations to multiple time series, VEC, was proposed by [5]. Let x(t) be a multivariate time series assumed to satisfy x(t) ∼ N(0, Σ(t)). VEC(p, q) models the dynamics of Σ(t) by an equation of the form

vech(Σ(t)) = c + ∑_{k=1}^p A_k vech(x(t − k) x(t − k)^T) + ∑_{k=1}^q B_k vech(Σ(t − k)),    (9)

where vech is the operation that stacks the lower triangular part of a matrix into a column vector. The VEC model has a very large number of parameters and hence a more commonly used model is the BEKK(p, q) model [7], which assumes the following dynamics:

Σ(t) = C^T C + ∑_{k=1}^p A_k^T x(t − k) x(t − k)^T A_k + ∑_{k=1}^q B_k^T Σ(t − k) B_k.    (10)
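A BEKK(1, 1) recursion can be sketched as follows; the parameter matrices are illustrative. Each term on the right of Eq. (10) is positive semi-definite, so Σ(t) remains a valid covariance matrix at every step.

```python
import numpy as np

def bekk_11(x, C, A, B, Sigma0):
    """BEKK(1,1) covariance recursion of Eq. (10):
    Sigma(t) = C^T C + A^T x(t-1) x(t-1)^T A + B^T Sigma(t-1) B.
    Every term is PSD, so Sigma(t) stays a valid covariance."""
    T, d = x.shape
    Sigmas = np.empty((T, d, d))
    Sigma = Sigma0
    for t in range(T):
        Sigma = C.T @ C + A.T @ np.outer(x[t], x[t]) @ A + B.T @ Sigma @ B
        Sigmas[t] = Sigma
    return Sigmas

rng = np.random.default_rng(2)
d = 2
C = 0.3 * np.eye(d)   # illustrative parameter matrices
A = 0.2 * np.eye(d)
B = 0.9 * np.eye(d)
x = rng.normal(size=(200, d))
Sigmas = bekk_11(x, C, A, B, Sigma0=np.eye(d))
```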
This model also has many parameters, and many restricted versions of these models have been proposed to avoid over-fitting (see e.g. section 2 of [1]).

An alternative solution to over-fitting due to over-parameterization is the Bayesian approach of [23], where Bayesian inference is performed in a dynamic BEKK(1, 1) model. Other Bayesian approaches include the non-parametric generalized Wishart process [22, 8]. In these works Σ(t) is modeled by a generalized Wishart process, i.e.

Σ(t) = ∑_{i=1}^ν L u_i(t) u_i(t)^T L^T,    (11)

where the u_{id}(·) are distributed as independent GPs.
5 Experiments
We evaluate the proposed Gaussian process conditional copula models (GPCC) on a one-step-ahead prediction task with synthetic data and financial time series. We use time as the conditioning variable and consider three parametric copula families: Gaussian (GPCC-G), Student's t (GPCC-T) and symmetrized Joe Clayton (GPCC-SJC). The parameters of these copulas are presented in Table 1 along with the transformations used to model them. Figure 1 shows plots of the densities of these three parametric copula models. The code and data are publicly available at http://jmhl.org.

¹ The parameterization used in this paper is related by ρ = sin(0.5πτ).
Copula      | Parameters               | Transformation         | Synthetic parameter function
Gaussian    | correlation, τ           | 0.99(2Φ[f(t)] − 1)     | τ(t) = 0.3 + 0.2 cos(tπ/125)
Student's t | correlation, τ           | 0.99(2Φ[f(t)] − 1)     | τ(t) = 0.3 + 0.2 cos(tπ/125)
Student's t | degrees of freedom, ν    | 1 + 10^6 σ[g(t)]       | ν(t) = 1 + 2(1 + cos(tπ/250))
SJC         | upper dependence, τ^U    | 0.01 + 0.98 σ[g(t)]    | τ^U(t) = 0.1 + 0.3(1 + cos(tπ/125))
SJC         | lower dependence, τ^L    | 0.01 + 0.98 σ[g(t)]    | τ^L(t) = 0.1 + 0.3(1 + cos(tπ/125 + π/2))

Table 1: Copula parameters, modeling formulae and parameter functions used to generate synthetic data. Φ is the standard Gaussian cumulative density function; f and g are GPs.
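Generating one of these synthetic series is simple; the sketch below samples from the time-varying Gaussian copula, converting the parameter τ(t) of Table 1 to the copula correlation via ρ = sin(0.5πτ), our reading of the parameterization footnote.

```python
import numpy as np
from scipy.stats import norm

def sample_tv_gaussian_copula(T=5001, seed=0):
    """Sample (u_t, v_t) from a Gaussian copula whose parameter follows
    the synthetic function of Table 1, tau(t) = 0.3 + 0.2 cos(t*pi/125);
    the copula correlation is rho = sin(0.5*pi*tau)."""
    rng = np.random.default_rng(seed)
    t = np.arange(T)
    tau = 0.3 + 0.2 * np.cos(t * np.pi / 125)
    rho = np.sin(0.5 * np.pi * tau)
    # Draw correlated Gaussians, then map to the unit square.
    z1 = rng.standard_normal(T)
    z2 = rho * z1 + np.sqrt(1 - rho ** 2) * rng.standard_normal(T)
    return norm.cdf(z1), norm.cdf(z2), tau

u, v, tau = sample_tv_gaussian_copula()
```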
The three variants of GPCC were compared against three dynamic copula methods and three constant copula models. The three dynamic methods are the HMM-based model, TVC and DSJCC introduced in Section 4. The three constant copula models use Gaussian, Student's t and SJC copulas with parameter values that do not change with time (CONST-G, CONST-T and CONST-SJC). We perform a one-step-ahead rolling-window prediction task on bivariate time series {(u_t, v_t)}. Each model is trained on the first n_W data points and the predictive log-likelihood of the (n_W + 1)-th data point is recorded, where n_W = 1000. This is then repeated, shifting the training and test windows forward by one data point. The methods are then compared by average predictive log-likelihood; an appropriate performance measure for copula estimation since copulas are probability distributions.
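A minimal sketch of this evaluation protocol, where the `fit` and `logpdf` callables stand in for any of the copula models being compared:

```python
import numpy as np

def rolling_predictive_loglik(u, v, fit, logpdf, n_w=1000):
    """One-step-ahead rolling-window evaluation: train on n_w points,
    score the next one, then shift both windows forward by one point.
    `fit` returns copula parameters from the training window; `logpdf`
    scores a single held-out pair under those parameters."""
    scores = []
    for t in range(n_w, len(u)):
        params = fit(u[t - n_w:t], v[t - n_w:t])
        scores.append(logpdf(u[t], v[t], params))
    return np.mean(scores)

# Toy check with the independence copula, whose density is 1 everywhere,
# so every predictive log-likelihood is exactly 0.
u = np.random.default_rng(0).uniform(size=1010)
v = np.random.default_rng(1).uniform(size=1010)
avg = rolling_predictive_loglik(u, v,
                                fit=lambda uu, vv: None,
                                logpdf=lambda a, b, p: 0.0)
```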
5.1 Synthetic Data
We generated three synthetic datasets of length 5001 from copula models (Gaussian, Student's t, SJC) whose parameters vary as periodic functions of time, as specified in Table 1. Table 2 reports the average predictive log-likelihood for each method on each synthetic time series. The results of the best performing method on each synthetic time series are shown in bold. The results of any other method are underlined when the differences with respect to the best performing method are not statistically significant according to a paired t test at α = 0.05.

GPCC-T and GPCC-SJC obtain the best results in the Student's t and SJC time series respectively. However, HMM is the best performing method for the Gaussian time series. This technique successfully captures the two regimes of low/high correlation corresponding to the peaks and troughs of the sinusoid that maps time t to correlation τ. The proposed methods GPCC-[G,T,SJC] are more flexible and hence less efficient than HMM in this particular problem. However, HMM performs significantly worse in the Student's t and SJC time series since the different periods for the different copula parameter functions cannot be captured by a two-state model. Figure 2 shows how GPCC-T successfully tracks τ(t) and ν(t) in the Student's t time series. The plots display the mean (red) and confidence bands (orange, 0.1 and 0.9 quantiles) for the predictive distribution of τ(t) and ν(t) as well as the ground truth values (blue). Finally, Table 2 also shows that the static copula methods CONST-[G,T,SJC] are usually outperformed by all dynamic techniques GPCC-[G,T,SJC], DSJCC, TVC and HMM.
5.2 Foreign Exchange Time Series
We evaluated each method on the daily logarithmic returns of the nine currencies shown in Table 3 (all priced with respect to the U.S. dollar). The date range of the data is 02-01-1990 to 15-01-2013, a total of 6011 observations. We evaluated the methods on eight bivariate time series, pairing each of the other currencies with the Swiss franc (CHF). CHF is known to be a safe haven currency, meaning that investors flock to it during times of uncertainty [16]. Consequently we expect correlations between CHF and other currencies to have large variability across time in response to changes in financial conditions.
We first process our data using an asymmetric AR(1)-GARCH(1,1) process with non-parametric innovations [9] to estimate the univariate marginal cdfs at all time points. We train this GARCH model on n_W = 2016 data points and then predict the cdf of the next data point; subsequent cdfs are predicted by shifting the training window by one data point in a rolling-window methodology. The cdf estimates are used to transform the raw logarithmic returns (x_t, y_t) into a pseudo-sample of the underlying copula (u_t, v_t) as described in Section 2. We note that any method for predicting univariate cdfs could have been used to produce pseudo-samples from the copula. We then perform the rolling-window predictive likelihood experiment on the transformed data. The results are shown in Table 4; overall the best technique is GPCC-T, followed by GPCC-G. The dynamic copula methods GPCC-[G,T,SJC], HMM, and TVC outperform the static methods CONST-[G,T,SJC] in all the analyzed series. The dynamic method DSJCC occasionally performed poorly, doing worse than the static methods in 3 experiments.
Figure 2: Predictions made by GPCC-T for ν(t) and τ(t) on the synthetic time series sampled from a Student's t copula.

Method    | Gaussian | Student | SJC
GPCC-G    | 0.3347   | 0.3879  | 0.2513
GPCC-T    | 0.3397   | 0.4656  | 0.2610
GPCC-SJC  | 0.3355   | 0.4132  | 0.2771
HMM       | 0.3555   | 0.4422  | 0.2547
TVC       | 0.3277   | 0.4273  | 0.2534
DSJCC     | 0.3329   | 0.4096  | 0.2612
CONST-G   | 0.3129   | 0.3201  | 0.2339
CONST-T   | 0.3178   | 0.4218  | 0.2499
CONST-SJC | 0.3002   | 0.3812  | 0.2502

Table 2: Avg. test log-likelihood of each method on each time series.
Currency Name      | Code
Swiss Franc        | CHF
Australian Dollar  | AUD
Canadian Dollar    | CAD
Japanese Yen       | JPY
Norwegian Krone    | NOK
Swedish Krona      | SEK
Euro               | EUR
New Zealand Dollar | NZD
British Pound      | GBP

Table 3: Currencies.

Method    | AUD    | CAD    | JPY    | NOK    | SEK    | EUR    | GBP    | NZD
GPCC-G    | 0.1260 | 0.0562 | 0.1221 | 0.4106 | 0.4132 | 0.8842 | 0.2487 | 0.1045
GPCC-T    | 0.1319 | 0.0589 | 0.1201 | 0.4161 | 0.4192 | 0.8995 | 0.2514 | 0.1079
GPCC-SJC  | 0.1168 | 0.0469 | 0.1064 | 0.3941 | 0.3905 | 0.8287 | 0.2404 | 0.0921
HMM       | 0.1164 | 0.0478 | 0.1009 | 0.4069 | 0.3955 | 0.8700 | 0.2374 | 0.0926
TVC       | 0.1181 | 0.0524 | 0.1038 | 0.3930 | 0.3878 | 0.7855 | 0.2301 | 0.0974
DSJCC     | 0.0798 | 0.0259 | 0.0891 | 0.3994 | 0.3937 | 0.8335 | 0.2320 | 0.0560
CONST-G   | 0.0925 | 0.0398 | 0.0771 | 0.3413 | 0.3426 | 0.6803 | 0.2085 | 0.0745
CONST-T   | 0.1078 | 0.0463 | 0.0898 | 0.3765 | 0.3760 | 0.7732 | 0.2231 | 0.0875
CONST-SJC | 0.1000 | 0.0425 | 0.0852 | 0.3536 | 0.3544 | 0.7113 | 0.2165 | 0.0796

Table 4: Avg. test log-likelihood of each method on the currency data.

Figure 3: Left and middle, predictions made by GPCC-T for ν(t) and τ(t) on the time series EUR-CHF when trained on data from 10-10-2006 to 09-08-2010. There is a significant reduction in ν(t) at the onset of the 2008-2012 global recession. Right, predictions made by GPCC-SJC for τ^U(t) and τ^L(t) when trained on the same time-series data. The predictions for τ^L(t) are much more erratic than those for τ^U(t).
The proposed method GPCC-T can capture changes across time in the parameters of the Student's t copula. The left and middle plots in Figure 3 show predictions for ν(t) and τ(t) generated by GPCC-T. In the left plot, we observe a reduction in ν(t) at the onset of the 2008-2012 global recession, indicating that the return series became more prone to outliers. The plot for τ(t) (middle) also shows large changes across time. In particular, we observe large drops in the dependence level between EUR-USD and CHF-USD during the fall of 2008 (at the onset of the global recession) and the summer of 2010 (corresponding to the worsening European sovereign debt crisis).

For comparison, we include predictions for τ^L(t) and τ^U(t) made by GPCC-SJC in the right plot of Figure 3. In this case, the prediction for τ^U(t) is similar to the one made by GPCC-T for τ(t), but the prediction for τ^L(t) is much noisier and more erratic. This suggests that GPCC-SJC is less robust than GPCC-T. All the copula densities in Figure 1 take large values in the proximity of the points (0,0) and (1,1), i.e. positive correlation. However, the Student's t copula is the only one of these three copulas which can take high values in the proximity of the points (0,1) and (1,0), i.e. negative correlation. The plot on the left of Figure 3 shows how ν(t) takes very low values at the end of the time period, increasing the robustness of GPCC-T to negatively correlated outliers.
5.3 Equity Time Series
As a further comparison, we evaluated each method on the logarithmic returns of 8 equity pairs, from the same date range and processed using the same AR(1)-GARCH(1,1) model discussed previously. The equities were chosen to include pairs with both high correlation (e.g. RBS and BARC) and low correlation (e.g. AXP and BA). The results are shown in Table 5; again the best technique is GPCC-T, followed by GPCC-G.
Method    | HD-HON | AXP-BA | CNW-CSX | ED-EIX | HPQ-IBM | BARC-HSBC | RBS-BARC | RBS-HSBC
GPCC-G    | 0.1247 | 0.1133 | 0.1450  | 0.2072 | 0.1536  | 0.2424    | 0.3401   | 0.1860
GPCC-T    | 0.1289 | 0.1187 | 0.1499  | 0.2059 | 0.1591  | 0.2486    | 0.3501   | 0.1882
GPCC-SJC  | 0.1210 | 0.1095 | 0.1399  | 0.1935 | 0.1462  | 0.2342    | 0.3234   | 0.1753
HMM       | 0.1260 | 0.1119 | 0.1458  | 0.2040 | 0.1511  | 0.2486    | 0.3414   | 0.1818
TVC       | 0.1251 | 0.1119 | 0.1459  | 0.2011 | 0.1511  | 0.2449    | 0.3336   | 0.1823
DSJCC     | 0.0935 | 0.0750 | 0.1196  | 0.1721 | 0.1163  | 0.2188    | 0.3051   | 0.1582
CONST-G   | 0.1162 | 0.1027 | 0.1288  | 0.1962 | 0.1325  | 0.2307    | 0.2979   | 0.1663
CONST-T   | 0.1239 | 0.1091 | 0.1408  | 0.2007 | 0.1481  | 0.2426    | 0.3301   | 0.1775
CONST-SJC | 0.1175 | 0.1046 | 0.1307  | 0.1891 | 0.1373  | 0.2268    | 0.2992   | 0.1639

Figure 4: Prediction for ν(t) on RBS-BARC.

Table 5: Average test log-likelihood for each method on each pair of stocks.
Figure 4 shows predictions for ν(t) generated by GPCC-T. We observe low values of ν during 2010, suggesting that a Gaussian copula would be a bad fit to the data. Indeed, GPCC-G performs significantly worse than GPCC-T on this equity pair.
6 Conclusions and Future Work
We have proposed an inference scheme to fit a conditional copula model to multivariate data where the copula is specified by multiple parameters. The copula parameters are modeled as unknown non-linear functions of arbitrary conditioning variables. We evaluated this framework by estimating time-varying copula parameters for bivariate financial time series. Our method consistently outperforms static copula models and other dynamic copula models.
In this initial investigation we have focused on bivariate copulas. Higher dimensional copulas are
typically constructed using bivariate copulas as building blocks [2, 12]. Our framework could be
applied to these constructions and our empirical predictive performance gains will likely transfer to
this setting. Evaluating the effectiveness of this approach compared to other models of multivariate
covariance would be a profitable area of empirical research.
One could also extend the analysis presented here by including additional conditioning variables
as well as time. For example, including a prediction of univariate volatility as a conditioning variable would allow copula parameters to change in response to changing volatility. This would pose
inference challenges as the dimension of the GP increases, but could create richer models.
Acknowledgements

We thank David López-Paz and Andrew Gordon Wilson for interesting discussions. José Miguel Hernández-Lobato acknowledges support from Infosys Labs, Infosys Limited. Daniel Hernández-Lobato acknowledges support from the Spanish Dirección General de Investigación, project ALLS (TIN2010-21575-C02-02).
8
References

[1] L. Bauwens, S. Laurent, and J. V. K. Rombouts. Multivariate GARCH models: a survey. Journal of Applied Econometrics, 21(1):79–109, 2006.
[2] T. Bedford and R. M. Cooke. Probability density decomposition for conditionally dependent random variables modeled by vines. Annals of Mathematics and Artificial Intelligence, 32(1-4):245–268, 2001.
[3] C. M. Bishop. Pattern Recognition and Machine Learning (Information Science and Statistics). Springer, 2007.
[4] T. Bollerslev. Generalized autoregressive conditional heteroskedasticity. Journal of Econometrics, 31(3):307–327, 1986.
[5] T. Bollerslev, R. F. Engle, and J. M. Wooldridge. A capital asset pricing model with time-varying covariances. The Journal of Political Economy, pages 116–131, 1988.
[6] G. Elidan. Copulas and machine learning. Invited survey to appear in the proceedings of the Copulae in Mathematical and Quantitative Finance workshop, 2012.
[7] R. F. Engle and K. F. Kroner. Multivariate simultaneous generalized ARCH. Econometric Theory, 11(1):122–150, 1995.
[8] E. B. Fox and D. B. Dunson. Bayesian nonparametric covariance regression. arXiv:1101.2017, 2011.
[9] J. M. Hernández-Lobato, D. Hernández-Lobato, and A. Suárez. GARCH processes with non-parametric innovations for market risk estimation. In Artificial Neural Networks - ICANN 2007, volume 4669 of Lecture Notes in Computer Science, pages 718–727. Springer Berlin Heidelberg, 2007.
[10] H. Joe. Asymptotic efficiency of the two-stage estimation method for copula-based models. Journal of Multivariate Analysis, 94(2):401–419, 2005.
[11] E. Jondeau and M. Rockinger. The Copula-GARCH model of conditional dependencies: An international stock market application. Journal of International Money and Finance, 25(5):827–853, 2006.
[12] D. Lopez-Paz, J. M. Hernández-Lobato, and Z. Ghahramani. Gaussian process vine copulas for multivariate dependence. In S. Dasgupta and D. McAllester, editors, JMLR W&CP 28(2): Proceedings of The 30th International Conference on Machine Learning, pages 10–18. JMLR, 2013.
[13] T. P. Minka. Expectation propagation for approximate Bayesian inference. Proceedings of the 17th Conference in Uncertainty in Artificial Intelligence, pages 362–369, 2001.
[14] A. Naish-Guzman and S. Holden. The generalized FITC approximation. In J. C. Platt, D. Koller, Y. Singer, and S. Roweis, editors, Advances in Neural Information Processing Systems 20, pages 1057–1064. MIT Press, Cambridge, MA, 2008.
[15] A. J. Patton. Modelling asymmetric exchange rate dependence. International Economic Review, 47(2):527–556, 2006.
[16] A. Ranaldo and P. Söderlind. Safe haven currencies. Review of Finance, 14(3):385–407, 2010.
[17] C. E. Rasmussen and C. K. I. Williams. Gaussian Processes for Machine Learning. The MIT Press, 2006.
[18] A. Sklar. Fonctions de répartition à n dimensions et leurs marges. Publ. Inst. Statist. Univ. Paris, 8(1):229–231, 1959.
[19] E. Snelson and Z. Ghahramani. Sparse Gaussian processes using pseudo-inputs. In Y. Weiss, B. Schölkopf, and J. Platt, editors, Advances in Neural Information Processing Systems 18, pages 1257–1264. MIT Press, Cambridge, MA, 2006.
[20] Y. K. Tse and A. K. C. Tsui. A multivariate generalized autoregressive conditional heteroscedasticity model with time-varying correlations. Journal of Business & Economic Statistics, 20(3):351–362, 2002.
[21] M. A. J. van Gerven, B. Cseke, F. P. de Lange, and T. Heskes. Efficient Bayesian multivariate fMRI analysis using a sparsifying spatio-temporal prior. NeuroImage, 50(1):150–161, 2010.
[22] A. G. Wilson and Z. Ghahramani. Generalised Wishart processes. In F. Cozman and A. Pfeffer, editors, Proceedings of the Twenty-Seventh Conference on Uncertainty in Artificial Intelligence (UAI-11), Barcelona, Spain, 2011. AUAI Press.
[23] Y. Wu, J. M. Hernandez-Lobato, and Z. Ghahramani. Dynamic covariance models for multivariate financial time series. In S. Dasgupta and D. McAllester, editors, Proceedings of the 30th International Conference on Machine Learning (ICML-13), volume 28, pages 558–566. JMLR Workshop and Conference Proceedings, 2013.
pattern:1 regime:3 challenge:1 including:3 debt:1 erratic:2 shifting:2 business:1 predicting:1 scheme:2 fitc:2 acknowledges:2 jun:4 prior:4 understanding:1 acknowledgement:1 review:2 asymptotic:1 loss:1 expect:1 lecture:1 interesting:1 limitation:1 tin2010:1 proven:1 integrate:1 degree:3 consistent:2 editor:5 ibm:1 cooke:1 prone:1 rasmussen:1 infeasible:1 side:1 allow:2 fall:1 patton:1 sparse:1 priced:1 distributed:2 van:1 calculated:1 dimension:3 evaluating:3 cumulative:2 valid:1 transition:1 autoregressive:2 forward:1 commonly:1 made:5 avg:2 approximate:6 nov:4 kullback:1 global:3 uai:1 assumed:5 spatio:1 continuous:1 latent:2 table:12 hpq:1 learn:1 zk:3 robust:1 transfer:1 heidelberg:1 du:9 excellent:1 japanese:1 european:1 diag:2 apr:5 main:1 icann:1 linearly:1 noise:1 repeated:1 direcci:1 euro:1 madrid:1 quantiles:1 sub:9 neuroimage:1 wish:1 exponential:1 jmlr:3 theorem:1 formula:4 british:1 bad:1 xt:1 covariate:1 bishop:1 evidence:2 bivariate:8 workshop:2 joe:5 easier:1 cx:8 lt:1 logarithmic:3 likely:2 univariate:7 applies:1 springer:2 truth:3 cdf:5 ma:2 oct:6 conditional:18 consequently:1 tvc:7 change:7 determined:1 specifically:1 uniformly:1 except:1 averaging:1 lui:1 called:1 total:1 e:1 equity:6 indicating:1 support:2 latter:1 noisier:1 evaluate:2 marge:1 correlated:1 |
4,514 | 5,085 | Bayesian Inference and Learning in Gaussian Process
State-Space Models with Particle MCMC
Roger Frigola1, Fredrik Lindsten2, Thomas B. Schön2,3 and Carl E. Rasmussen1
1. Dept. of Engineering, University of Cambridge, UK, {rf342,cer54}@cam.ac.uk
2. Div. of Automatic Control, Linköping University, Sweden, [email protected]
3. Dept. of Information Technology, Uppsala University, Sweden, [email protected]
Abstract
State-space models are successfully used in many areas of science, engineering
and economics to model time series and dynamical systems. We present a fully
Bayesian approach to inference and learning (i.e. state estimation and system
identification) in nonlinear nonparametric state-space models. We place a Gaussian process prior over the state transition dynamics, resulting in a flexible model
able to capture complex dynamical phenomena. To enable efficient inference, we
marginalize over the transition dynamics function and, instead, infer directly the
joint smoothing distribution using specially tailored Particle Markov Chain Monte
Carlo samplers. Once a sample from the smoothing distribution is computed,
the state transition predictive distribution can be formulated analytically. Our approach preserves the full nonparametric expressivity of the model and can make
use of sparse Gaussian processes to greatly reduce computational complexity.
1
Introduction
State-space models (SSMs) constitute a popular and general class of models in the context of time
series and dynamical systems. Their main feature is the presence of a latent variable, the state
x_t ∈ X ≜ R^{n_x}, which condenses all aspects of the system that can have an impact on its future.
A discrete-time SSM with nonlinear dynamics can be represented as
x_{t+1} = f(x_t, u_t) + v_t,    (1a)
y_t = g(x_t, u_t) + e_t,        (1b)
where ut denotes a known external input, yt denotes the measurements, vt and et denote i.i.d. noises
acting on the dynamics and the measurements, respectively. The function f encodes the dynamics
and g describes the relationship between the observation and the unobserved states.
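As a concrete illustration, the generative process (1) can be simulated directly once f, g and the noise covariances are specified. The sketch below is generic NumPy; the function names and the linear example model are illustrative placeholders, not from the paper.

```python
import numpy as np

def simulate_ssm(f, g, x0, u, Q, R, rng):
    """Simulate the discrete-time SSM (1):
    x_{t+1} = f(x_t, u_t) + v_t,  v_t ~ N(0, Q)
    y_t     = g(x_t, u_t) + e_t,  e_t ~ N(0, R)."""
    T = len(u)
    nx = len(x0)
    ny = len(np.atleast_1d(g(x0, u[0])))
    x = np.zeros((T + 1, nx))
    y = np.zeros((T, ny))
    x[0] = x0
    for t in range(T):
        y[t] = g(x[t], u[t]) + rng.multivariate_normal(np.zeros(ny), R)
        x[t + 1] = f(x[t], u[t]) + rng.multivariate_normal(np.zeros(nx), Q)
    return x, y

# Illustrative 1-D linear model (placeholder dynamics, not from the paper).
rng = np.random.default_rng(0)
f = lambda x, u: 0.9 * x + u
g = lambda x, u: x
u = np.cos(0.5 * np.arange(50))[:, None]
x, y = simulate_ssm(f, g, np.zeros(1), u, 0.1 * np.eye(1), 0.01 * np.eye(1), rng)
```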
We are primarily concerned with the problem of learning general nonlinear SSMs. The aim is to
find a model that can adaptively increase its complexity when more data is available. To this effect,
we employ a Bayesian nonparametric model for the dynamics (1a). This provides a flexible model
that is not constrained by any limiting assumptions caused by postulating a particular functional
form. More specifically, we place a Gaussian process (GP) prior [1] over the unknown function f .
The resulting model is a generalization of the standard parametric SSM. The functional form of
the observation model g is assumed to be known, possibly parameterized by a finite dimensional
parameter. This is often a natural assumption, for instance in engineering applications where g
corresponds to a sensor model ? we typically know what the sensors are measuring, at least up to
some unknown parameters. Furthermore, using too flexible models for both f and g can result in
problems of non-identifiability.
We adopt a fully Bayesian approach whereby we find a posterior distribution over all the latent
entities of interest, namely the state transition function f, the hidden state trajectory x_{0:T} ≜ {x_i}_{i=0}^T
and any hyper-parameter θ of the model. This is in contrast with existing approaches for using GPs
to model SSMs, which tend to model the GP using a finite set of target points, in effect making
the model parametric [2]. Inferring the distribution over the state trajectory p(x0:T | y0:T , u0:T )
is an important problem in itself known as smoothing. We use a tailored particle Markov Chain
Monte Carlo (PMCMC) algorithm [3] to efficiently sample from the smoothing distribution whilst
marginalizing over the state transition function. This contrasts with conventional approaches to
smoothing which require a fixed model of the transition dynamics. Once we have obtained an
approximation of the smoothing distribution, with the dynamics of the model marginalized out,
learning the function f is straightforward since its posterior is available in closed form given the state
trajectory. Our only approximation is that of the sampling algorithm. We report very good mixing
enabled by the use of recently developed PMCMC samplers [4] and the exact marginalization of the
transition dynamics.
There is by now a rich literature on GP-based SSMs. For instance, Deisenroth et al. [5, 6] presented
refined approximation methods for filtering and smoothing for already learned GP dynamics and
measurement functions. In fact, the method proposed in the present paper provides a vital component
needed for these inference methods, namely that of learning the GP model in the first place. Turner
et al. [2] applied the EM algorithm to obtain a maximum likelihood estimate of parametric models
which had the form of GPs where both inputs and outputs were parameters to be optimized. This
type of approach can be traced back to [7] where Ghahramani and Roweis applied EM to learn
models based on radial basis functions. Wang et al. [8] learn a SSM with GPs by finding a MAP
estimate of the latent variables and hyper-parameters. They apply the learning in cases where the
dimension of the observation vector is much higher than that of the latent state in what becomes a
form of dynamic dimensionality reduction. This procedure would have the risk of overfitting in the
common situation where the state-space is high-dimensional and there is significant uncertainty in
the smoothing distribution.
2
Gaussian Process State-Space Model
We describe the generative probabilistic model of the Gaussian process SSM (GP-SSM) represented
in Figure 1b by
f(x_t) ∼ GP(m_{θx}(x_t), k_{θx}(x_t, x_t')),   (2a)
x_{t+1} | f_t ∼ N(x_{t+1} | f_t, Q),           (2b)
y_t | x_t ∼ p(y_t | x_t, θ_y),                 (2c)

and x_0 ∼ p(x_0), where we avoid notational clutter by omitting the conditioning on the known inputs u_t. In addition, we put a prior p(θ) over the various hyper-parameters θ = {θ_x, θ_y, Q}. Also, note
that the measurement model (2c) and the prior on x0 can take any form since we do not rely on their
properties for efficient inference.
The GP is fully described by its mean function and its covariance function. An interesting property
of the GP-SSM is that any a priori insight into the dynamics of the system can be readily encoded
in the mean function. This is useful, since it is often possible to capture the main properties of
the dynamics, e.g. by using a simple parametric model or a model based on first principles. Such
(a) Standard GP regression
(b) GP-SSM
Figure 1: Graphical models for standard GP regression and the GP-SSM model. The thick horizontal
bars represent sets of fully connected nodes.
simple models may be insufficient on their own, but useful together with the GP-SSM, as the GP
is flexible enough to model complex departures from the mean function. If no specific prior model
is available, the linear mean function m(xt ) = xt is a good generic choice. Interestingly, the prior
information encoded in this model will normally be more vague than the prior information encoded
in parametric models. The measurement model (2c) implicitly contains the observation function g
and the distribution of the i.i.d. measurement noise et .
3
Inference over States and Hyper-parameters
Direct learning of the function f in (2a) from input/output data {u_{0:T−1}, y_{0:T}} is challenging since
the states x0:T are not observed. Most (if not all) previous approaches attack this problem by reverting to a parametric representation of f which is learned alongside the states. We address this
problem in a fundamentally different way by marginalizing out f , allowing us to respect the nonparametric nature of the model. A challenge with this approach is that marginalization of f will
introduce dependencies across time for the state variables that lead to the loss of the Markovian
structure of the state-process. However, recently developed inference methods, combining sequential Monte Carlo (SMC) and Markov chain Monte Carlo (MCMC) allow us to tackle this problem.
We discuss marginalization of f in Section 3.1 and present the inference algorithms in Sections 3.2
and 3.3.
3.1
Marginalizing out the State Transition Function
Targeting the joint posterior distribution of the hyper-parameters, the latent states and the latent function f is problematic due to the strong dependencies between x_{0:T} and f. We therefore marginalize the dynamical function from the model, and instead target the distribution p(θ, x_{0:T} | y_{1:T}) (recall that conditioning on u_{0:T−1} is implicit). In the MCMC literature, this is referred to as collapsing [9]. Hence, we first need to find an expression for the marginal prior p(θ, x_{0:T}) = p(x_{0:T} | θ) p(θ). Focusing on p(x_{0:T} | θ) we note that, although this distribution is not Gaussian, it can be represented as a product of Gaussians. Omitting the dependence on θ in the notation, we obtain

p(x_{1:T} | θ, x_0) = ∏_{t=1}^T p(x_t | θ, x_{0:t−1}) = ∏_{t=1}^T N(x_t | μ_t(x_{0:t−1}), Σ_t(x_{0:t−1})),   (3a)

with

μ_t(x_{0:t−1}) = m_{t−1} + K_{t−1,0:t−2} K̃_{0:t−2}^{−1} (x_{1:t−1} − m_{0:t−2}),   (3b)
Σ_t(x_{0:t−1}) = K̃_{t−1} − K_{t−1,0:t−2} K̃_{0:t−2}^{−1} K_{t−1,0:t−2}^⊤,   (3c)

for t ≥ 2 and μ_1(x_0) = m_0, Σ_1(x_0) = K̃_0. Equation (3) follows from the fact that, once conditioned on x_{0:t−1}, a one-step prediction for the state variable is a standard GP prediction. Here, we have defined the mean vector m_{0:t−1} ≜ [m(x_0)^⊤ … m(x_{t−1})^⊤]^⊤ and the (n_x t) × (n_x t) positive definite matrix K_{0:t−1} with block entries [K_{0:t−1}]_{i,j} = k(x_{i−1}, x_{j−1}). We use two sets of indices, as in K_{t−1,0:t−2}, to refer to the off-diagonal blocks of K_{0:t−1}. We also define K̃_{0:t−1} = K_{0:t−1} + I_t ⊗ Q. We can also express (3a) more succinctly as

p(x_{1:t} | θ, x_0) = |(2π)^{n_x t} K̃_{0:t−1}|^{−1/2} exp(−(1/2) (x_{1:t} − m_{0:t−1})^⊤ K̃_{0:t−1}^{−1} (x_{1:t} − m_{0:t−1})).   (4)

This expression looks very much like a multivariate Gaussian density function. However, we emphasize that this is not the case since both m_{0:t−1} and K̃_{0:t−1} depend (nonlinearly) on the argument x_{1:t}. In fact, (4) will typically be very far from Gaussian.
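To make Eq. (3) concrete, the following sketch samples a 1-D state trajectory from the marginal prior by conditioning each one-step GP prediction on the transitions generated so far. It is a direct transcription of (3b)-(3c) without the factorization reuse discussed later, so it costs O(T^4); the kernel and mean function are illustrative assumptions.

```python
import numpy as np

def sample_prior_trajectory(x0, T, k, m, Q, rng):
    """Sample x_{1:T} from the non-Markovian marginal prior (3), 1-D state.

    Each step is a standard GP prediction conditioned on the transitions
    (x_0 -> x_1, ..., x_{t-1} -> x_t) generated so far."""
    xs = [x0]
    for t in range(T):
        x_star = xs[-1]
        if t == 0:
            mu, var = m(x0), k(x0, x0) + Q                       # prior one-step
        else:
            Xp = np.array(xs[:-1])                               # inputs x_0..x_{t-1}
            y = np.array(xs[1:])                                 # targets x_1..x_t
            Kt = np.array([[k(a, b) for b in Xp] for a in Xp]) + Q * np.eye(t)
            ks = np.array([k(x_star, b) for b in Xp])
            alpha = np.linalg.solve(Kt, y - np.array([m(a) for a in Xp]))
            mu = m(x_star) + ks @ alpha                          # cf. (3b)
            var = k(x_star, x_star) + Q - ks @ np.linalg.solve(Kt, ks)  # cf. (3c)
        xs.append(mu + np.sqrt(max(var, 1e-12)) * rng.standard_normal())
    return np.array(xs)

# Illustrative squared-exponential kernel and linear mean function (assumptions).
k = lambda a, b: np.exp(-0.5 * (a - b) ** 2)
m = lambda a: a
traj = sample_prior_trajectory(0.0, 30, k, m, Q=0.1, rng=np.random.default_rng(1))
```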
3.2
Sequential Monte Carlo
With the prior (4) in place, we now turn to posterior inference and we start by considering the joint
smoothing distribution p(x_{0:T} | θ, y_{0:T}). The sequential nature of the proposed model suggests the use of SMC. Though most well known for filtering in Markovian SSMs (see [10, 11] for an introduction), SMC is applicable also for non-Markovian latent variable models. We seek to approximate the sequence of distributions p(x_{0:t} | θ, y_{0:t}), for t = 0, …, T. Let {x_{0:t−1}^i, w_{t−1}^i}_{i=1}^N be a collection of weighted particles approximating p(x_{0:t−1} | θ, y_{0:t−1}) by the empirical distribution p̂(x_{0:t−1} | θ, y_{0:t−1}) ≜ Σ_{i=1}^N w_{t−1}^i δ_{x_{0:t−1}^i}(x_{0:t−1}). Here, δ_z(x) is a point-mass located at z. To propagate this sample to time t, we introduce the auxiliary variables {a_t^i}_{i=1}^N, referred to as ancestor indices. The variable a_t^i is the index of the ancestor particle at time t − 1 of particle x_t^i. Hence, x_t^i is generated by first sampling a_t^i with P(a_t^i = j) = w_{t−1}^j. Then, x_t^i is generated as

x_t^i ∼ p(x_t | θ, x_{0:t−1}^{a_t^i}, y_{0:t}),   (5)

for i = 1, …, N. The particle trajectories are then augmented according to x_{0:t}^i = {x_{0:t−1}^{a_t^i}, x_t^i}. Sampling from the one-step predictive density is a simple (and sensible) choice, but we may also consider other proposal distributions. In the above formulation the resampling step is implicit and corresponds to sampling the ancestor indices (cf. the auxiliary particle filter, [12]). Finally, the particles are weighted according to the measurement model, w_t^i ∝ p(y_t | θ, x_t^i) for i = 1, …, N, where the weights are normalized to sum to 1.
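In code, one propagation step of this auxiliary-variable formulation looks as follows. This is a generic sketch with user-supplied propagation and likelihood callables; for the GP-SSM the propagation would condition on the whole ancestor trajectory.

```python
import numpy as np

def smc_step(trajs, w_prev, propagate, loglik, rng):
    """One SMC step of Section 3.2: sample ancestor indices a_t^i from the
    normalized weights, extend each ancestor trajectory, and reweight by
    the measurement model (weights handled in log-space)."""
    N = len(trajs)
    anc = rng.choice(N, size=N, p=w_prev)                  # P(a_t^i = j) = w_{t-1}^j
    new_trajs = [trajs[a] + [propagate(trajs[a], rng)] for a in anc]
    logw = np.array([loglik(tr[-1]) for tr in new_trajs])
    w = np.exp(logw - logw.max())
    return new_trajs, w / w.sum()

# Toy random-walk model with a Gaussian likelihood (illustrative only).
rng = np.random.default_rng(2)
trajs = [[0.0] for _ in range(100)]
w = np.full(100, 1.0 / 100)
y_obs = 1.0
for _ in range(5):
    trajs, w = smc_step(trajs, w,
                        propagate=lambda tr, r: tr[-1] + r.standard_normal(),
                        loglik=lambda xv: -0.5 * (y_obs - xv) ** 2, rng=rng)
```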
3.3
Particle Markov Chain Monte Carlo
There are two shortcomings of SMC: (i) it does not handle inference over hyper-parameters; (ii)
despite the fact that the sampler targets the joint smoothing distribution, it does in general not provide an accurate approximation of the full joint distribution due to path degeneracy. That is, the
successive resampling steps cause the particle diversity to be very low for time points t far from the
final time instant T .
To address these issues, we propose to use a particle Markov chain Monte Carlo (PMCMC, [3, 13])
sampler. PMCMC relies on SMC to generate samples of the highly correlated state trajectory within
an MCMC sampler. We employ a specific PMCMC sampler referred to as particle Gibbs with
ancestor sampling (PGAS, [4]), given in Algorithm 1. PGAS uses Gibbs-like steps for the state
trajectory x0:T and the hyper-parameters ?, respectively. That is, we sample first x0:T given ?,
then ? given x0:T , etc. However, the full conditionals are not explicitly available. Instead, we draw
samples from specially tailored Markov kernels, leaving these conditionals invariant. We address
these steps in the subsequent sections.
Algorithm 1 Particle Gibbs with ancestor sampling (PGAS)
1. Set θ[0] and x_{1:T}[0] arbitrarily.
2. For ℓ ≥ 1 do
(a) Draw θ[ℓ] conditionally on x_{0:T}[ℓ − 1] and y_{0:T} as discussed in Section 3.3.2.
(b) Run CPF-AS (see [4]) targeting p(x_{0:T} | θ[ℓ], y_{0:T}), conditionally on x_{0:T}[ℓ − 1].
(c) Sample k with P(k = i) = w_T^i and set x_{1:T}[ℓ] = x_{1:T}^k.
3. end
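The outer loop of Algorithm 1 can be written as a short skeleton, with the model-specific pieces passed in as callables. The names `sample_theta` and `cpf_as` are placeholders for this sketch, not an API from the paper.

```python
import numpy as np

def pgas(theta_init, xtraj_init, sample_theta, cpf_as, n_iters, rng):
    """Skeleton of PGAS (Algorithm 1). `sample_theta(xtraj, rng)` draws
    theta | x_{0:T}, y_{0:T}; `cpf_as(theta, xtraj, rng)` returns particle
    trajectories and their normalized final weights."""
    theta, xtraj = theta_init, xtraj_init
    samples = []
    for _ in range(n_iters):
        theta = sample_theta(xtraj, rng)                 # step (a)
        particles, wT = cpf_as(theta, xtraj, rng)        # step (b)
        k = rng.choice(len(particles), p=wT)             # step (c)
        xtraj = particles[k]
        samples.append((theta, xtraj))
    return samples

# Dummy callables that only exercise the control flow (illustrative).
rng = np.random.default_rng(3)
dummy_cpf = lambda th, xt, r: ([xt, xt + 1.0], np.array([0.5, 0.5]))
dummy_theta = lambda xt, r: r.standard_normal()
out = pgas(0.0, np.zeros(4), dummy_theta, dummy_cpf, n_iters=10, rng=rng)
```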
3.3.1
Sampling the State Trajectories
To sample the state trajectory, PGAS makes use of an SMC-like procedure referred to as a conditional particle filter with ancestor sampling (CPF-AS). This approach is particularly suitable for non-Markovian latent variable models, as it relies only on a forward recursion (see [4]). The difference between a standard particle filter (PF) and the CPF-AS is that, for the latter, one particle at each time step is specified a priori. Let these particles be denoted x̃_{0:T} = {x̃_0, …, x̃_T}. We then sample according to (5) only for i = 1, …, N − 1. The N-th particle is set deterministically: x_t^N = x̃_t. To be able to construct the N-th particle trajectory, x_t^N has to be associated with an ancestor particle at time t − 1. This is done by sampling a value for the corresponding ancestor index a_t^N. Following [4], the ancestor sampling probabilities are computed as

w̃_{t−1|T}^i ∝ w_{t−1}^i p({x_{0:t−1}^i, x̃_{t:T}}, y_{0:T}) / p(x_{0:t−1}^i, y_{0:t−1}) ∝ w_{t−1}^i p({x_{0:t−1}^i, x̃_{t:T}}) / p(x_{0:t−1}^i) = w_{t−1}^i p(x̃_{t:T} | x_{0:t−1}^i),   (6)

where the ratio is between the unnormalized target densities up to time T and up to time t − 1, respectively. The second proportionality follows from the mutual conditional independence of the observations, given the states. Here, {x_{0:t−1}^i, x̃_{t:T}} refers to a path in X^{T+1} formed by concatenating the two partial trajectories. The above expression can be computed by using the prior over state trajectories given by (4). The ancestor sampling weights {w̃_{t−1|T}^i}_{i=1}^N are then normalized to sum to 1 and the ancestor index a_t^N is sampled with P(a_t^N = j) = w̃_{t−1|T}^j.

The conditioning on a prespecified collection of particles implies an invariance property in CPF-AS, which is key to our development. More precisely, given x̃_{0:T} let x̃′_{0:T} be generated as follows:
1. Run CPF-AS from time t = 0 to time t = T, conditionally on x̃_{0:T}.
2. Set x̃′_{0:T} to one of the resulting particle trajectories according to P(x̃′_{0:T} = x_{0:T}^i) = w_T^i.
For any N ≥ 2, this procedure defines an ergodic Markov kernel M_θ^N(x̃′_{0:T} | x̃_{0:T}) on X^{T+1}, leaving the exact smoothing distribution p(x_{0:T} | θ, y_{0:T}) invariant [4]. Note that this invariance holds for any N ≥ 2, i.e. the number of particles that are used only affects the mixing rate of the kernel M_θ^N. However, it has been experienced in practice that the autocorrelation drops sharply as N increases [4, 14], and for many models a moderate N is enough to obtain a rapidly mixing kernel.
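Numerically, the ancestor-sampling probabilities of Eq. (6) are best combined in log-space before normalization; a minimal sketch:

```python
import numpy as np

def ancestor_probs(logw_prev, log_cond_prior):
    """Normalized ancestor-sampling probabilities of Eq. (6):
    w~_{t-1|T}^i ∝ w_{t-1}^i * p(x~_{t:T} | x_{0:t-1}^i),
    with both factors supplied in log-space for numerical stability."""
    logw = np.asarray(logw_prev) + np.asarray(log_cond_prior)
    w = np.exp(logw - logw.max())            # subtract max to avoid underflow
    return w / w.sum()

# Three particles with illustrative weights and conditional prior terms.
probs = ancestor_probs(np.log([0.2, 0.3, 0.5]), [-1.0, -2.0, -0.5])
```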
3.3.2
Sampling the Hyper-parameters
Next, we consider sampling the hyper-parameters given a state trajectory and sequence of observations, i.e. from p(θ | x_{0:T}, y_{0:T}). In the following, we consider the common situation where there are distinct hyper-parameters for the likelihood p(y_{0:T} | x_{0:T}, θ_y) and for the prior over trajectories p(x_{0:T} | θ_x). If the prior over the hyper-parameters factorizes between those two groups we obtain p(θ | x_{0:T}, y_{0:T}) ∝ p(θ_y | x_{0:T}, y_{0:T}) p(θ_x | x_{0:T}). We can thus proceed to sample the two groups of hyper-parameters independently. Sampling θ_y will be straightforward in most cases, in particular if conjugate priors for the likelihood are used. Sampling θ_x will, nevertheless, be harder since the covariance function hyper-parameters enter the expression in a non-trivial way. However, we note that once the state trajectory is fixed, we are left with a problem analogous to Gaussian process regression where x_{0:T−1} are the inputs, x_{1:T} are the outputs and Q is the likelihood covariance matrix. Given that the latent dynamics can be marginalized out analytically, sampling the hyper-parameters with slice sampling is straightforward [15].
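A generic univariate slice-sampling update (stepping-out with shrinkage, in the style of [15]) could be applied to each covariance hyper-parameter in turn, with `logp` the unnormalized log-conditional of that hyper-parameter given the state trajectory. This is a sketch, not the authors' implementation; here it is exercised on a standard-normal target.

```python
import numpy as np

def slice_sample(logp, x0, rng, w=1.0, max_out=50):
    """One univariate slice-sampling move leaving exp(logp) invariant."""
    log_y = logp(x0) - rng.exponential()     # log of auxiliary slice height
    left = x0 - w * rng.uniform()
    right = left + w
    for _ in range(max_out):                 # stepping out
        if logp(left) < log_y:
            break
        left -= w
    for _ in range(max_out):
        if logp(right) < log_y:
            break
        right += w
    while True:                              # shrinkage until acceptance
        x1 = rng.uniform(left, right)
        if logp(x1) >= log_y:
            return x1
        if x1 < x0:
            left = x1
        else:
            right = x1

rng = np.random.default_rng(4)
logp = lambda v: -0.5 * v ** 2               # standard normal target (illustrative)
xs = [0.0]
for _ in range(200):
    xs.append(slice_sample(logp, xs[-1], rng))
```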
4
A Sparse GP-SSM Construction and Implementation Details
A naive implementation of the CPF-AS algorithm will give rise to O(T^4) computational complexity, since at each time step t = 1, …, T, a matrix of size T × T needs to be factorized. However, it is possible to update and reuse the factors from the previous time step, bringing the total computational complexity down to the familiar O(T^3). Furthermore, by introducing a sparse GP model, we can reduce the complexity to O(M^2 T) where M ≪ T. In Section 4.1 we introduce the sparse GP
model and in Section 4.2 we provide insight into the efficient implementation of both the vanilla GP
and the sparse GP.
4.1
FIC Prior over the State Trajectory
An important alternative to GP-SSM is given by exchanging the vanilla GP prior over f for a sparse
counterpart. We do not consider the resulting model to be an approximation to GP-SSM, it is still a
GP-SSM, but with a different prior over functions. As a result we expect it to sometimes outperform
its non-sparse version in the same way as it happens with their regression siblings [16].
Most sparse GP methods can be formulated in terms of a set of so called inducing variables [17].
These variables live in the space of the latent function and have a set I of corresponding inducing
inputs. The assumption is that, conditionally on the inducing variables, the latent function values are
mutually independent. Although the inducing variables are marginalized analytically ? this is key for
the model to remain nonparametric ? the inducing inputs have to be chosen in such a way that they,
informally speaking, cover the same region of the input space covered by the data. Crucially, in order
to achieve computational gains, the number M of inducing variables is selected to be smaller than
the original number of data points. In the following, we will use the fully independent conditional
(FIC) sparse GP prior as defined in [17] due to its very good empirical performance [16].
As shown in [17], the FIC prior can be obtained by replacing the covariance function k(·, ·) by

k^FIC(x_i, x_j) = s(x_i, x_j) + δ_{ij} (k(x_i, x_j) − s(x_i, x_j)),   (7)

where s(x_i, x_j) ≜ k(x_i, I) k(I, I)^{−1} k(I, x_j), δ_{ij} is Kronecker's delta and we use the convention whereby when k takes a set as one of its arguments it generates a matrix of covariances. Using the Woodbury matrix identity, we can express the one-step predictive density as in (3), with

μ_t^FIC(x_{0:t−1}) = m_{t−1} + K_{t−1,I} P K_{I,0:t−2} Λ_{0:t−2}^{−1} (x_{1:t−1} − m_{0:t−2}),   (8a)
Σ_t^FIC(x_{0:t−1}) = K̃_{t−1} − S_{t−1} + K_{t−1,I} P K_{I,t−1},   (8b)

where P ≜ (K_{I,I} + K_{I,0:t−2} Λ_{0:t−2}^{−1} K_{0:t−2,I})^{−1}, Λ_{0:t−2} ≜ diag[K̃_{0:t−2} − S_{0:t−2}] and S_{A,B} ≜ K_{A,I} K_{I,I}^{−1} K_{I,B}. Despite its apparent cumbersomeness, the computational complexity involved in computing the above mean and covariance is O(M^2 t), as opposed to O(t^3) for (3). The same idea can be used to express (4) in a form which allows for efficient computation. Here diag refers to a block diagonalization if Q is not diagonal.
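The FIC construction (7) is easy to state in matrix form: replace the exact covariance by its low-rank (Nyström) part S plus a diagonal correction. A sketch, with the inducing inputs Z assumed given (choosing them is a separate problem) and an illustrative squared-exponential kernel:

```python
import numpy as np

def rbf(A, B):
    """Squared-exponential kernel matrix between 1-D input sets (illustrative)."""
    return np.exp(-0.5 * (A[:, None] - B[None, :]) ** 2)

def fic_cov(X, Z, jitter=1e-10):
    """K_FIC = S + diag(K - S) with S = K_XZ K_ZZ^{-1} K_ZX, cf. Eq. (7)."""
    Kxx, Kxz, Kzz = rbf(X, X), rbf(X, Z), rbf(Z, Z)
    S = Kxz @ np.linalg.solve(Kzz + jitter * np.eye(len(Z)), Kxz.T)
    return S + np.diag(np.diag(Kxx - S))

X = np.linspace(-2.0, 2.0, 8)
K_exact = rbf(X, X)
K_fic_full = fic_cov(X, X)                      # with Z = X, FIC recovers the exact prior
K_fic_sparse = fic_cov(X, np.array([-1.0, 0.0, 1.0]))
```

A useful sanity check is that the FIC diagonal always matches the exact prior variances, since the diagonal correction restores diag(K) by construction.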
We do not address the problem of choosing the inducing inputs, but note that one option is to use
greedy methods (e.g. [18]). The fast forward selection algorithm is appealing due to its very low
computational complexity [18]. Moreover, its potential drawback of interference between hyperparameter learning and active set selection is not an issue in our case since hyper-parameters will be
fixed for a given run of the particle filter.
4.2
Implementation Details
As pointed out above, it is crucial to reuse computations across time to attain the O(T^3) or O(M^2 T) computational complexity for the vanilla GP and the FIC prior, respectively. We start by discussing
the vanilla GP and then briefly comment on the implementation aspects of FIC.
There are two costly operations of the CPF-AS algorithm: (i) sampling from the prior (5), requiring the computation of (3b) and (3c), and (ii) evaluating the ancestor sampling probabilities according to (6). Both of these operations can be carried out efficiently by keeping track of a Cholesky factorization of the matrix K̃({x_{0:t−1}^i, x̃_{t:T−1}}) = L_t^i L_t^{i⊤}, for each particle i = 1, …, N. Here, K̃({x_{0:t−1}^i, x̃_{t:T−1}}) is a matrix defined analogously to K̃_{0:T−1}, but where the covariance function is evaluated for the concatenated state trajectory {x_{0:t−1}^i, x̃_{t:T−1}}. From L_t^i, it is possible to identify sub-matrices corresponding to the Cholesky factors for the covariance matrix Σ_t(x_{0:t−1}^i) as well as for the matrices needed to efficiently evaluate the ancestor sampling probabilities (6).

It remains to find an efficient update of the Cholesky factor to obtain L_{t+1}^i. As we move from time t to t + 1 in the algorithm, x̃_t will be replaced by x_t^i in the concatenated trajectory. Hence, the matrix K̃({x_{0:t}^i, x̃_{t+1:T−1}}) can be obtained from K̃({x_{0:t−1}^i, x̃_{t:T−1}}) by replacing n_x rows and columns, corresponding to a rank 2n_x update. It follows that we can compute L_{t+1}^i by making n_x successive rank one updates and downdates on L_t^i. In summary, all the operations at a specific time step can be done in O(T^2) computations, leading to a total computational complexity of O(T^3).
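The rank-one update that makes this reuse possible is a standard linear-algebra routine. One form of the O(n^2) Cholesky update for L L^⊤ + v v^⊤ is sketched below (the downdate is analogous, with sign flips):

```python
import numpy as np

def chol_update(L, v):
    """Return L' with L' L'^T = L L^T + v v^T, given lower-triangular L.
    Runs in O(n^2), versus O(n^3) for refactorizing from scratch."""
    L, v = L.copy(), v.astype(float).copy()
    n = len(v)
    for i in range(n):
        r = np.hypot(L[i, i], v[i])
        c, s = r / L[i, i], v[i] / L[i, i]
        L[i, i] = r
        if i + 1 < n:
            L[i + 1:, i] = (L[i + 1:, i] + s * v[i + 1:]) / c
            v[i + 1:] = c * v[i + 1:] - s * L[i + 1:, i]
    return L

rng = np.random.default_rng(5)
A = rng.standard_normal((6, 6))
A = A @ A.T + 6 * np.eye(6)          # a well-conditioned SPD test matrix
v = rng.standard_normal(6)
L2 = chol_update(np.linalg.cholesky(A), v)
```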
For the FIC prior, a naive implementation will give rise to O(M^2 T^2) computational complexity. This can be reduced to O(M^2 T) by keeping track of a factorization for the matrix P. However, to reach the O(M^2 T) cost all intermediate operations scaling with T have to be avoided, requiring us to reuse not only the matrix factorizations, but also intermediate matrix-vector multiplications.
5
Learning the Dynamics
Algorithm 1 gives us a tool to compute p(x_{0:T}, θ | y_{1:T}). We now discuss how this can be used to find an explicit model for f. The goal of learning the state transition dynamics is equivalent to that of obtaining a predictive distribution over f* = f(x*), evaluated at an arbitrary test point x*,

p(f* | x*, y_{1:T}) = ∫ p(f* | x*, x_{0:T}, θ) p(x_{0:T}, θ | y_{1:T}) dx_{0:T} dθ.   (9)

Using a sample-based approximation of p(x_{0:T}, θ | y_{1:T}), this integral can be approximated by

p(f* | x*, y_{1:T}) ≈ (1/L) Σ_{ℓ=1}^L p(f* | x*, x_{0:T}[ℓ], θ[ℓ]) = (1/L) Σ_{ℓ=1}^L N(f* | μ_ℓ(x*), Σ_ℓ(x*)),   (10)

where L is the number of samples and μ_ℓ(x*) and Σ_ℓ(x*) follow the expressions for the predictive distribution in standard GP regression if x_{0:T−1}[ℓ] are treated as inputs, x_{1:T}[ℓ] are treated as outputs and Q is the likelihood covariance matrix. This mixture of Gaussians is an expressive representation of the predictive density which can, for instance, correctly take into account multimodality arising from ambiguity in the measurements. Although factorized covariance matrices can be pre-computed, the overall computational cost will increase linearly with L. The computational cost can be reduced by thinning the Markov chain using e.g. random sub-sampling or kernel herding [19].

In some situations it could be useful to obtain an approximation from the mixture of Gaussians consisting in a single GP representation. This is the case in applications such as control or real-time filtering where the cost of evaluating the mixture of Gaussians can be prohibitive. In those cases one could opt for a pragmatic approach and learn the mapping x* ↦ f* from a cloud of points {x_{0:T}[ℓ], f_{0:T}[ℓ]}_{ℓ=1}^L using sparse GP regression. The latent function values f_{0:T}[ℓ] can be easily sampled from the normally distributed p(f_{0:T}[ℓ] | x_{0:T}[ℓ], θ[ℓ]).
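Evaluating the mixture-of-Gaussians predictive (10) on a grid of test outputs is direct; a 1-D sketch with illustrative component parameters:

```python
import numpy as np

def mog_predictive(f_grid, mus, variances):
    """p(f* | x*, data) ≈ (1/L) Σ_ℓ N(f* | μ_ℓ(x*), Σ_ℓ(x*)), cf. Eq. (10),
    evaluated pointwise on f_grid for a scalar state."""
    f = np.asarray(f_grid)[:, None]
    mu = np.asarray(mus)[None, :]
    var = np.asarray(variances)[None, :]
    comp = np.exp(-0.5 * (f - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
    return comp.mean(axis=1)             # average over the L posterior samples

grid = np.linspace(-5.0, 5.0, 1001)
dens = mog_predictive(grid, mus=[-2.0, 2.0], variances=[0.5, 0.5])
```

With well-separated component means the resulting density is bimodal, which is exactly the behaviour a single-Gaussian summary would lose.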
6
Experiments
6.1
Learning a Nonlinear System Benchmark
Consider a system with dynamics given by x_{t+1} = a x_t + b x_t/(1 + x_t^2) + c u_t + v_t, v_t ∼ N(0, q), and observations given by y_t = d x_t^2 + e_t, e_t ∼ N(0, r), with parameters (a, b, c, d, q, r) = (0.5, 25, 8, 0.05, 10, 1) and a known input u_t = cos(1.2(t + 1)). One of the difficulties of this
system is that the smoothing density p(x0:T | y0:T ) is multimodal since no information about the
sign of xt is available in the observations. The system is simulated for T = 200 time steps, using
log-normal priors for the hyper-parameters, and the PGAS sampler is then run for 50 iterations using
N = 20 particles. To illustrate the capability of the GP-SSM to make use of a parametric model as
baseline, we use a mean function with the same parametric form as the true system, but parameters
(a, b, c) = (0.3, 7.5, 0). This function, denoted model B, is manifestly different to the actual state
transition (green vs. black surfaces in Figure 2), also demonstrating the flexibility of the GP-SSM.
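For reference, this benchmark system is easy to reproduce; a direct simulation sketch (self-contained, not the authors' code):

```python
import numpy as np

def simulate_benchmark(T, rng, a=0.5, b=25.0, c=8.0, d=0.05, q=10.0, r=1.0):
    """Simulate the Section 6.1 benchmark:
    x_{t+1} = a x_t + b x_t/(1+x_t^2) + c u_t + v_t,  u_t = cos(1.2 (t+1)),
    y_t     = d x_t^2 + e_t."""
    x = np.zeros(T + 1)
    y = np.zeros(T + 1)
    for t in range(T + 1):
        y[t] = d * x[t] ** 2 + np.sqrt(r) * rng.standard_normal()
        if t < T:
            u = np.cos(1.2 * (t + 1))
            x[t + 1] = (a * x[t] + b * x[t] / (1 + x[t] ** 2) + c * u
                        + np.sqrt(q) * rng.standard_normal())
    return x, y

x, y = simulate_benchmark(200, np.random.default_rng(6))
```

Since the observation depends only on x_t^2, the sign of the state is unidentifiable from a single measurement, which is the source of the bimodal smoothing distribution noted above.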
Figure 2 (left) shows the samples of x0:T (red). It is apparent that the distribution covers two
alternative state trajectories at particular times (e.g. t = 10). In fact, it is always the case that
this bi-modal distribution covers the two states of opposite signs that could have led to the same
observation (cyan). In Figure 2 (right) we plot samples from the smoothing distribution, where
each circle corresponds to (xt , ut , E[ft ]). Although the parametric model used in the mean function
of the GP (green) is clearly not representative of the true dynamics (black), the samples from the
smoothing distribution accurately portray the underlying system. The smoothness prior embodied
by the GP allows for accurate sampling from the smoothing distribution even when the parametric
model of the dynamics fails to capture important features.
To measure the predictive capability of the learned transition dynamics, we generate a new dataset
consisting of 10 000 time steps and present the RMSE between the predicted value of f (xt , ut )
and the actual one. We compare the results from GP-SSM with the predictions obtained from two
parametric models (one with the true model structure and one linear model) and two known models
(the ground truth model and model B). We also report results for the sparse GP-SSM using an
FIC prior with 40 inducing points. Table 1 summarizes the results, averaged over 10 independent
training and test datasets. We also report the RMSE from the joint smoothing sample to the ground
truth trajectory.
Table 1: RMSE to ground truth values over 10 independent runs.

                                                   prediction of              smoothing
RMSE                                               f* | x*_t, u*_t, data      x_{0:T} | data
Ground truth model (known parameters)              –                          2.7 ± 0.5
GP-SSM (proposed, model B mean function)           1.7 ± 0.2                  3.2 ± 0.5
Sparse GP-SSM (proposed, model B mean function)    1.8 ± 0.2                  2.7 ± 0.4
Model B (fixed parameters)                         7.1 ± 0.0                  13.6 ± 1.1
Ground truth model, learned parameters             0.5 ± 0.2                  3.0 ± 0.4
Linear model, learned parameters                   5.5 ± 0.1                  6.0 ± 0.5
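For reference, the RMSE values above are computed in the usual way from predicted and true values; a minimal sketch (the arrays here are hypothetical, not the paper's data):

```python
import numpy as np

def rmse(predicted, actual):
    """Root-mean-square error between predicted and true values."""
    predicted = np.asarray(predicted, dtype=float)
    actual = np.asarray(actual, dtype=float)
    return float(np.sqrt(np.mean((predicted - actual) ** 2)))

# Hypothetical predictions of f(x_t, u_t) on a held-out trajectory.
f_true = [0.0, 1.0, 2.0, 3.0]
f_pred = [0.5, 1.5, 1.5, 2.5]
print(rmse(f_pred, f_true))  # 0.5
```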
Figure 2: Left: Smoothing distribution. Right: State transition function (black: actual transition
function, green: mean function (model B) and red: smoothing samples).
Figure 3: One step ahead predictive distribution for each of the states of the cart and pole system.
Black: ground truth. Colored band: one standard deviation from the mixture of Gaussians predictive.
6.2 Learning a Cart and Pole System
We apply our approach to learn a model of a cart and pole system used in reinforcement learning.
The system consists of a cart, with a free-spinning pendulum, rolling on a horizontal track. An
external force is applied to the cart. The system's dynamics can be described by four states and a
set of nonlinear ordinary differential equations [20]. We learn a GP-SSM based on 100 observations
of the state corrupted with Gaussian noise. Although the training set only explores a small region
of the 4-dimensional state space, we can learn a model of the dynamics which can produce one step
ahead predictions such as the ones in Figure 3. We obtain a predictive distribution in the form of a
mixture of Gaussians from which we display the first and second moments. Crucially, the learned
model reports different amounts of uncertainty in different regions of the state-space. For instance,
note the narrower error-bars on some states between t = 320 and t = 350. This is due to the model
being more confident in its predictions in areas that are closer to the training data.
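The bands in Figure 3 are the first two moments of that Gaussian mixture. A small sketch of the moment computation (the function name and inputs are ours; in the paper the components would come from the sampled GP-SSM models):

```python
import numpy as np

def mixture_moments(means, variances, weights=None):
    """Mean and variance of a univariate Gaussian mixture.

    Each Monte Carlo sample of the model contributes one component
    N(means[i], variances[i]); equal weights correspond to unweighted
    posterior samples.
    """
    means = np.asarray(means, dtype=float)
    variances = np.asarray(variances, dtype=float)
    if weights is None:
        weights = np.full(means.shape, 1.0 / means.size)
    w = np.asarray(weights, dtype=float)
    mix_mean = float(np.sum(w * means))
    # Law of total variance: E[component variance] + Var[component mean].
    mix_var = float(np.sum(w * (variances + means ** 2)) - mix_mean ** 2)
    return mix_mean, mix_var

m, v = mixture_moments([0.0, 2.0], [1.0, 1.0])
print(m, v)  # 1.0 2.0
```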
7 Conclusions
We have shown an efficient way to perform fully Bayesian inference and learning in the GP-SSM.
A key contribution is that our approach retains the full nonparametric expressivity of the model.
This is made possible by marginalizing out the state transition function, which results in a nontrivial inference problem that we solve using a tailored PGAS sampler.
A particular characteristic of our approach is that the latent states can be sampled from the smoothing
distribution even when the state transition function is unknown. Assumptions about smoothness
and parsimony of this function embodied by the GP prior suffice to obtain high-quality smoothing
distributions. Once samples from the smoothing distribution are available, they can be used to
describe a posterior over the state transition function. This contrasts with the conventional approach
to inference in dynamical systems where smoothing is performed conditioned on a model of the
state transition dynamics.
References
[1] C. E. Rasmussen and C. K. I. Williams, Gaussian Processes for Machine Learning. MIT Press, 2006.
[2] R. Turner, M. P. Deisenroth, and C. E. Rasmussen, "State-space inference and learning with Gaussian processes," in 13th International Conference on Artificial Intelligence and Statistics, ser. W&CP, Y. W. Teh and M. Titterington, Eds., vol. 9, Chia Laguna, Sardinia, Italy, May 13–15, 2010, pp. 868–875.
[3] C. Andrieu, A. Doucet, and R. Holenstein, "Particle Markov chain Monte Carlo methods," Journal of the Royal Statistical Society: Series B (Statistical Methodology), vol. 72, no. 3, pp. 269–342, 2010.
[4] F. Lindsten, M. Jordan, and T. B. Schön, "Ancestor sampling for particle Gibbs," in Advances in Neural Information Processing Systems 25, P. Bartlett, F. Pereira, C. Burges, L. Bottou, and K. Weinberger, Eds., 2012, pp. 2600–2608.
[5] M. Deisenroth, R. Turner, M. Huber, U. Hanebeck, and C. Rasmussen, "Robust filtering and smoothing with Gaussian processes," IEEE Transactions on Automatic Control, vol. 57, no. 7, pp. 1865–1871, July 2012.
[6] M. Deisenroth and S. Mohamed, "Expectation Propagation in Gaussian process dynamical systems," in Advances in Neural Information Processing Systems 25, P. Bartlett, F. Pereira, C. Burges, L. Bottou, and K. Weinberger, Eds., 2012, pp. 2618–2626.
[7] Z. Ghahramani and S. Roweis, "Learning nonlinear dynamical systems using an EM algorithm," in Advances in Neural Information Processing Systems 11, M. J. Kearns, S. A. Solla, and D. A. Cohn, Eds. MIT Press, 1999.
[8] J. Wang, D. Fleet, and A. Hertzmann, "Gaussian process dynamical models," in Advances in Neural Information Processing Systems 18, Y. Weiss, B. Schölkopf, and J. Platt, Eds. Cambridge, MA: MIT Press, 2006, pp. 1441–1448.
[9] J. S. Liu, Monte Carlo Strategies in Scientific Computing. Springer, 2001.
[10] A. Doucet and A. Johansen, "A tutorial on particle filtering and smoothing: Fifteen years later," in The Oxford Handbook of Nonlinear Filtering, D. Crisan and B. Rozovsky, Eds. Oxford University Press, 2011.
[11] F. Gustafsson, "Particle filter theory and practice with positioning applications," IEEE Aerospace and Electronic Systems Magazine, vol. 25, no. 7, pp. 53–82, 2010.
[12] M. K. Pitt and N. Shephard, "Filtering via simulation: Auxiliary particle filters," Journal of the American Statistical Association, vol. 94, no. 446, pp. 590–599, 1999.
[13] F. Lindsten and T. B. Schön, "Backward simulation methods for Monte Carlo statistical inference," Foundations and Trends in Machine Learning, vol. 6, no. 1, pp. 1–143, 2013.
[14] F. Lindsten and T. B. Schön, "On the use of backward simulation in the particle Gibbs sampler," in Proceedings of the 2012 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Kyoto, Japan, Mar. 2012.
[15] D. K. Agarwal and A. E. Gelfand, "Slice sampling for simulation based fitting of spatial data models," Statistics and Computing, vol. 15, no. 1, pp. 61–69, 2005.
[16] E. Snelson and Z. Ghahramani, "Sparse Gaussian processes using pseudo-inputs," in Advances in Neural Information Processing Systems (NIPS), Y. Weiss, B. Schölkopf, and J. Platt, Eds., Cambridge, MA, 2006, pp. 1257–1264.
[17] J. Quiñonero-Candela and C. E. Rasmussen, "A unifying view of sparse approximate Gaussian process regression," Journal of Machine Learning Research, vol. 6, pp. 1939–1959, 2005.
[18] M. Seeger, C. Williams, and N. Lawrence, "Fast Forward Selection to Speed Up Sparse Gaussian Process Regression," in Artificial Intelligence and Statistics 9, 2003.
[19] Y. Chen, M. Welling, and A. Smola, "Super-samples from kernel herding," in Proceedings of the 26th Conference on Uncertainty in Artificial Intelligence (UAI 2010), P. Grünwald and P. Spirtes, Eds. AUAI Press, 2010.
[20] M. Deisenroth, "Efficient reinforcement learning using Gaussian processes," Ph.D. dissertation, Karlsruher Institut für Technologie, 2010.
Multi-Task Bayesian Optimization
Kevin Swersky
Department of Computer Science
University of Toronto
[email protected]
Jasper Snoek*
School of Engineering and Applied Sciences
Harvard University
[email protected]
Ryan P. Adams
School of Engineering and Applied Sciences
Harvard University
[email protected]
Abstract
Bayesian optimization has recently been proposed as a framework for automatically tuning the hyperparameters of machine learning models and has been shown
to yield state-of-the-art performance with impressive ease and efficiency. In this
paper, we explore whether it is possible to transfer the knowledge gained from
previous optimizations to new tasks in order to find optimal hyperparameter settings more efficiently. Our approach is based on extending multi-task Gaussian
processes to the framework of Bayesian optimization. We show that this method
significantly speeds up the optimization process when compared to the standard
single-task approach. We further propose a straightforward extension of our algorithm in order to jointly minimize the average error across multiple tasks and
demonstrate how this can be used to greatly speed up k-fold cross-validation.
Lastly, we propose an adaptation of a recently developed acquisition function, entropy search, to the cost-sensitive, multi-task setting. We demonstrate the utility
of this new acquisition function by leveraging a small dataset to explore hyperparameter settings for a large dataset. Our algorithm dynamically chooses which
dataset to query in order to yield the most information per unit cost.
1
Introduction
The proper setting of high-level hyperparameters in machine learning algorithms (regularization weights, learning rates, etc.) is crucial for successful generalization. The difference between poor
settings and good settings of hyperparameters can be the difference between a useless model and
state-of-the-art performance. Surprisingly, hyperparameters are often treated as secondary considerations and are not set in a documented and repeatable way. As the field matures, machine learning
models are becoming more complex, leading to an increase in the number of hyperparameters, which
often interact with each other in non-trivial ways. As the space of hyperparameters grows, the task of
tuning them can become daunting, as well-established techniques such as grid search either become
too slow, or too coarse, leading to poor results in both performance and training time.
Recent work in machine learning has revisited the idea of Bayesian optimization [1, 2, 3, 4, 5, 6, 7],
a framework for global optimization that provides an appealing approach to the difficult exploration-exploitation tradeoff. These techniques have been shown to obtain excellent performance on a variety of models, while remaining efficient in terms of the number of required function evaluations,
corresponding to the number of times a model needs to be trained.
One issue with Bayesian optimization is the so-called "cold start" problem. The optimization must
be carried out from scratch each time a model is applied to new data. If a model will be applied to
* Research was performed while at the University of Toronto.
many different datasets, or even just a few extremely large datasets, then there may be a significant
overhead to re-exploring the same hyperparameter space. Machine learning researchers are often
faced with this problem, and one appealing solution is to transfer knowledge from one domain to the
next. This could manifest itself in many ways, including establishing the values for a grid search, or
simply taking certain hyperparameters as fixed with some commonly accepted value. Indeed, it is
this knowledge that often separates an expert machine learning practitioner from a novice.
The question that this paper explores is whether we can incorporate the same kind of transfer of
knowledge within the Bayesian optimization framework. Such a tool would allow researchers and
practitioners to leverage previously trained models in order to quickly tune new ones. Furthermore,
for large datasets one could imagine exploring a wide range of hyperparameters on a small subset of
data, and then using this knowledge to quickly find an effective setting on the full dataset with just a
few function evaluations.
In this paper, we propose multi-task Bayesian optimization to solve this problem. The basis for
the idea is to apply well-studied multi-task Gaussian process models to the Bayesian optimization
framework. By treating new domains as new tasks, we can adaptively learn the degree of correlation
between domains and use this information to hone the search algorithm. We demonstrate the utility
of this approach in a number of different settings: using prior optimization runs to bootstrap new
runs; optimizing multiple tasks simultaneously when the goal is maximizing average performance;
and utilizing a small version of a dataset to explore hyperparameter settings for the full dataset. Our
approach is fully automatic, requires minimal human intervention and yields substantial improvements in terms of the speed of optimization.
2 Background
2.1 Gaussian Processes
Gaussian processes (GPs) [8] are a flexible class of models for specifying prior distributions over functions f : X → R. They are defined by the property that any finite set of N points X = {x_n ∈ X}_{n=1}^N induces a Gaussian distribution on R^N. The convenient properties of the Gaussian distribution allow us to compute marginal and conditional means and variances in closed form. GPs are specified by a mean function m : X → R and a positive definite covariance, or kernel function K : X × X → R. The predictive mean and covariance under a GP can be respectively expressed as:

μ(x; {x_n, y_n}, θ) = K(X, x)^T K(X, X)^{-1} (y - m(X)),        (1)

Σ(x, x'; {x_n, y_n}, θ) = K(x, x') - K(X, x)^T K(X, X)^{-1} K(X, x').        (2)
Here K(X, x) is the N-dimensional column vector of cross-covariances between x and the set X. The N × N matrix K(X, X) is the Gram matrix for the set X. As in [6] we use the Matérn 5/2 kernel and we marginalize over the kernel parameters θ using slice sampling [9].
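As an illustration of Eqs. (1)-(2) with the Matérn 5/2 kernel, the following sketch computes the predictive mean and covariance in NumPy. This is a toy version only: the kernel parameters θ are fixed here rather than slice-sampled, and a zero mean function is assumed.

```python
import numpy as np

def matern52(X1, X2, lengthscale=1.0, variance=1.0):
    """Matern 5/2 kernel between inputs X1 (N, D) and X2 (M, D)."""
    sq = np.sum((X1[:, None, :] - X2[None, :, :]) ** 2, axis=-1)
    r = np.sqrt(np.maximum(sq, 0.0))
    s = np.sqrt(5.0) * r / lengthscale
    return variance * (1.0 + s + s ** 2 / 3.0) * np.exp(-s)

def gp_predict(Xs, X, y, kernel, noise=1e-6):
    """Predictive mean and covariance, Eqs. (1)-(2), with m(x) = 0."""
    K = kernel(X, X) + noise * np.eye(X.shape[0])         # K(X, X) + noise
    Ks = kernel(X, Xs)                                    # K(X, x*)
    alpha = np.linalg.solve(K, y)                         # K(X, X)^{-1} (y - m(X))
    mu = Ks.T @ alpha                                     # Eq. (1)
    cov = kernel(Xs, Xs) - Ks.T @ np.linalg.solve(K, Ks)  # Eq. (2)
    return mu, cov

X = np.array([[0.0], [1.0], [2.0]])
y = np.sin(X[:, 0])
mu, cov = gp_predict(np.array([[1.0]]), X, y, matern52)
print(float(mu[0]))  # close to sin(1), since we query a training input
```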
2.2 Multi-Task Gaussian Processes
In the field of geostatistics [10, 11], and more recently in the field of machine learning [12, 13, 14],
Gaussian processes have been extended to the case of vector-valued functions, i.e., f : X → R^T.
We can interpret the T outputs of such functions as belonging to different regression tasks. The
key to modeling such functions with Gaussian processes is to define a useful covariance function
K((x, t), (x0 , t0 )) between input-task pairs. One simple approach is called the intrinsic model of
coregionalization [12, 11, 13], which transforms a latent function to produce each output. Formally,
Kmulti ((x, t), (x0 , t0 )) = Kt (t, t0 ) ? Kx (x, x0 ),
(3)
where ? denotes the Kronecker product, Kx measures the relationship between inputs, and Kt
measures the relationship between tasks. Given Kmulti , this is simply a standard GP. Therefore, the
complexity still grows cubically in the total number of observations.
Along with the other kernel parameters, we infer the parameters of Kt using slice sampling. Specifically, we represent Kt by its Cholesky factor and sample in that space. For our purposes, it is
reasonable to assume a positive correlation between tasks. We found that sampling each element of
the Cholesky in log space and then exponentiating adequately satisfied this constraint.
2.3 Bayesian Optimization for a Single Task
Bayesian optimization is a general framework for the global optimization of noisy, expensive, black-box functions [15]. The strategy is based on the notion that one can use a relatively cheap probabilistic model to query as a surrogate for the financially, computationally or physically expensive function that is subject to the optimization. Bayes' rule is used to derive the posterior estimate of the
true function given observations, and the surrogate is then used to determine the next most promising
point to query. A common approach is to use a GP to define a distribution over objective functions
from the input space to a loss that one wishes to minimize. That is, given observation pairs of the
form {x_n, y_n}_{n=1}^N, where x_n ∈ X and y_n ∈ R, we assume that the function f(x) is drawn from a Gaussian process prior where y_n ∼ N(f(x_n), ν) and ν is the function observation noise variance.
A standard approach is to select the next point to query by finding the maximum of an acquisition
function a(x; {x_n, y_n}, θ) over a bounded domain in X. This is a heuristic function that uses the posterior mean and uncertainty, conditioned on the GP hyperparameters θ, in order to balance exploration and exploitation. There have been many proposals for acquisition functions, or combinations
thereof [16, 2]. We will use the expected improvement criterion (EI) [15, 17],
a_EI(x; {x_n, y_n}, θ) = √Σ(x, x; {x_n, y_n}, θ) (γ(x) Φ(γ(x)) + N(γ(x); 0, 1)),        (4)

γ(x) = (y_best - μ(x; {x_n, y_n}, θ)) / √Σ(x, x; {x_n, y_n}, θ).        (5)
Here Φ(·) is the cumulative distribution function of the standard normal, and γ(x) is a Z-score.
Due to its simple form, EI can be locally optimized using standard black-box optimization algorithms [6].
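As a concrete sketch of Eqs. (4)-(5) for a minimization problem (our own illustrative code; the standard normal CDF is written via erf):

```python
import math

def expected_improvement(mu, sigma, y_best):
    """EI of Eqs. (4)-(5): mu and sigma are the GP predictive mean and
    standard deviation at a candidate x; y_best is the lowest observed
    value of the objective so far."""
    if sigma <= 0.0:
        return 0.0
    gamma = (y_best - mu) / sigma                          # Eq. (5)
    Phi = 0.5 * (1.0 + math.erf(gamma / math.sqrt(2.0)))   # standard normal CDF
    pdf = math.exp(-0.5 * gamma ** 2) / math.sqrt(2.0 * math.pi)
    return sigma * (gamma * Phi + pdf)                     # Eq. (4)

print(round(expected_improvement(0.0, 1.0, 0.0), 4))  # 0.3989
```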
An alternative to heuristic acquisition functions such as EI is to consider a distribution over the
minimum of the function and to iteratively evaluate points that will most decrease the entropy of
this distribution. This entropy search strategy [18] has the appealing interpretation of decreasing
the uncertainty over the location of the minimum at each optimization step. Here, we formulate the
entropy search problem as that of selecting the next point from a pre-specified candidate set. Given a
set of C points X̃ ⊂ X, we can write the probability of a point x ∈ X̃ having the minimum function value among the points in X̃ via:
Pr(min at x | θ, X̃, {x_n, y_n}_{n=1}^N) = ∫_{R^C} p(f | x, θ, {x_n, y_n}_{n=1}^N) ∏_{x̃ ∈ X̃\x} h(f(x̃) - f(x)) df,        (6)
where f is the vector of function values at the points X̃ and h is the Heaviside step function.
The entropy search procedure relies on an estimate of the reduction in uncertainty over this distribution if the value y at x is revealed. Writing Pr(min at x | θ, X̃, {x_n, y_n}_{n=1}^N) as P_min, p(f | x, θ, {x_n, y_n}_{n=1}^N) as p(f | x) and the GP likelihood function as p(y | f) for brevity, and using H(P) to denote the entropy of P, the objective is to find the point x from a set of candidates which maximizes the information gain over the distribution of the location of the minimum,
a_KL(x) = ∫∫ [H(P_min) - H(P_min^y)] p(y | f) p(f | x) dy df,        (7)
where P_min^y indicates that the fantasized observation {x, y} has been added to the observation set.
Although (7) does not have a simple form, we can use Monte Carlo to approximate it by sampling f.
An alternative to this formulation is to consider the reduction in entropy relative to a uniform base
distribution; however, we found that the formulation given by Equation (7) works better in practice.
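Because f is jointly Gaussian over the finite candidate set, the integral in Eq. (6) is straightforward to estimate by Monte Carlo: draw joint samples of f and count how often each candidate attains the minimum. A sketch under that approach (names are ours):

```python
import numpy as np

def pmin_monte_carlo(mu, Sigma, num_samples=20000, seed=0):
    """Monte Carlo estimate of Pr(min at x) over C candidates, Eq. (6).

    mu (C,) and Sigma (C, C) are the GP posterior mean and covariance at
    the candidate points; each joint draw of f contributes one count to
    the candidate achieving the minimum (the product of Heaviside steps).
    """
    rng = np.random.default_rng(seed)
    jitter = 1e-9 * np.eye(len(mu))
    f = rng.multivariate_normal(mu, Sigma + jitter, size=num_samples)
    counts = np.bincount(np.argmin(f, axis=1), minlength=len(mu))
    return counts / num_samples

# Two candidates: the one with the lower mean is more likely the minimum.
p = pmin_monte_carlo(np.array([0.0, 1.0]), np.eye(2))
print(p.sum())  # 1.0
```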
3 Multi-Task Bayesian Optimization
3.1 Transferring Bayesian Optimization to a New Task
Under the framework of multi-task GPs, performing optimization on a related task is fairly straightforward. We simply restrict our future observations to the task of interest and proceed as normal.
Once we have enough observations on the task of interest to properly estimate Kt , then the other
tasks will act as additional observations without requiring any additional function evaluations. An
illustration of a multi-task GP versus a single-task GP and its effect on EI is given in Figure 1. This
approach can be thought of as a special case of contextual Gaussian process bandits [19].
(a) Multi-task GP sample functions
(b) Independent GP predictions
(c) Multi-task GP predictions
Figure 1: (a) A sample function with three tasks from a multi-task GP. Tasks 2 and 3 are correlated, 1 and 3 are
anti-correlated, and 1 and 2 are uncorrelated. (b) independent and (c) multi-task predictions on the third task.
The dots represent observations, while the dashed line represents the predictive mean. Here we show a function
over three tasks and corresponding observations, where the goal is to minimize the function over the third task.
The curve shown on the bottom represents the expected improvement for each input location on this task. The
independent GP fails to adequately represent the function and optimizing EI leads to a spurious evaluation. The
multi-task GP utilizes the other tasks and the maximal EI point corresponds to the true minimum.
3.2 Optimizing an Average Function over Multiple Tasks
Here we will consider optimizing the average function over multiple tasks. This has elements of
both single and multi-task settings since we have a single objective representing a joint function
over multiple tasks. We motivate this approach by considering a finer-grained version of Bayesian
optimization over k-fold cross validation. We wish to optimize the average performance over all k
folds, but it may not be necessary to actually evaluate all of them in order to identify the quality of
the hyperparameters under consideration. The predictive mean and variance of the average objective
are given by:
μ̄(x) = (1/k) ∑_{t=1}^{k} μ(x, t; {x_n, y_n}, θ),        σ̄(x)² = (1/k²) ∑_{t=1}^{k} ∑_{t'=1}^{k} Σ(x, x, t, t'; {x_n, y_n}, θ).        (8)
If we are willing to spend one function evaluation on each task for every point x that we query,
then the optimization of this objective can proceed using standard approaches. In many situations
though, this can be expensive and perhaps even wasteful. As an extreme case, if we have two
perfectly correlated tasks then spending two function evaluations per query provides no additional
information, at twice the cost of a single-task optimization. The more interesting case then is to try
to jointly choose both x as well as the task t and spend only one function evaluation per query.
We choose a (x, t) pair using a two-step heuristic. First we impute missing observations using the
predictive means. We then use the estimated average function to pick a promising candidate x by
optimizing EI. Conditioned on x, we then choose the task that yields the highest single-task expected
improvement.
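The averaging in Eq. (8) needs only the per-task predictive moments from the multi-task GP; a small sketch (inputs here are hypothetical):

```python
import numpy as np

def average_objective_moments(task_means, task_cov):
    """Predictive mean and variance of the k-fold average objective, Eq. (8).

    task_means: (k,) per-task predictive means mu(x, t),
    task_cov:   (k, k) cross-task predictive covariances Sigma(x, x, t, t').
    """
    task_means = np.asarray(task_means, dtype=float)
    task_cov = np.asarray(task_cov, dtype=float)
    k = task_means.size
    avg_mean = float(np.mean(task_means))          # (1/k) sum_t mu(x, t)
    avg_var = float(np.sum(task_cov)) / k ** 2     # (1/k^2) sum_{t,t'} Sigma
    return avg_mean, avg_var

m, v = average_objective_moments([1.0, 3.0], [[1.0, 0.5], [0.5, 1.0]])
print(m, v)  # 2.0 0.75
```

Note that if all entries of the cross-task covariance are equal, the average objective has the same variance as a single task, matching the observation above that evaluating perfectly correlated tasks adds no information.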
The problem of minimizing the average error over multiple tasks has been considered in [20], where
they applied Bayesian optimization in order to tune a single model on multiple datasets. Their
approach is to project each function to a joint latent space and then iteratively visit each dataset
in turn. Another approach can be found in [3], where additional task-specific features are used in
conjunction with the inputs x to make predictions about each task.
3.3 A Principled Multi-Task Acquisition Function
Rather than transferring knowledge from an already completed search on a related task to bootstrap
a new one, a more desirable strategy would have the optimization routine dynamically query the
related, possibly significantly cheaper task. Intuitively, if two tasks are closely related, then evaluating a cheaper one can reveal information and reduce uncertainty about the location of the minimum
on the more expensive task. A clever strategy may, for example, perform low cost exploration of a
promising location on the cheaper task before risking an evaluation of the expensive task. In this
section we develop an acquisition function for such a dynamic multi-task strategy which specifically
takes noisy estimates of cost into account based on the entropy search strategy.
Although the EI criterion is intuitive and effective in the single task case, it does not directly generalize to the multi-task case. However, entropy search does translate naturally to the multi-task
problem. In this setting we have observation pairs from multiple tasks, {x_n^t, y_n^t}_{n=1}^N, and we wish
(a) Uncorrelated functions
(b) Correlated functions
(c) Correlated functions scaled by cost
Figure 2: A visualization of the multi-task information gain per unit cost acquisition function. In each figure,
the objective is to find the minimum of the solid blue function. The green function is an auxiliary objective
function. In the bottom of each figure are lines indicating the expected information gain with regard to the
primary objective function. The green dashed line shows the information gain about the primary objective that
results from evaluating the auxiliary objective function. Figure 2a shows two sampled functions from a GP that
are uncorrelated. Evaluating the primary objective gains information, but evaluating the auxiliary does not. In
Figure 2b we see that with two strongly correlated functions, not only do observations on either task reduce
uncertainty about the other, but observations from the auxiliary task acquire information about the primary task.
Finally, in 2c we assume that the primary objective is three times more expensive than the auxiliary task and
thus evaluating the related task gives more information gain per unit cost.
to pick the candidate x^t that maximally reduces the entropy of Pmin for the primary task, which we
take to be t = 1. Naturally, Pmin evaluates to zero for x^{t>1}. However, we can evaluate P^y_min for
y^{t>1}, and if the auxiliary task is related to the primary task, P^y_min will change from the base distribution and H(Pmin) − H(P^y_min) will be positive. Through reducing uncertainty about f, evaluating an
observation on a related auxiliary task can reduce the entropy of Pmin on the primary task of interest.
However, observe that evaluating a point on a related task can never reveal more information than
evaluating the same point on the task of interest. Thus, the above strategy would never choose to
evaluate a related task. Nevertheless, when cost is taken into account, the auxiliary task may convey
more information per unit cost. Thus we translate the objective from Equation (7) to instead reflect
the information gain per unit cost of evaluating a candidate point,
$$ a_{IG}(x^t) = \iint \frac{H[P_{\min}] - H[P_{\min}^{y}]}{c_t(x)} \, p(y \mid f) \, p(f \mid x^t) \, dy \, df, \qquad (9) $$
where c_t(x), c_t : X → R+, is the real-valued cost of evaluating task t at x. Although we don't
know this cost function in advance, we can estimate it similarly to the task functions, f(x^t), using
the same multi-task GP machinery to model log c_t(x).
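One common way to make the information-gain-per-cost objective computable is to discretize the candidate set and approximate Pmin by sampling functions from the GP posterior, counting which candidate attains the minimum in each sample. The sketch below assumes such posterior samples are already available; it is an illustrative approximation under those assumptions, not the authors' exact implementation.

```python
import numpy as np

def pmin_entropy(f_samples):
    """Entropy of Pmin over a discretized candidate set, estimated by
    counting which candidate attains the minimum in each joint posterior
    sample (rows: samples, columns: candidate points)."""
    winners = np.argmin(f_samples, axis=1)
    counts = np.bincount(winners, minlength=f_samples.shape[1])
    p = counts / counts.sum()
    p = p[p > 0]
    return float(-np.sum(p * np.log(p)))

def info_gain_per_cost(base_entropy, fantasy_entropies, cost):
    """Expected entropy reduction of Pmin per unit cost, in the spirit of
    Eq. (9): average over fantasized outcomes y of the candidate
    evaluation, then divide by the (possibly estimated) cost."""
    return (base_entropy - float(np.mean(fantasy_entropies))) / cost
```

In practice the fantasy entropies come from re-conditioning the GP on each hypothetical observation y drawn from the predictive distribution at the candidate, which is omitted here.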
Figure 2 provides a visualization of this acquisition function, using a two task example. It shows
how selecting a point on a related auxiliary task can reduce uncertainty about the location of the
minimum on the primary task of interest (blue solid line). In this paper, we assume that all the
candidate points for which we compute aIG come from a fixed subset. Following [18], we pick these
candidates by taking the top C points according to the EI criterion on the primary task of interest.
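The candidate pre-selection step above can be sketched directly, assuming the GP posterior mean and standard deviation at each candidate are already computed (the function name and interface here are illustrative, not from the paper's code):

```python
import math
import numpy as np

def top_ei_candidates(mu, sigma, best, C):
    """Rank candidates by expected improvement (for minimization) on the
    primary task and keep the top C indices.
    mu, sigma: GP posterior mean/std at each candidate; best: incumbent value."""
    mu = np.asarray(mu, float)
    sigma = np.maximum(np.asarray(sigma, float), 1e-12)
    z = (best - mu) / sigma
    cdf = np.array([0.5 * (1.0 + math.erf(v / math.sqrt(2.0))) for v in z])
    pdf = np.exp(-0.5 * z ** 2) / math.sqrt(2.0 * math.pi)
    ei = sigma * (z * cdf + pdf)  # closed-form EI under a Gaussian posterior
    return np.argsort(-ei)[:C]
```

Restricting the entropy computation to this fixed subset keeps the acquisition tractable while focusing on regions already promising under EI.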
4 Empirical Analyses
4.1 Addressing the Cold Start Problem
Here we compare Bayesian optimization with no initial information to the case where we can leverage results from an already completed optimization on a related task. In each classification experiment the target of Bayesian optimization is the error on a held out validation set. Further details on
these experiments can be found in the supplementary material.
Branin-Hoo The Branin-Hoo function is a common benchmark for optimization techniques [17]
that is defined over a bounded set on R^2. As a related task we consider a shifted Branin-Hoo where
the function is translated by 10% along either axis. We used Bayesian optimization to find the
minimum of the original function and then added the shifted function as an additional task.
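For concreteness, the benchmark and its translated variant can be written down directly. The shift size below (10% of each axis range, i.e. 1.5 units per coordinate on the usual domain [-5, 10] × [0, 15]) is our reading of "translated by 10% along either axis" and is an assumption, not stated explicitly above.

```python
import numpy as np

def branin(x1, x2):
    """Standard Branin-Hoo benchmark, defined on [-5, 10] x [0, 15]."""
    a, b, c = 1.0, 5.1 / (4.0 * np.pi ** 2), 5.0 / np.pi
    r, s, t = 6.0, 10.0, 1.0 / (8.0 * np.pi)
    return a * (x2 - b * x1 ** 2 + c * x1 - r) ** 2 + s * (1.0 - t) * np.cos(x1) + s

def shifted_branin(x1, x2, frac=0.10):
    """Related task: Branin-Hoo translated by frac of each axis range
    (1.5 units per coordinate for frac = 0.10)."""
    return branin(x1 - frac * 15.0, x2 - frac * 15.0)
```

The global minimum value of the original function is approximately 0.3979, attained at (−π, 12.275), (π, 2.275), and (9.42478, 2.475); the shifted task has the same minima translated accordingly.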
Logistic regression We optimize four hyperparameters of logistic regression (LR) on the MNIST
dataset using 10000 validation examples. We assume that we have already completed 50 iterations
(a) Shifted Branin-Hoo. (b) CNN on SVHN. (c) CNN on STL-10. (d) LR on MNIST. (e) SVHN ACE. (f) STL-10 ACE.
Figure 3: (a)-(d) Validation error per function evaluation. (e),(f) ACE over function evaluations.
of an optimization of the same model on the related USPS digits task. The USPS data is only 1/6
the size of MNIST and each image contains 16 × 16 pixels, so it is considerably cheaper to evaluate.
Convolutional neural networks on pixels We applied convolutional neural networks1 (CNNs)
to the Street View House Numbers (SVHN) [21] dataset and bootstrapped from a previous run of
Bayesian optimization using the same model trained on CIFAR-10 [22, 6]. At the time, this model
represented the state-of-the-art. The SVHN dataset has the same input dimension as CIFAR-10,
but is 10 times larger. We used 6000 held-out examples for validation. Additionally, we consider
training on 1/10th of the SVHN dataset to warm-start the full optimization. The best settings yielded
4.77 ± 0.22% error, which is comparable to domain experts using non-dropout CNNs [23].
Convolutional networks on k-means features As an extension to the previous CNN experiment,
we incorporate a more sophisticated pipeline in order to learn a model for the STL-10 dataset [24].
This dataset consists of images with 96 × 96 pixels, and each training set has only 1000 images.
Overfitting is a significant challenge for this dataset, so we utilize a CNN2 on top of k-means features
in a similar approach to [25], as well as dropout [26]. We bootstrapped Bayesian optimization using
the same model trained on CIFAR-10, which had achieved 14.2% test error on that dataset. During
the optimization, we used the first fold for training, and the remaining 4000 points from the other
folds for validation. We then trained separate networks on each fold using the best hyperparameter
settings found by Bayesian optimization. Following reporting conventions for this dataset, the model
achieved 70.1 ± 0.6% test-set accuracy, exceeding the previous state-of-the-art of 64.5 ± 1% [27].
The results of these experiments are shown in Figure 3(a)-(d). In each case, the multi-task optimization finds a better function value much more quickly than single-task optimization. Clearly there is
information in the related tasks that can be exploited. To better understand the behaviour of the different methods, we plot the average cumulative error (ACE), i.e., the average of all function values
seen up to a given time, in Figure 3(e),(f). The single-task method wastes many more evaluations
exploring poor hyperparameter settings. In the multi-task case, this exploration has already been
performed and more evaluations are spent on exploitation.
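The ACE curve described above is straightforward to compute from the sequence of observed function values; a minimal sketch:

```python
import numpy as np

def average_cumulative_error(values):
    """ACE curve: entry t is the mean of all function values observed
    up to and including evaluation t."""
    v = np.asarray(values, float)
    return np.cumsum(v) / np.arange(1, len(v) + 1)
```

A flatter ACE curve indicates fewer evaluations wasted on poor hyperparameter settings, which is the behaviour the multi-task method exhibits in Figure 3(e),(f).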
As a baseline (the dashed black line), we took the best model from the first task and applied it
directly to the task of interest. For example, in the CNN experiments this involved taking the best
settings from CIFAR-10. This "direct transfer" performed well in some cases and poorly in others.
In general, we have found that the best settings for one task are usually not optimal for the other.
4.2 Fast Cross-Validation
k-fold cross-validation is a widely used technique for estimating the generalization error of machine
learning models, but requires retraining a model k times. This can be prohibitively expensive with
complex models and large datasets. It is reasonable to expect, however, that if the data are randomly
1. Using the Cuda Convnet package: https://code.google.com/p/cuda-convnet
2. Using the Deepnet package: https://github.com/nitishsrivastava/deepnet
Figure 4: (a) PMF cross-validation error per function evaluation on Movielens-100k. (b) Lowest error observed for each fold per function evaluation for a single run.
partitioned among folds that the errors for each fold will be highly correlated. For a given set of
hyperparameters, we can therefore expect diminishing returns in estimating the average error for
each subsequently evaluated fold. With a good GP model, we can very likely obtain a high quality
estimate by evaluating just one fold per setting. In this experiment, we apply the algorithm described
in Section 3.2 in order to dynamically determine which points/folds to query.
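The diminishing-returns intuition can be made concrete with standard Gaussian conditioning: given a joint belief over the k per-fold errors at one hyperparameter setting (its mean and covariance, as supplied by a multi-task GP), observing a subset of folds updates the estimate of the rest. This is a simplified sketch of that inference step under those assumptions, not the full acquisition rule of Section 3.2.

```python
import numpy as np

def condition_folds(mu, cov, observed_idx, y_obs):
    """Condition a Gaussian belief over k per-fold errors on observed folds.
    mu, cov: prior mean/covariance across folds; observed_idx, y_obs:
    indices and values of the folds actually evaluated."""
    mu = np.asarray(mu, float)
    obs = np.asarray(observed_idx, dtype=int)
    un = np.array([i for i in range(len(mu)) if i not in set(observed_idx)],
                  dtype=int)
    full_mu = mu.copy()
    full_mu[obs] = y_obs
    if len(un) == 0:
        return full_mu, np.zeros((0, 0))
    K_oo = cov[np.ix_(obs, obs)]
    K_uo = cov[np.ix_(un, obs)]
    K_uu = cov[np.ix_(un, un)]
    sol = np.linalg.solve(K_oo, np.asarray(y_obs, float) - mu[obs])
    full_mu[un] = mu[un] + K_uo @ sol           # posterior mean of unseen folds
    cov_un = K_uu - K_uo @ np.linalg.solve(K_oo, K_uo.T)  # posterior covariance
    return full_mu, cov_un
```

The estimated average cross-validation error is then full_mu.mean(); when folds are strongly correlated, a single observed fold already pins down the average, and a simple selection rule is to query the fold with the largest remaining marginal variance.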
We demonstrate this procedure on the task of training probabilistic matrix factorization (PMF) models for recommender systems [28]. The hyperparameters of the PMF model are the learning rate,
an ℓ2 regularizer, the matrix rank, and the number of epochs. We use 5-fold cross-validation on the
Movielens-100k dataset [29]. In Figure 4(a) we show the best error obtained after a given number
of function evaluations as measured by the number of folds queried, averaged over 50 optimization
runs. For the multi-task version, we show both the true average cross-validation error, as well as
the estimated error according to the GP. In the beginning, the GP fit is highly uncertain, so the optimization exhibits some noise. As the GP model becomes more certain however, the true error and
the GP estimate converge and the search proceeds rapidly compared to the single-task counterpart.
In Figure 4(b), we show the best observed error after a given number of function evaluations on a
randomly selected run. For a particular fold, the error cannot improve unless that fold is directly
queried. The algorithm makes nontrivial decisions in terms of which fold to query, steadily reducing
the average error.
4.3 Using Small Datasets to Quickly Optimize for Large Datasets
As a final empirical analysis, we evaluate the dynamic multi-task entropy search strategy developed
in Section 3.3 on two hyperparameter tuning problems. We treat the cost, ct (x), of a function
evaluation as being the real running time of training and evaluating the machine learning algorithm
with hyperparameter settings x on task t. We assume no prior knowledge about either task, their
correlation, or their respective cost, but instead estimate these as the optimization progresses. In
both tasks we compare using our multi-task entropy search strategy (MTBO) to optimizing the task
of interest independently (STBO).
First, we revisit the logistic regression problem from Section 4.1 (Figure 3(d)) using the same experimental protocol, but rather than assuming that there is a completed optimization of the USPS
data, the Bayesian optimization routine can instead dynamically query USPS as needed. Figure 5(a),
shows the average time taken by either strategy to reach the values along the blue line. We see that
MTBO reaches the minimum value on the validation set within 40 minutes, while STBO reaches it
in 100 minutes. Figures 5(b) and 5(c) show that MTBO reaches better values significantly faster by
spending more function evaluations on the related, but relatively cheaper task.
Finally we evaluate the very expensive problem of optimizing the hyperparameters of online Latent
Dirichlet Allocation [30] on a large corpus of 200,000 documents. Snoek et al. [6] demonstrated
that on this problem, Bayesian optimization could find better hyperparameters in significantly less
time than the grid search conducted by the authors. We repeat this experiment here using the exact
same grid as [6] and [30] but provide an auxiliary task involving a subset of 50,000 documents
and 25 topics on the same grid. Each function evaluation on the large corpus took an average
of 5.8 hours to evaluate while the smaller corpus took 2.5 hours. We performed our multi-task
Bayesian optimization restricted to the same grid and compare to the results of the standard Bayesian
optimization of [6] (the GP EI MCMC algorithm). In Figure 5d, we see that our MTBO strategy
finds the minimum in approximately 6 days of computation while the STBO strategy takes 10 days.
Our algorithm saves almost 4 days of computation by being able to dynamically explore the cheaper
alternative task. We see in 5(f) that particularly early in the optimization, the algorithm explores the
cheaper task to gather information about the expensive one.
(a) LR Time Taken. (b) LR Time. (c) LR Fn Evaluations. (d) Online LDA Time Taken. (e) Online LDA Time. (f) Online LDA Fn Evaluations.
Figure 5: (a),(d) Time taken to reach a given validation error. (b),(e) Validation error as a function of time spent training the models. (c),(f) Validation error over the number of function evaluations.
5 Conclusion
As datasets grow larger, and models become more expensive, it has become necessary to develop new search strategies in order to find optimal hyperparameter settings as quickly as possible.
Bayesian optimization has emerged as a powerful framework for guiding this search. What the framework currently lacks, however, is a principled way to leverage prior knowledge gained from searches
over similar domains. There is a plethora of information that can be carried over from related tasks,
and taking advantage of this can result in substantial cost-savings by allowing the search to focus on
regions of the hyperparameter space that are already known to be promising.
In this paper we introduced multi-task Bayesian optimization as a method to address this issue.
We showed how multi-task GPs can be utilized within the existing framework in order to capture
correlation between related tasks. Using this technique, we demonstrated that one can bootstrap
previous searches, resulting in significantly faster optimization.
We further showed how this idea can be extended to solving multiple problems simultaneously. The
first application we considered was the problem of optimizing an average score over several related
tasks, motivated by the problem of k-fold cross-validation. Our fast cross-validation procedure
obviates the need to evaluate each fold per hyperparameter query and therefore eliminates redundant
and costly function evaluations.
The next application we considered employed a cost-sensitive version of the entropy search acquisition function in order to utilize a cheap auxiliary task in the minimization of an expensive primary
task. Our algorithm dynamically chooses which task to evaluate, and we showed that it can substantially reduce the amount of time required to find good hyperparameter settings. This technique
should prove to be useful in tuning sophisticated models on extremely large datasets.
As future work, we would like to extend this framework to multiple architectures. For example,
we might want to train a one-layer neural network on one task, and a two-layer neural network on
another task. This provides another avenue for utilizing one task to bootstrap another.
Acknowledgements
The authors would like to thank Nitish Srivastava for providing help with the Deepnet package,
Robert Gens for providing feature extraction code, and Richard Zemel for helpful discussions.
Jasper Snoek was supported by a grant from Google. This work was funded by DARPA Young
Faculty Award N66001-12-1-4219 and an Amazon AWS in Research grant.
References
[1] Eric Brochu, T Brochu, and Nando de Freitas. A Bayesian interactive optimization approach to procedural
animation design. In ACM SIGGRAPH/Eurographics Symposium on Computer Animation, 2010.
8
[2] Niranjan Srinivas, Andreas Krause, Sham Kakade, and Matthias Seeger. Gaussian process optimization
in the bandit setting: no regret and experimental design. In ICML, 2010.
[3] Frank Hutter, Holger H. Hoos, and Kevin Leyton-Brown. Sequential model-based optimization for general algorithm configuration. In Learning and Intelligent Optimization 5, 2011.
[4] M. A. Osborne, R. Garnett, and S. J. Roberts. Gaussian processes for global optimization. In LION, 2009.
[5] James Bergstra, Rémi Bardenet, Yoshua Bengio, and Balázs Kégl. Algorithms for hyper-parameter optimization. In NIPS, 2011.
[6] Jasper Snoek, Hugo Larochelle, and Ryan P. Adams. Practical Bayesian optimization of machine learning
algorithms. In NIPS, 2012.
[7] James Bergstra, Daniel Yamins, and David Cox. Making a science of model search: hyperparameter
optimization in hundreds of dimensions for vision architectures. In ICML, 2013.
[8] Carl E. Rasmussen and Christopher Williams. Gaussian Processes for Machine Learning. MIT Press,
2006.
[9] Iain Murray and Ryan P. Adams. Slice sampling covariance hyperparameters of latent Gaussian models.
In NIPS. 2010.
[10] Andre G. Journel and Charles J. Huijbregts. Mining Geostatistics. Academic press London, 1978.
[11] Pierre Goovaerts. Geostatistics for natural resources evaluation. Oxford University Press, 1997.
[12] Matthias Seeger, Yee-Whye Teh, and Michael I. Jordan. Semiparametric latent factor models. In AISTATS,
2005.
[13] Edwin V. Bonilla, Kian Ming A. Chai, and Christopher K. I. Williams. Multi-task Gaussian process
prediction. In NIPS, 2008.
[14] Mauricio A Alvarez and Neil D Lawrence. Computationally efficient convolved multiple output Gaussian
processes. Journal of Machine Learning Research, 12, 2011.
[15] Jonas Mockus, Vytautas Tiesis, and Antanas Zilinskas. The application of Bayesian methods for seeking
the extremum. Towards Global Optimization, 2, 1978.
[16] Matthew Hoffman, Eric Brochu, and Nando de Freitas. Portfolio allocation for Bayesian optimization. In
UAI, 2011.
[17] Donald R. Jones. A taxonomy of global optimization methods based on response surfaces. Journal of
Global Optimization, 21, 2001.
[18] Philipp Hennig and Christian J. Schuler. Entropy search for information-efficient global optimization.
Journal of Machine Learning Research, 13, 2012.
[19] Andreas Krause and Cheng Soon Ong. Contextual gaussian process bandit optimization. In NIPS, 2011.
[20] Rémi Bardenet, Mátyás Brendel, Balázs Kégl, and Michèle Sebag. Collaborative hyperparameter tuning.
In ICML, 2013.
[21] Yuval Netzer, Tao Wang, Adam Coates, Alessandro Bissacco, Bo Wu, and Andrew Y Ng. Reading
digits in natural images with unsupervised feature learning. In NIPS Workshop on Deep Learning and
Unsupervised Feature Learning, 2011.
[22] Alex Krizhevsky. Learning multiple layers of features from tiny images. Technical report, Department of
Computer Science, University of Toronto, 2009.
[23] Pierre Sermanet, Soumith Chintala, and Yann LeCun. Convolutional neural networks applied to house
numbers digit classification. In ICPR, 2012.
[24] Adam Coates, Honglak Lee, and Andrew Y Ng. An analysis of single-layer networks in unsupervised
feature learning. AISTATS, 2011.
[25] Robert Gens and Pedro Domingos. Discriminative learning of sum-product networks. In NIPS, 2012.
[26] Geoffrey E. Hinton, Nitish Srivastava, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. Improving neural networks by preventing co-adaptation of feature detectors. arXiv preprint, 2012.
[27] Liefeng Bo, Xiaofeng Ren, and Dieter Fox. Unsupervised feature learning for RGB-D based object
recognition. In ISER, 2012.
[28] Ruslan Salakhutdinov and Andriy Mnih. Probabilistic matrix factorization. NIPS, 2008.
[29] Jonathan L. Herlocker, Joseph A. Konstan, Al Borchers, and John Riedl. An algorithmic framework for
performing collaborative filtering. In ACM SIGIR Conference on Research and Development in Information Retrieval, 1999.
[30] Matthew Hoffman, David M. Blei, and Francis Bach. Online learning for latent Dirichlet allocation. In
NIPS, 2010.
Efficient Optimization for Sparse Gaussian Process Regression
Yanshuai Cao (1), Marcus A. Brubaker (2), David J. Fleet (1), Aaron Hertzmann (1,3)
(1) Department of Computer Science, University of Toronto; (2) TTI-Chicago; (3) Adobe Research
Abstract
We propose an efficient optimization algorithm for selecting a subset of training data to induce sparsity for Gaussian process regression. The algorithm estimates an inducing set and the hyperparameters using a single objective, either the
marginal likelihood or a variational free energy. The space and time complexity
are linear in training set size, and the algorithm can be applied to large regression
problems on discrete or continuous domains. Empirical evaluation shows state-of-the-art performance in discrete cases and competitive results in the continuous case.
1 Introduction
Gaussian Process (GP) learning and inference are computationally prohibitive with large datasets,
having time complexities O(n^3) and O(n^2), where n is the number of training points. Sparsification
algorithms exist that scale linearly in the training set size (see [10] for a review). They construct a
low-rank approximation to the GP covariance matrix over the full dataset using a small set of inducing points. Some approaches select inducing points from training points [7, 8, 12, 13]. But these
methods select the inducing points using ad hoc criteria; i.e., they use different objective functions to
select inducing points and to optimize GP hyperparameters. More powerful sparsification methods
[14, 15, 16] use a single objective function and allow inducing points to move freely over the input
domain which are learned via gradient descent. This continuous relaxation is not feasible, however,
if the input domain is discrete, or if the kernel function is not differentiable in the input variables.
As a result, there are problems in myriad domains, like bio-informatics, linguistics, and computer
vision, where current sparse GP regression methods are inapplicable or ineffective.
We introduce an efficient sparsification algorithm for GP regression. The method optimizes a single objective for the joint selection of inducing points and GP hyperparameters. Notably, it optimizes either the marginal likelihood or a variational free energy [15], exploiting the QR factorization of a partial Cholesky decomposition to efficiently approximate the covariance matrix. Because it chooses inducing points from the training data, it is applicable to problems on discrete or continuous input domains. To our knowledge, it is the first method for selecting discrete inducing points and hyperparameters that optimizes a single objective, with linear space and time complexity. It is shown to outperform other methods on discrete datasets from bio-informatics and computer vision. On continuous domains it is competitive with the Sparse Pseudo-input GP (SPGP) [14].
1.1 Previous Work
Efficient state-of-the-art sparsification methods are O(m^2 n) in time and O(mn) in space for learning. They compute the predictive mean and variance in time O(m) and O(m^2). Methods based on continuous relaxation, when applicable, entail learning O(md) continuous parameters, where d is the input dimension. In the discrete case, combinatorial optimization is required to select the inducing points, and this is, in general, intractable. Existing discrete sparsification methods therefore use other criteria to greedily select inducing points [7, 8, 12, 13]. Although their criteria are justified, each in their own way (e.g., [8, 12] take an information-theoretic perspective), they are greedy and do not use the same objective to select inducing points and to estimate GP hyperparameters.
The variational formulation of Titsias [15] treats inducing points as variational parameters, and gives a unified objective for discrete and continuous inducing point models. In the continuous case, it uses gradient-based optimization to find inducing points and hyperparameters. In the discrete case, our method optimizes the same variational objective of Titsias [15], but is a significant improvement over greedy forward selection using the variational objective as the selection criterion, or some other criterion. In particular, given the cost of evaluating the variational objective on all training points, Titsias [15] evaluates the objective function on a small random subset of candidates at each iteration, and then selects the best element from the subset. This approximation is often slow to achieve good results, as we explain and demonstrate below in Section 4.1. The approach in [15] also uses greedy forward selection, which provides no way to refine the inducing set after hyperparameter optimization, except to discard all previous inducing points and restart selection. Hence, the objective is not guaranteed to decrease after each restart. By comparison, our formulation considers all candidates at each step, and revisiting previous selections is efficient, and guaranteed to decrease the objective or terminate.
Our low-rank decomposition is inspired by the Cholesky with Side Information (CSI) algorithm for kernel machines [1]. We extend that approach to GP regression. First, we alter the form of the low-rank matrix factorization in CSI to be suitable for GP regression with a full-rank diagonal term in the covariance. Second, the CSI algorithm selects inducing points in a single greedy pass using an approximate objective. We propose an iterative optimization algorithm that swaps previously selected points with new candidates that are guaranteed to lower the objective. Finally, we perform inducing set selection jointly with gradient-based hyperparameter estimation instead of the grid search in CSI. Our algorithm selects inducing points in a principled fashion, optimizing the variational free energy or the log likelihood. It does so with time complexity O(m^2 n), and in practice provides an improved quality-speed trade-off over other discrete selection methods.
2 Sparse GP Regression
Let y ∈ R be the noisy output of a function, f, of input x. Let X = {x_i}_{i=1}^n denote n training inputs, each belonging to input space D, which is not necessarily Euclidean. Let y ∈ R^n denote the corresponding vector of training outputs. Under a full zero-mean GP, with the covariance function

    E[y_i y_j] = κ(x_i, x_j) + σ^2 1[i = j] ,                                  (1)

where κ is the kernel function, 1[·] is the usual indicator function, and σ^2 is the variance of the observation noise, the predictive distribution over the output f⋆ at a test point x⋆ is normally distributed. The mean and variance of the predictive distribution can be expressed as

    μ⋆ = κ(x⋆)^T (K + σ^2 I_n)^{-1} y
    v⋆ = κ(x⋆, x⋆) − κ(x⋆)^T (K + σ^2 I_n)^{-1} κ(x⋆)

where I_n is the n×n identity matrix, K is the kernel matrix whose ij-th element is κ(x_i, x_j), and κ(x⋆) is the column vector whose i-th element is κ(x⋆, x_i).
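To make the predictive equations concrete, here is a minimal numpy sketch (our illustration, not the paper's code), using a squared-exponential kernel on 1D inputs as an example:

```python
import numpy as np

def rbf(a, b, ell=1.0):
    # Squared-exponential kernel between 1D point sets a and b.
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell ** 2)

def gp_predict(X, y, x_star, sigma2=0.1):
    """Full-GP predictive mean and variance at a single test point x_star:
    mu* = kappa(x*)^T (K + sigma^2 I_n)^{-1} y, and the matching v*."""
    n = len(X)
    K = rbf(X, X)
    k_star = rbf(X, np.array([x_star]))[:, 0]   # column vector kappa(x*)
    A = K + sigma2 * np.eye(n)                  # the O(n^3) solve is the bottleneck
    mu = k_star @ np.linalg.solve(A, y)
    v = rbf(np.array([x_star]), np.array([x_star]))[0, 0] \
        - k_star @ np.linalg.solve(A, k_star)
    return mu, v
```

With a single training point X = [0], y = [1], and σ^2 = 1, this gives μ⋆ = 0.5 and v⋆ = 0.5 at x⋆ = 0, as can be checked by hand; the O(n^3) solve in this direct implementation is exactly the cost that sparsification avoids.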
The hyperparameters of a GP, denoted θ, comprise the parameters of the kernel function and the noise variance σ^2. The natural objective for learning θ is the negative marginal log likelihood (NMLL) of the training data, −log P(y | X, θ), given up to a constant by

    E_full(θ) = ( y^T (K + σ^2 I_n)^{-1} y + log |K + σ^2 I_n| ) / 2 .          (2)
The computational bottleneck lies in the O(n^2) storage and O(n^3) inversion of the full covariance matrix, K + σ^2 I_n. To lower this cost with a sparse approximation, Csató and Opper [5] and Seeger et al. [12] proposed the Projected Process (PP) model, wherein a set of m inducing points is used to construct a low-rank approximation of the kernel matrix. In the discrete case, where the inducing points are a subset of the training data, with indices I ⊂ {1, 2, ..., n}, this approach amounts to replacing the kernel matrix K with the following Nyström approximation [11]:

    K ≈ K̂ = K[:, I] K[I, I]^{-1} K[I, :]                                       (3)
where K[:, I] denotes the sub-matrix of K comprising the columns indexed by I, and K[I, I] is the sub-matrix of K comprising the rows and columns indexed by I. We assume the rank of K is m or higher, so we can always find such rank-m approximations. The PP NMLL is then algebraically equivalent to replacing K with K̂ in Eq. (2), i.e.,

    E(θ, I) = ( E^D(θ, I) + E^C(θ, I) ) / 2 ,                                   (4)

with data term E^D(θ, I) = y^T (K̂ + σ^2 I_n)^{-1} y, and model complexity term E^C(θ, I) = log |K̂ + σ^2 I_n|.
The computational cost reduction from O(n^3) to O(m^2 n) associated with the new likelihood is achieved by applying the Woodbury inversion identity to E^D(θ, I) and E^C(θ, I). The objective in (4) can be viewed as an approximate log likelihood for the full GP model, or as the exact log likelihood for an approximate model, called the Deterministic Training Conditional [10].
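The Nyström approximation of Eq. (3) is simple to form without ever inverting the full n×n matrix; only the m×m block K[I, I] is factored, which is the source of the O(n^3) → O(m^2 n) saving. A hedged numpy sketch (ours, not the authors' code):

```python
import numpy as np

def nystrom(K, I):
    # Nystrom approximation K_hat = K[:, I] K[I, I]^{-1} K[I, :]   (Eq. 3).
    # Only the small m x m block K[I, I] is ever factored.
    return K[:, I] @ np.linalg.solve(K[np.ix_(I, I)], K[I, :])

# Example: an RBF kernel matrix on 50 random 1D inputs, 3 inducing points.
rng = np.random.default_rng(0)
X = rng.uniform(-3.0, 3.0, size=50)
K = np.exp(-0.5 * (X[:, None] - X[None, :]) ** 2)
K_hat = nystrom(K, [3, 17, 41])
```

Two properties worth noting: the approximation is exact on the inducing rows and columns, and the residual K − K̂ is positive semi-definite, which is what makes the variational trace regularizer below well-defined.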
The same PP model can also be obtained by a variational argument, as in [15], for which the variational free energy objective can be shown to be Eq. (4) plus one extra term; i.e.,

    F(θ, I) = ( E^D(θ, I) + E^C(θ, I) + E^V(θ, I) ) / 2 ,                       (5)

where E^V(θ, I) = σ^{-2} tr(K − K̂) arises from the variational formulation. It effectively regularizes the trace norm of the approximation residual of the covariance matrix. The kernel machine of [1] also uses a regularizer of the form λ tr(K − K̂), however λ is a free parameter that is set manually.
3 Efficient optimization
We now outline our algorithm for optimizing the variational free energy (5) to select the inducing set I and the hyperparameters θ. (The negative log-likelihood (4) is similarly minimized by simply discarding the E^V term.) The algorithm is a form of hybrid coordinate descent that alternates between discrete optimization of inducing points, and continuous optimization of the hyperparameters. We first describe the algorithm to select inducing points, and then discuss continuous hyperparameter optimization and termination criteria in Sec. 3.4.

Finding the optimal inducing set is a combinatorial problem; global optimization is intractable. Instead, the inducing set is initialized to a random subset of the training data, which is then refined by a fixed number of swap updates at each iteration.¹ In a single swap update, a randomly chosen inducing point is considered for replacement. If swapping does not improve the objective, then the original point is retained. There are n − m potential replacements for each swap update; the key is to efficiently determine which will maximally improve the objective. With the techniques described below, the computation time required to approximately evaluate all possible candidates and swap an inducing point is O(mn). Swapping all inducing points once takes O(m^2 n) time.
3.1 Factored representation
To support efficient evaluation of the objective and swapping, we use a factored representation of the kernel matrix. Given an inducing set I of k points, for any k ≤ m, the low-rank Nyström approximation to the kernel matrix (Eq. 3) can be expressed in terms of a partial Cholesky factorization:

    K̂ = K[:, I] K[I, I]^{-1} K[I, :] = L(I) L(I)^T ,                           (6)

where L(I) ∈ R^{n×k} is, up to a permutation of rows, a lower trapezoidal matrix (i.e., it has a k×k lower triangular top block, again up to row permutation). The derivation of Eq. 6 follows from Proposition 1 in [1], and the fact that, given the ordered sequence of pivots I, the partial Cholesky factorization is unique.
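The equivalence in Eq. (6) can be checked numerically. Below is a hedged numpy sketch (a standard pivoted partial Cholesky, not the paper's implementation) whose factor L satisfies L L^T = K[:, I] K[I, I]^{-1} K[I, :]:

```python
import numpy as np

def partial_cholesky(K, pivots):
    """Partial Cholesky factor L (n x k) of PSD matrix K for an ordered pivot
    list, so that L @ L.T equals the Nystrom approximation on those pivots."""
    n, k = K.shape[0], len(pivots)
    L = np.zeros((n, k))
    for t, j in enumerate(pivots):
        col = K[:, j] - L @ L[j, :]       # only columns < t are nonzero so far
        L[:, t] = col / np.sqrt(col[j])   # col[j] is the positive pivot value
    return L

# Check the identity on a random PSD matrix.
rng = np.random.default_rng(1)
A = rng.normal(size=(30, 8))
K = A @ A.T + np.eye(30)
I = [2, 11, 25]
L = partial_cholesky(K, I)
K_nys = K[:, I] @ np.linalg.solve(K[np.ix_(I, I)], K[I, :])
```

Each column of L is computed from the current residual, so appending a column for a new pivot is exactly the rank-one update used by the swap algorithm below.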
Using this factorization and the Woodbury identities (dropping the dependence on θ and I for clarity), the terms of the negative marginal log-likelihood (4) and variational free energy (5) become

    E^D = σ^{-2} ( y^T y − y^T L (L^T L + σ^2 I)^{-1} L^T y )                   (7)
    E^C = log( (σ^2)^{n−k} |L^T L + σ^2 I| )                                    (8)
    E^V = σ^{-2} ( tr(K) − tr(L^T L) )                                          (9)

We can further simplify the data term by augmenting the factor matrix as L̃ = [L^T, σ I_k]^T, where I_k is the k×k identity matrix, and ỹ = [y^T, 0_k^T]^T is the y vector with k zeroes appended:

    E^D = σ^{-2} ( y^T y − ỹ^T L̃ (L̃^T L̃)^{-1} L̃^T ỹ )                       (10)
¹ The inducing set can be incrementally constructed, as in [1]; however, we found no benefit to this.
Now, let L̃ = QR be a QR factorization of L̃, where Q ∈ R^{(n+k)×k} has orthogonal columns and R ∈ R^{k×k} is invertible. The first two terms in the objective simplify further to

    E^D = σ^{-2} ( ||y||^2 − ||Q^T ỹ||^2 )                                     (11)
    E^C = (n − k) log(σ^2) + 2 log |R| .                                        (12)
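The QR route of Eqs. (11)-(12) can be verified against direct evaluation with the full covariance K̂ + σ^2 I = L L^T + σ^2 I. The following numpy sketch (our own small example, not the paper's code) checks that both routes give the same E^D and E^C:

```python
import numpy as np

def pp_terms_qr(L, y, sigma2):
    """E^D and E^C via the QR of the augmented factor (Eqs. 11-12):
    L_tilde = [L; sigma I_k], y_tilde = [y; 0_k]."""
    n, k = L.shape
    L_tilde = np.vstack([L, np.sqrt(sigma2) * np.eye(k)])
    y_tilde = np.concatenate([y, np.zeros(k)])
    Q, R = np.linalg.qr(L_tilde)
    E_D = (y @ y - np.sum((Q.T @ y_tilde) ** 2)) / sigma2
    E_C = (n - k) * np.log(sigma2) + 2.0 * np.sum(np.log(np.abs(np.diag(R))))
    return E_D, E_C

def pp_terms_direct(L, y, sigma2):
    # Direct O(n^3) evaluation with K_hat + sigma^2 I = L L^T + sigma^2 I.
    A = L @ L.T + sigma2 * np.eye(L.shape[0])
    return y @ np.linalg.solve(A, y), np.linalg.slogdet(A)[1]

rng = np.random.default_rng(3)
L = rng.normal(size=(25, 4))
y = rng.normal(size=25)
```

The agreement follows from the Woodbury identity for E^D and the matrix determinant lemma for E^C; the QR route touches only (n+k)×k matrices, which is where the O(m^2 n) complexity comes from.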
3.2 Factorization update
Here we present the mechanics of the swap update algorithm; see [3] for pseudo-code. Suppose we wish to swap inducing point i with candidate point j in I_m, the inducing set of size m. We first modify the factor matrices to remove point i from I_m, i.e., to downdate the factors. Then we update all the key terms using one step of Cholesky and QR factorization with the new point j.

Downdating to remove inducing point i requires that we shift the corresponding columns/rows in the factorization to the right-most columns of L̃, Q, R and to the last row of R. We can then simply discard these last columns and rows, and modify related quantities. When permuting the order of the inducing points, the underlying GP model is invariant, but the matrices in the factored representation are not. If needed, any two points in I_m can be permuted, and the Cholesky or QR factors can be updated in time O(mn). This is done with the efficient pivot permutation presented in the Appendix of [1], with minor modifications to account for the augmented form of L̃. In this way, downdating and removing i take O(mn) time, as does the updating with point j.
After downdating, we have factors L̃_{m−1}, Q_{m−1}, R_{m−1}, and inducing set I_{m−1}. To add j to I_{m−1}, and update the factors to rank m, one step of Cholesky factorization is performed with point j, for which, ideally, the new column to append to L̃ is formed as

    ℓ_m = (K − K̂_{m−1})[:, j] / sqrt( (K − K̂_{m−1})[j, j] )                    (13)

where K̂_{m−1} = L_{m−1} L_{m−1}^T. Then we set L̃_m = [L̃_{m−1}, ℓ̃_m], where ℓ̃_m is just ℓ_m augmented with σ ê_m = [0, 0, ..., σ, ..., 0, 0]^T. The final updates are Q_m = [Q_{m−1}, q_m], where q_m is given by Gram-Schmidt orthogonalization, q_m = ( (I − Q_{m−1} Q_{m−1}^T) ℓ̃_m ) / ||(I − Q_{m−1} Q_{m−1}^T) ℓ̃_m||, and R_m is updated from R_{m−1} so that L̃_m = Q_m R_m.
3.3 Evaluating candidates
Next we show how to select candidates for inclusion in the inducing set. We first derive the exact change in the objective due to adding an element to I_{m−1}. Later we will provide an approximation to this objective change that can be computed efficiently.

Given an inducing set I_{m−1}, and matrices L̃_{m−1}, Q_{m−1}, and R_{m−1}, we wish to evaluate the change in Eq. 5 for I_m = I_{m−1} ∪ {j}. That is, ΔF ≡ F(θ, I_{m−1}) − F(θ, I_m) = (ΔE^D + ΔE^C + ΔE^V)/2, where, based on the mechanics of the incremental updates above, one can show that

    ΔE^D = σ^{-2} ( ỹ^T (I − Q_{m−1} Q_{m−1}^T) ℓ̃_m )^2 / ||(I − Q_{m−1} Q_{m−1}^T) ℓ̃_m||^2    (14)
    ΔE^C = log σ^2 − log ||(I − Q_{m−1} Q_{m−1}^T) ℓ̃_m||^2                                      (15)
    ΔE^V = σ^{-2} ||ℓ_m||^2                                                                      (16)

This gives the exact decrease in the objective function after adding point j. For a single point this evaluation is O(mn), so to evaluate all n − m points would be O(mn^2).
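These closed-form changes can be checked numerically against brute-force recomputation of F before and after adding a point. The sketch below is our own toy construction (not the paper's code): the augmented column ℓ̃_m is represented by zero-padding ℓ_m, with the σ entry landing in a brand-new row that is automatically orthogonal to the old Q, which contributes the extra σ^2 inside the norms of Eqs. (14)-(15).

```python
import numpy as np

def partial_chol(K, pivots):
    # Partial Cholesky of PSD K on an ordered pivot list.
    L = np.zeros((K.shape[0], len(pivots)))
    for t, j in enumerate(pivots):
        col = K[:, j] - L @ L[j, :]
        L[:, t] = col / np.sqrt(col[j])
    return L

def free_energy(K, y, pivots, sigma2):
    # F = (E^D + E^C + E^V) / 2 via Eqs. (9), (11), (12).
    n, k = K.shape[0], len(pivots)
    L = partial_chol(K, pivots)
    Lt = np.vstack([L, np.sqrt(sigma2) * np.eye(k)])
    yt = np.concatenate([y, np.zeros(k)])
    Q, R = np.linalg.qr(Lt)
    E_D = (y @ y - np.sum((Q.T @ yt) ** 2)) / sigma2
    E_C = (n - k) * np.log(sigma2) + 2 * np.sum(np.log(np.abs(np.diag(R))))
    E_V = (np.trace(K) - np.sum(L * L)) / sigma2
    return (E_D + E_C + E_V) / 2

rng = np.random.default_rng(4)
n, sigma2 = 30, 0.5
X = rng.normal(size=n)
K = np.exp(-0.5 * (X[:, None] - X[None, :]) ** 2)
y = rng.normal(size=n)
I_old, j = [0, 5, 9], 17

k = len(I_old)
L = partial_chol(K, I_old)
Lt = np.vstack([L, np.sqrt(sigma2) * np.eye(k)])
yt = np.concatenate([y, np.zeros(k)])
Q, _ = np.linalg.qr(Lt)

# New Cholesky column for candidate j (Eq. 13); its augmented version puts
# the sigma entry in a new row, orthogonal to Q, adding sigma^2 to the norm.
resid = K - L @ L.T
ell = resid[:, j] / np.sqrt(resid[j, j])
ell_pad = np.concatenate([ell, np.zeros(k)])
p = ell_pad - Q @ (Q.T @ ell_pad)
norm2 = p @ p + sigma2

dE_D = (yt @ p) ** 2 / (sigma2 * norm2)     # Eq. (14)
dE_C = np.log(sigma2) - np.log(norm2)       # Eq. (15)
dE_V = (ell @ ell) / sigma2                 # Eq. (16)
delta_exact = (dE_D + dE_C + dE_V) / 2
delta_brute = free_energy(K, y, I_old, sigma2) - free_energy(K, y, I_old + [j], sigma2)
```

The two quantities agree to numerical precision, confirming that the incremental formulas price a candidate without refactoring anything.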
3.3.1 Fast approximate cost reduction
While O(mn^2) is prohibitive, computing the exact change is not required. Rather, we only need a ranking of the best few candidates. Thus, instead of evaluating the change in the objective exactly, we use an efficient approximation based on a small number, z, of training points which provide information about the residual between the current low-rank covariance matrix (based on inducing points) and the full covariance matrix. After this approximation proposes a candidate, we use the actual objective to decide whether to include it. The techniques below reduce the complexity of evaluating all n − m candidates to O(zn).
To compute the change in objective for one candidate, we need the new column of the updated Cholesky factorization, ℓ_m. In Eq. (13) this vector is a (normalized) column of the residual K − K̂_{m−1} between the full kernel matrix and the Nyström approximation. Now consider the full Cholesky decomposition of K = L̄ L̄^T where L̄ = [L_{m−1}, L(J_{m−1})] is constructed with I_{m−1} as the first pivots and J_{m−1} = {1, ..., n} \ I_{m−1} as the remaining pivots, so the residual becomes K − K̂_{m−1} = L(J_{m−1}) L(J_{m−1})^T. We approximate L(J_{m−1}) by a rank z ≪ n matrix, L_z, by taking z points from J_{m−1} and performing a partial Cholesky factorization of K − K̂_{m−1} using these pivots. The residual approximation becomes K − K̂_{m−1} ≈ L_z L_z^T, and thus ℓ_m ≈ (L_z L_z^T)[:, j] / sqrt( (L_z L_z^T)[j, j] ). The pivots used to construct L_z are called information pivots; their selection is discussed in Sec. 3.3.2.

The approximations to ΔE^D, ΔE^C and ΔE^V, Eqs. (14)-(16), for all candidate points, involve the following terms: diag(L_z L_z^T L_z L_z^T), y^T L_z L_z^T, and (Q_{k−1}[1:n, :])^T L_z L_z^T. The first term can be computed in time O(z^2 n), and the other two in O(zmn) with careful ordering of matrix multiplications.² Computing L_z costs O(z^2 n), but can be avoided since the information pivots change by at most one when an information pivot is added to the inducing set and needs to be replaced. The techniques in Sec. 3.2 bring the associated update cost to O(zn) by updating L_z rather than recomputing it. These z information pivots are equivalent to the "look-ahead" steps of Bach and Jordan's CSI algorithm, but as described in Sec. 3.3.2, there is a more effective way to select them.
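The rank-z residual surrogate is easy to build and exercise; the numpy sketch below (ours, not the authors' code) forms L_z by a partial Cholesky of the residual on z randomly chosen information pivots, as the method ultimately does. The surrogate reproduces the residual exactly at the information pivots themselves and never overshoots its diagonal:

```python
import numpy as np

rng = np.random.default_rng(2)
n, z = 40, 5
X = rng.uniform(-3.0, 3.0, size=n)
K = np.exp(-0.5 * (X[:, None] - X[None, :]) ** 2)

I = [0, 10, 20]                                   # current inducing set
K_nys = K[:, I] @ np.linalg.solve(K[np.ix_(I, I)], K[I, :])
R = K - K_nys                                     # residual K - K_hat_{m-1}

# Rank-z partial Cholesky of the residual on z random information pivots.
J = [t for t in range(n) if t not in I]
info = rng.choice(J, size=z, replace=False)
Lz = np.zeros((n, z))
for t, p in enumerate(info):
    col = R[:, p] - Lz @ Lz[p, :]
    Lz[:, t] = col / np.sqrt(col[p])

# Surrogate residual; candidate columns ell_j are read off its columns.
A = Lz @ Lz.T
```

Because R − L_z L_z^T is itself a positive semi-definite Schur complement, the surrogate under-estimates the residual diagonal, so badly served candidates are merely ranked low rather than spuriously inflated.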
3.3.2 Ensuring a good approximation
Selection of the information pivots determines the approximate objective, and hence the candidate proposal. To ensure a good approximation, the CSI algorithm [1] greedily selects points to find an approximation of the residual K − K̂_{m−1} in Eq. (13) that is optimal in terms of a bound on the trace norm. The goal, however, is to approximate Eqs. (14)-(16). By analyzing the role of the residual matrix, we see that the information pivots provide a low-rank approximation to the orthogonal complement of the space spanned by the current inducing set. With a fixed set of information pivots, parts of that subspace may never be captured. This suggests that we might occasionally update the entire set of information pivots. Although information pivots are changed when one is moved into the inducing set, we find empirically that this is not sufficient. Instead, at regular intervals we replace the entire set of information pivots by random selection. We find this works better than optimizing the information pivots as in [1].
Figure 1 compares the exact and approximate cost reduction for candidate inducing points (left), and their respective rankings (right). The approximation is shown to work well. It is also robust to changes in the number of information pivots and the frequency of updates. When bad candidates are proposed, they are rejected after evaluating the change in the true objective. We find that rejection rates are typically low during early iterations (< 20%), but increase as optimization nears convergence (to 30% or 40%). Rejection rates also increase for sparser models, where each inducing point plays a more critical role and is harder to replace.

[Figure 1: Exact vs. approximate costs (total reduction and rankings), based on the 1D example of Sec. 4, with z = 10, n = 200.]

3.4 Hybrid optimization
The overall hybrid optimization procedure performs block coordinate descent in the inducing points and the continuous hyperparameters. It alternates between discrete and continuous phases until the improvement in the objective is below a threshold or the computational time budget is exhausted. In the discrete phase, inducing points are considered for swapping with the hyperparameters fixed. With the factorization and efficient candidate evaluation above, swapping an inducing point i ∈ I_m proceeds as follows: (I) downdate the factorization matrices as in Sec. 3.2 to remove i; (II) compute the true objective function value F_{m−1} over the downdated model with I_m \ {i}, using (11), (12) and (9); (III) select a replacement candidate using the fast approximate cost change from Sec. 3.3.1; (IV) evaluate the exact objective change, using (14), (15), and (16); (V) add the exact change to the true objective F_{m−1} to get the objective value with the new candidate; if this improves the objective, we include the candidate in I and update the matrices as in Sec. 3.2, otherwise it is rejected and we revert to the factorization with i; (VI) if needed, update the information pivots as in Secs. 3.3.1 and 3.3.2.

² Both can be further reduced to O(zn) by appropriate caching during the updates of Q, R, L̃ and L_z.

[Figure 2: Test performance (SMSE and SNLP versus the number of inducing points m, for CholQR-z16, IVM, Random, Titsias-16 and Titsias-512) on discrete datasets. (top row) BindingDB, the value at each marker is the average of 150 runs (50-fold random train/test splits times 3 random initializations); (bottom row) HoG dataset, each marker is the average of 10 randomly initialized runs.]
After each discrete optimization step we fix the inducing set I and optimize the hyperparameters using non-linear conjugate gradients (CG). The equivalence in (6) allows us to compute the gradient with respect to the hyperparameters analytically using the Nyström form. In practice, because we alternate the two phases for many training epochs, attempting to swap every inducing point in each epoch is unnecessary, just as there is no need to run hyperparameter optimization until convergence. As long as all inducing set points are eventually considered, we find that optimized models can achieve similar performance with shorter learning times.
4 Experiments and analysis
For the experiments that follow we jointly learn inducing points and hyperparameters, a more challenging task than learning inducing points with known hyperparameters [12, 14]. For all but the 1D example, the number of inducing points swapped per epoch is min(60, m). The maximum number of function evaluations per epoch in CG hyperparameter optimization is min(20, max(15, 2d)), where d is the number of continuous hyperparameters. Empirically we find the algorithm is robust to changes in these limits. We use two performance measures: (a) standardized mean squared error (SMSE), (1/N) Σ_{t=1}^N (ŷ_t − y_t)^2 / σ̂^2, where σ̂^2 is the sample variance of the test outputs {y_t}, and (b) standardized negative log probability (SNLP), defined in [11].
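For reference, the SMSE measure is straightforward to compute; a minimal sketch (ours, not the paper's evaluation code):

```python
import numpy as np

def smse(y_true, y_pred):
    """Standardized mean squared error: test MSE divided by the sample
    variance of the test outputs, so the trivial predictor that outputs
    the test mean scores approximately 1."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return float(np.mean((y_pred - y_true) ** 2) / np.var(y_true))
```

For example, smse([0, 1, 2, 3], [1.5, 1.5, 1.5, 1.5]) returns exactly 1.0, since predicting the test mean gives an MSE equal to the test variance.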
4.1 Discrete input domain
We first show results on two discrete datasets with kernels that are not differentiable in the input variable x. Because continuous relaxation methods are not applicable, we compare to discrete selection methods, namely, random selection as a baseline (Random), greedy subset-optimal selection of Titsias [15] with either 16 or 512 candidates (Titsias-16 and Titsias-512), and the Informative Vector Machine [8] (IVM). For learning continuous hyperparameters, each method optimizes the same objective using non-linear CG. Care is taken to ensure consistent initialization and termination criteria [3]. For our algorithm we use z = 16 information pivots with random selection (CholQR-z16). Later, we show how variants of our algorithm trade off speed and performance. Additionally, we also compare to least-squares kernel regression using CSI (in Fig. 3(c)).
The first discrete dataset, from bindingdb.org, concerns the prediction of binding affinity for a target (Thrombin), from the 2D chemical structure of small molecules (represented as graphs). We do 50-fold random splits into 3660 training points and 192 test points for repeated runs. We use a compound kernel, comprising 14 different graph kernels, with 15 continuous hyperparameters (one noise variance and 14 data variances).

[Figure 3: Training time versus test performance on discrete datasets, comparing CholQR variants, IVM, Random, Titsias-16, Titsias-512 and CSI (time axes log-scaled). (a) the average BindingDB training time; (b) the average BindingDB objective function value at convergence; (d) and (e) show test scores versus training time with m = 32 for a single run; (c) shows the trade-off between training time and testing SMSE on the HoG dataset with m = 32, for various methods including multiple variants of CholQR and CSI; (f) a zoomed-in version of (c) comparing the variants of CholQR.]

In the second task, from [2], the goal is to predict 3D human
joint position from histograms of HoG image features [6]. The training and test sets have 4819 and 4811 data points. Because our goal is a general-purpose sparsification method for GP regression, we make no attempt at the more difficult problem of modelling the multivariate output structure in the regression, as in [2]. Instead, we predict the vertical position of the joints independently, using a histogram intersection kernel [9] having four hyperparameters: one noise variance, and three data variances corresponding to the kernel evaluated over the HoG from each of three cameras. We select and show results on the representative left wrist here (see [3] for other joints, and more details about the datasets and kernels used).
The results in Figs. 2 and 3 show that CholQR-z16 outperforms the baseline methods in terms of test-time predictive power with significantly lower training time. Titsias-16 and Titsias-512 show similar test performance, but they are two to four orders of magnitude slower than CholQR-z16 (see Figs. 3(d) and 3(e)). Indeed, Fig. 3(a) shows that the training time for CholQR-z16 is comparable to IVM and Random selection, but with much better performance. The poor performance of Random selection highlights the importance of selecting good inducing points, as no amount of hyperparameter optimization can correct for poor inducing points. Fig. 3(a) also shows IVM to be somewhat slower due to the increased number of iterations needed, even though, per epoch, IVM is faster than CholQR. When stopped earlier, IVM test performance further degrades.

Finally, Figs. 3(c) and 3(f) show the trade-off between test SMSE and training time for variants of CholQR, with baselines and CSI kernel regression [1]. For CholQR we consider different numbers of information pivots (denoted z8, z16, z64 and z128), and different strategies for their selection, including random selection, optimization as in [1] (denoted OI), and adaptively growing the information pivot set (denoted AA; see [3] for details). These variants of CholQR trade off speed and performance (3(f)), and all significantly outperform the other methods (3(c)); CSI, which uses grid search to select hyper-parameters, is slow and exhibits higher SMSE.
4.2 Continuous input domain
Although CholQR was developed for discrete input domains, it can be competitive on continuous domains. To that end, we compare to SPGP [14] and IVM [8], using RBF kernels with one length-scale parameter per input dimension: κ(x_i, x_j) = c exp( −0.5 Σ_{t=1}^d b_t (x_i^{(t)} − x_j^{(t)})^2 ). We show results from both the PP log likelihood and variational objectives, suffixed by MLE and VAR.
[Figure 4: Snelson's 1D example: prediction mean (red curves); one standard deviation in prediction uncertainty (green curves); inducing point initialization (black points at top of each figure); learned inducing point locations (cyan points at the bottom, also overlaid on data for CholQR). Panels (a, b): CholQR-MLE; (d, e): CholQR-VAR; (c, f): SPGP.]
[Figure 5: Test SMSE and SNLP on KIN40K as a function of the number of inducing points (128 to 2048), comparing CholQR-MLE, CholQR-VAR, SPGP, IVM-MLE and IVM-VAR; for each number of inducing points the value plotted is averaged over 10 runs from 10 different (shared) initializations.]
We use the 1D toy dataset of [14] to show how the PP likelihood with gradient-based optimization of inducing points is easily trapped in local minima. Figs. 4(a) and 4(d) show that for this dataset our algorithm does not get trapped when the initialization is poor (as in Fig. 1c of [14]). To simulate the sparsity of data in high-dimensional problems, we also down-sample the dataset to 20 points (every 10th point). Here CholQR outperforms SPGP (see Figs. 4(b), 4(e), and 4(c)). By comparison, Fig. 4(f) shows that SPGP learned with a more uniform initial distribution of inducing points avoids this local optimum and achieves a better negative log likelihood of 11.34, compared to 14.54 in Fig. 4(c).
Finally, we compare CholQR to SPGP [14] and IVM [8] on a large dataset. KIN40K concerns nonlinear forward kinematic prediction. It has 8D real-valued inputs and scalar outputs, with 10K training and 30K test points. We perform linear de-trending and re-scaling as pre-processing. For SPGP we use the implementation of [14]. Fig. 5 shows that CholQR-VAR outperforms IVM in terms of SMSE and SNLP. Both CholQR-VAR and CholQR-MLE outperform SPGP in terms of SMSE on KIN40K with large m, but SPGP exhibits better SNLP. This disparity between the SMSE and SNLP measures for CholQR-MLE is consistent with findings about the PP likelihood in [15]. Recently, Chalupka et al. [4] introduced an empirical evaluation framework for approximate GP methods, and showed that subset of data (SoD) often compares favorably to more sophisticated sparse GP methods. Our preliminary experiments using this framework suggest that CholQR outperforms SPGP in speed and predictive scores; and compared to SoD, CholQR is slower during training, but proportionally faster during testing, since CholQR finds a much sparser model that achieves the same predictive scores. In future work, we will report results on the complete suite of benchmark tests.
5 Conclusion
We describe an algorithm for selecting inducing points for Gaussian Process sparsification. It optimizes principled objective functions, and is applicable to discrete domains and non-differentiable
kernels. On such problems it is shown to be as good as or better than competing methods and, for
methods whose predictive behavior is similar, our method is several orders of magnitude faster. On
continuous domains the method is competitive. Extension to the SPGP form of covariance approximation would be interesting future research.
References
[1] F. R. Bach and M. I. Jordan. Predictive low-rank decomposition for kernel methods. ICML, pp. 33–40, 2005.
[2] L. Bo and C. Sminchisescu. Twin Gaussian processes for structured prediction. IJCV, 87:28–52, 2010.
[3] Y. Cao, M. A. Brubaker, D. J. Fleet, and A. Hertzmann. Project page: supplementary material and software for efficient optimization for sparse Gaussian process regression. www.cs.toronto.edu/~caoy/opt_sgpr, 2013.
[4] K. Chalupka, C. K. I. Williams, and I. Murray. A framework for evaluating approximation methods for Gaussian process regression. JMLR, 14(1):333–350, February 2013.
[5] L. Csató and M. Opper. Sparse on-line Gaussian processes. Neural Comput., 14:641–668, 2002.
[6] N. Dalal and B. Triggs. Histograms of oriented gradients for human detection. IEEE CVPR, pp. 886–893, 2005.
[7] S. S. Keerthi and W. Chu. A matching pursuit approach to sparse Gaussian process regression. NIPS 18, pp. 643–650, 2006.
[8] N. D. Lawrence, M. Seeger, and R. Herbrich. Fast sparse Gaussian process methods: The informative vector machine. NIPS 15, pp. 609–616, 2003.
[9] J. J. Lee. Libpmk: A pyramid match toolkit. TR: MIT-CSAIL-TR-2008-17, MIT CSAIL, 2008. URL http://hdl.handle.net/1721.1/41070.
[10] J. Quiñonero-Candela and C. E. Rasmussen. A unifying view of sparse approximate Gaussian process regression. JMLR, 6:1939–1959, 2005.
[11] C. E. Rasmussen and C. K. I. Williams. Gaussian Processes for Machine Learning. Adaptive Computation and Machine Learning. MIT Press, 2006.
[12] M. Seeger, C. K. I. Williams, and N. D. Lawrence. Fast forward selection to speed up sparse Gaussian process regression. AI & Stats. 9, 2003.
[13] A. J. Smola and P. Bartlett. Sparse greedy Gaussian process regression. In Advances in Neural Information Processing Systems 13, pp. 619–625, 2001.
[14] E. Snelson and Z. Ghahramani. Sparse Gaussian processes using pseudo-inputs. NIPS 18, pp. 1257–1264, 2006.
[15] M. K. Titsias. Variational learning of inducing variables in sparse Gaussian processes. JMLR, 5:567–574, 2009.
[16] C. Walder, K. I. Kwang, and B. Schölkopf. Sparse multiscale Gaussian process regression. ICML, pp. 1112–1119, 2008.
kwang:1 sparse:15 distributed:1 benefit:1 curve:2 dimension:2 opper:2 evaluating:6 gram:1 z8:2 cumulative:2 avoids:1 forward:4 adaptive:1 projected:1 avoided:1 lz:15 thrombin:1 approximate:13 global:1 unnecessary:1 xi:6 continuous:20 iterative:1 search:2 additionally:1 terminate:1 learn:1 robust:2 molecule:1 sminchisescu:1 necessarily:1 domain:12 diag:1 linearly:1 noise:4 hyperparameters:18 n2:2 repeated:1 augmented:2 fig:12 representative:1 fashion:1 slow:2 sub:2 position:2 deterministically:1 wish:2 comput:1 candidate:20 lie:1 jmlr:3 rk:1 removing:1 down:3 bad:1 discarding:1 concern:2 intractable:2 consist:1 adding:2 effectively:1 importance:1 magnitude:2 budget:1 exhausted:1 sparser:2 rejection:2 intersection:1 simply:2 expressed:2 ordered:1 scalar:1 bo:1 binding:1 aa:2 ivm:16 determines:1 conditional:1 identity:4 viewed:1 goal:2 careful:1 rbf:1 replace:2 shared:1 feasible:1 change:13 except:1 called:2 total:5 pas:1 aaron:1 select:14 cholesky:10 support:1 arises:1 evaluate:4 |
Variational Inference for Mahalanobis
Distance Metrics in Gaussian Process Regression
Michalis K. Titsias
Department of Informatics
Athens University of Economics and Business
[email protected]
Miguel Lázaro-Gredilla
Dpt. Signal Processing & Communications
Universidad Carlos III de Madrid - Spain
[email protected]
Abstract
We introduce a novel variational method that allows us to approximately integrate
out kernel hyperparameters, such as length-scales, in Gaussian process regression.
This approach consists of a novel variant of the variational framework that has
been recently developed for the Gaussian process latent variable model which additionally makes use of a standardised representation of the Gaussian process. We
consider this technique for learning Mahalanobis distance metrics in a Gaussian
process regression setting and provide experimental evaluations and comparisons
with existing methods by considering datasets with high-dimensional inputs.
1
Introduction
Gaussian processes (GPs) have found many applications in machine learning and statistics ranging
from supervised learning tasks to unsupervised learning and reinforcement learning. However, while
GP models are advertised as Bayesian models, it is rarely the case that a full Bayesian procedure is
considered for training. In particular, the commonly used procedure is to find point estimates over
the kernel hyperparameters by maximizing the marginal likelihood, which is the likelihood obtained
once the latent variables associated with the GP function have been integrated out (Rasmussen and
Williams, 2006). Such a procedure provides a practical algorithm that is expected to be robust to
overfitting when the number of hyperparameters that need to be tuned are relatively few compared
to the amount of data. In contrast, when the number of hyperparameters is large this approach will
suffer from the shortcomings of a typical maximum likelihood method such as overfitting. To avoid
the above problems, in GP models, the use of kernel functions with few kernel hyperparameters
is common practice, although this can lead to limited flexibility when modelling the data. For
instance, in regression or classification problems with high dimensional input data the typical kernel
functions used are restricted to have the simplest possible form, such as a squared exponential with
common length-scale across input dimensions, while more complex kernel functions such as ARD
or Mahalanobis kernels (Vivarelli and Williams, 1998) are not considered due to the large number
of hyperparameters needed to be estimated by maximum likelihood. On the other hand, while full
Bayesian inference for GP models could be useful, it is pragmatically a very challenging task that
currently has been attempted only by using expensive MCMC techniques such as the recent method
of Murray and Adams (2010). Deterministic approximations and particularly the variational Bayes
framework has not been applied so far for the treatment of kernel hyperparameters in GP models.
To this end, in this work we introduce a variational method for approximate Bayesian inference over
hyperparameters in GP regression models with squared exponential kernel functions. This approach
consists of a novel variant of the variational framework introduced in (Titsias and Lawrence, 2010)
for the Gaussian process latent variable model. Furthermore, this method uses the concept of a
standardised GP process and allows for learning Mahalanobis distance metrics (Weinberger and
Saul, 2009; Xing et al., 2003) in Gaussian process regression settings using Bayesian inference. In
the experiments, we compare the proposed algorithm with several existing methods by considering
several datasets with high-dimensional inputs.
The remainder of this paper is organised as follows: Section 2 provides the motivation and theoretical foundation of the variational method, Section 3 demonstrates the method in a number of
challenging regression datasets by providing also a comprehensive comparison with existing methods. Finally, the paper concludes with a discussion in Section 4.
2
Theory
Section 2.1 discusses Bayesian GP regression and motivates the variational method. Section 2.2
explains the concept of the standardised representation of a GP model that is used by the variational
method described in Section 2.3. Section 2.4 discusses setting the prior over the kernel hyperparameters together with a computationally analytical way to reduce the number of parameters to be
optimised during variational inference. Finally, Section 2.5 discusses prediction in novel test inputs.
2.1
Bayesian GP regression and motivation for the variational method
Suppose we have data {y_i, x_i}_{i=1}^n, where each x_i ∈ R^D and each y_i is a real-valued scalar output. We denote by y the vector of all output data and by X all input data. In GP regression, we assume that each observed output is generated according to y_i = f(x_i) + ε_i, ε_i ~ N(0, σ^2), where the full-length latent function f(x) is assigned a zero-mean GP prior with a certain covariance or kernel function k_f(x, x') that depends on hyperparameters θ. Throughout the paper we will consider the following squared exponential kernel function
k_f(x, x') = σ_f^2 exp(-(1/2)(x - x')^T W^T W (x - x')) = σ_f^2 exp(-(1/2)||Wx - Wx'||^2) = σ_f^2 exp(-(1/2) d_W(x, x')^2),   (1)
where d_W(x, x') = ||Wx - Wx'||. In the above, σ_f is a global scale parameter while the matrix W ∈ R^{K×D} quantifies a linear transformation that maps x into a linear subspace with dimension at most K. In the special case where W is a square and diagonal matrix, the above kernel function
reduces to
k_f(x, x') = σ_f^2 exp(-(1/2) sum_{d=1}^D w_d^2 (x_d - x_d')^2),   (2)
which consists of the well-known ARD squared exponential kernel commonly used in GP regression applications (Rasmussen and Williams, 2006). In other cases where K < D, d_W(x, x') defines
a Mahalanobis distance metric (Weinberger and Saul, 2009; Xing et al., 2003) that allows for supervised dimensionality reduction to be applied in a GP regression setting (Vivarelli and Williams,
1998).
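To make the kernel concrete, the sketch below evaluates eq. (1) with NumPy (the values of σ_f, W and the inputs are illustrative, not taken from the paper) and checks that a square, diagonal W recovers the ARD kernel of eq. (2):

```python
import numpy as np

def mahalanobis_se_kernel(X1, X2, W, sigma_f):
    """Evaluate k_f(x, x') = sigma_f^2 exp(-0.5 ||Wx - Wx'||^2), eq. (1)."""
    Z1, Z2 = X1 @ W.T, X2 @ W.T                     # project to the K-dim space
    sq = ((Z1[:, None, :] - Z2[None, :, :]) ** 2).sum(-1)
    return sigma_f ** 2 * np.exp(-0.5 * sq)

rng = np.random.default_rng(0)
D, K, n = 5, 2, 4
X = rng.standard_normal((n, D))
W = rng.standard_normal((K, D))                     # illustrative projection
Kf = mahalanobis_se_kernel(X, X, W, sigma_f=1.5)

# With a square, diagonal W the kernel reduces to the ARD form of eq. (2).
w = rng.standard_normal(D)
K_ard = mahalanobis_se_kernel(X, X, np.diag(w), sigma_f=1.5)
sq_ard = ((w ** 2) * (X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
assert np.allclose(K_ard, 1.5 ** 2 * np.exp(-0.5 * sq_ard))
```

The resulting kernel matrix is symmetric with σ_f^2 on its diagonal, as expected of a valid covariance.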
In a full Bayesian formulation, the hyperparameters θ = (σ_f, W) are assigned a prior distribution
p(?) and the Bayesian model follows the hierarchical structure depicted in Figure 1(a). According
to this structure the random function f (x) and the hyperparameters ? are a priori coupled since the
former quantity is generated conditional on the latter. This can make approximate, and in particular
variational, inference over the hyperparameters to be troublesome. To clarify this, observe that the
joint density induced by the finite data is
p(y, f , ?) = N (y|f , ? 2 I)N (f |0, Kf ,f )p(?),
(3)
where the vector f stores the latent function values at inputs X and Kf ,f is the n ? n kernel matrix
obtained by evaluating the kernel function on those inputs. Clearly, in the term N (f |0, Kf ,f ) the
hyperparameters ? appear non-linearly inside the inverse and determinant of the kernel matrix Kf ,f .
While there exist a recently developed variational inference method applied to Gaussian process
latent variable model (GP-LVM) (Titsias and Lawrence, 2010), that approximately integrates out
inputs that appear inside a kernel matrix, this method is still not applicable to the case of kernel
hyperparameters such as length-scales. This is because the augmentation with auxiliary variables
used in (Titsias and Lawrence, 2010), that allows to bypass the intractable term N (f |0, Kf ,f ), leads
to an inversion of a matrix Ku,u that still depends on the kernel hyperparameters. More precisely,
the Ku,u matrix is defined on auxiliary values u comprising points of the function f (x) at some
arbitrary and freely optimisable inputs (Snelson and Ghahramani, 2006a; Titsias, 2009). While this
kernel matrix does not depend on the inputs X any more (which need to be integrated out in the
GP-LVM case), it still depends on θ, making a possible variational treatment of those parameters
intractable. In Section 2.3, we present a novel modification of the approach in (Titsias and Lawrence,
2010) which allows us to overcome the above intractability. Such an approach makes use of the so-called standardised representation of the GP model that is described next.
2.2
The standardised representation
Consider a function s(z), where z ∈ R^K, which is taken to be a random sample drawn from a GP
indexed by elements in the low K-dimensional space and assumed to have a zero mean function and
the following squared exponential kernel function:
k_s(z, z') = exp(-(1/2)||z - z'||^2),   (4)
where the kernel length-scales and global scale are equal to unity. The above GP is referred to as
standardised process, whereas a sample path s(z) is referred to as a standardised function. The interesting property that a standardised process has is that it does not depend on kernel hyperparameters
since it is defined in a space where all hyperparameters have been neutralised to take the value one.
Having sampled a function s(z) in the low dimensional input space RK , we can deterministically
express a function f (x) in the high dimensional input space RD according to
f(x) = σ_f s(Wx),   (5)
where the scalar σ_f and the matrix W ∈ R^{K×D} are exactly the hyperparameters defined in the
previous section. The above simply says that the value of f (x) at a certain input x is the value of the
standardised function s(z), for z = Wx ? RK , times a global scale ?f that changes the amplitude
or power of the new function. Given (?f , W), the above assumptions induce a GP prior on the
function f (x), which has zero mean and the following kernel function
1
0
2
kf (x, x0 ) = E[?f s(Wx)?f s(Wx0 )] = ?f2 e? 2 dW (x,x ) ,
(6)
which is precisely the kernel function given in eq. (1) and therefore, the above construction leads to
the same GP prior distribution described in Section 2.1. Nevertheless, the representation using the
standardised process also implies a reparametrisation of the GP regression model where a priori the
hyperparameters ? and the GP function are independent. More precisely, one can now represent the
GP model according to the following structure:
s(z) ~ GP(0, k_s(z, z')),   θ ~ p(θ)
f(x) = σ_f s(Wx)
y_i ~ N(y_i | f(x_i), σ^2),   i = 1, . . . , n   (7)
which is depicted graphically in Figure 1(b). The interesting property of this representation is that
the GP function s(z) and the hyperparameters θ interact only inside the likelihood function while
a priori are independent. Furthermore, according to this representation one could now consider a
modification of the variational method in (Titsias and Lawrence, 2010) so that the auxiliary variables
u are defined to be points of the function s(z) so that the resulting kernel matrix Ku,u which needs
to be inverted does not depend on the hyperparameters but only on some freely optimisable inputs.
Next we discuss the details of this variational method.
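The equivalence of the two representations can also be checked numerically. The sketch below (arbitrary illustrative values, not code from the paper) verifies that the covariance induced by f(x) = σ_f s(Wx), namely σ_f^2 k_s(Wx, Wx'), coincides with the Mahalanobis kernel of eq. (1):

```python
import numpy as np

def k_s(Z1, Z2):
    """Standardised kernel of eq. (4): unit scale and unit length-scales."""
    sq = ((Z1[:, None, :] - Z2[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * sq)

rng = np.random.default_rng(1)
D, K, n = 6, 3, 5
X = rng.standard_normal((n, D))
W = rng.standard_normal((K, D))           # illustrative hyperparameters
sigma_f = 2.0

# Covariance induced by f(x) = sigma_f * s(Wx) under the standardised GP ...
K_induced = sigma_f ** 2 * k_s(X @ W.T, X @ W.T)

# ... equals the Mahalanobis squared-exponential kernel of eq. (1),
# written here through the quadratic form (x - x')^T W^T W (x - x').
diff = X[:, None, :] - X[None, :, :]
M = W.T @ W
K_direct = sigma_f ** 2 * np.exp(-0.5 * np.einsum('ijd,de,ije->ij', diff, M, diff))
assert np.allclose(K_induced, K_direct)
```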
2.3
Variational inference using auxiliary variables
We define a set of m auxiliary variables u ∈ R^m such that each u_i is a value of the standardised function, so that u_i = s(z_i), and the input z_i ∈ R^K lives in dimension K. The set of all inputs
Z = (z1 , . . . , zm ) are referred to as inducing inputs and consist of freely-optimisable parameters
that can improve the accuracy of the approximation. The inducing variables u follow the Gaussian
density
p(u) = N (u|0, Ku,u ),
(8)
where [Ku,u ]ij = ks (zi , zj ) and ks is the standardised kernel function given by eq. (4). Notice that
the density p(u) does not depend on the kernel hyperparameters and particularly on the matrix W.
This is a rather critical point, that essentially allows the variational method to be applicable, and
comprise the novelty of our method compared to the initial framework in (Titsias and Lawrence,
2010). The vector f of noise-free latent function values, such that [f]_i = σ_f s(Wx_i), covaries with
the vector u based on the cross-covariance function
k_{f,u}(x, z) = E[σ_f s(Wx) s(z)] = σ_f E[s(Wx) s(z)] = σ_f exp(-(1/2)||Wx - z||^2) = σ_f k_s(Wx, z).   (9)
[Figure 1 graphic omitted: panel (a) the hierarchical GP model over f(x); panel (b) the standardised representation through s(z); panel (c) relevance of the sorted latent dimensions for the Puma dataset; see the caption below.]
Figure 1: The panel in (a) shows the usual hierarchical structure of a GP model where the middle node corresponds to the full length function f (x) (although only a finite vector f is associated with the data). The
panel in (b) shows an equivalent representation of the GP model expressed through the standardised random function s(z), that does not depend on hyperparameters, and interacts with the hyperparameters at the
data generation process. The rectangular node for f (x) corresponds to a deterministic operation representing
f (x) = ?f s(Wx). The panel in (c) shows how the latent dimensionality of the Puma dataset is inferred to be
4, roughly corresponding to input dimensions 4, 5, 15 and 16 (see Section 3.3).
Based on this function, we can compute the cross-covariance matrix Kf ,u and subsequently express
the conditional Gaussian density (often referred to as conditional GP prior):
p(f | u, W) = N(f | K_{f,u} K_{u,u}^{-1} u, K_{f,f} - K_{f,u} K_{u,u}^{-1} K_{f,u}^T),
so that p(f | u, W) p(u) allows us to obtain the initial conditional GP prior p(f | W), used in eq. (3), after a marginalisation over the inducing variables, i.e. p(f | W) = ∫ p(f | u, W) p(u) du. We would like
now to apply variational inference in the augmented joint model1
p(y, f, u, W) = N(y | f, σ^2 I) p(f | u, W) p(u) p(W),
in order to approximate the intractable posterior distribution p(f , W, u|y). We introduce the variational distribution
q(f , W, u) = p(f |u, W)q(W)q(u),
(10)
where p(f |u, W) is the conditional GP prior that appears in the joint model, q(u) is a free-form
variational distribution that after optimisation is found to be Gaussian (see Section B.1 in the supplementary material), while q(W) is restricted to be the following factorised Gaussian:
q(W) = prod_{k=1}^K prod_{d=1}^D N(w_kd | μ_kd, σ_kd^2),   (11)
The variational lower bound that minimises the Kullback Leibler (KL) divergence between the variational and the exact posterior distribution can be written in the form
F = F_1 - KL(q(W) || p(W)),   (12)
where the analytical form of F1 is given in Section B.1 of the supplementary material, whereas the
KL divergence term KL(q(W)||p(W)) that depends on the prior distribution over W is described
in the next section.
The variational lower bound is maximised using gradient-based methods over the variational parameters {μ_kd, σ_kd^2}_{k=1,d=1}^{K,D}, the inducing inputs Z (which are also variational parameters) and the hyperparameters (σ_f, σ^2).
1 The scale parameter σ_f and the noise variance σ^2 are not assigned prior distributions, but instead they are treated by Type II ML. Notice that treating (σ_f, σ^2) in a Bayesian manner is easier, and approximate inference could be done with the standard conjugate variational Bayesian framework (Bishop, 2006).
2.4
Prior over p(W) and analytical reduction of the number of optimisable parameters
To set the prior distribution for the parameters W, we follow the automatic relevance determination (ARD) idea introduced in (MacKay, 1994; Neal, 1998) and subsequently considered in several
models such as sparse linear models (Tipping, 2001) and variational Bayesian PCA (Bishop, 1999).
Specifically, the prior distribution takes the form
p(W) = prod_{k=1}^K prod_{d=1}^D N(w_kd | 0, ℓ_k^2),   (13)
where the elements of each row of W follow a zero-mean Gaussian distribution with a common variance. Learning the set of variances {ℓ_k^2}_{k=1}^K can allow to automatically select the dimensionality associated with the Mahalanobis distance metric d_W(x, x'). This could be carried out by either applying a Type II ML estimation procedure or a variational Bayesian approach, where the latter assigns a conjugate Gamma prior on the variances and optimises a variational distribution q({ℓ_k^2}_{k=1}^K) over them. The optimisable quantities in both these procedures can be removed analytically and optimally from the variational lower bound as described next.
Consider the case where we apply Type II ML for the variances {ℓ_k^2}_{k=1}^K. These parameters appear only in the KL(q(W) || p(W)) term (denoted by KL in the following) of the lower bound in eq. (12), which can be written in the form:
"P
#
K
D
D
2
2
2
X
?
+
?
1X
?
dk
d=1 dk
KL =
?D?
log dk
.
2
`2k
`2k
k=1
d=1
By first minimizing this term with respect to these former hyperparameters we find that
PD
? 2 + ?2dk
, k = 1, . . . , K,
`2k = d=1 dk
D
and then by substituting back these optimal values into the KL divergence we obtain
"D
!
#
K
D
X
1X X
2
2
KL =
log ?dk
? D log
?dk
+ ?2dk + D log D ,
2
k=1
d=1
(14)
(15)
d=1
which now depends only on variational parameters. When we treat {`2k }K
k=1 in a Bayesian manner,
? ?2
?
???1
?
we assign inverse Gamma prior to each variance `2k , p(`2k ) = ?(?) `2k
e `k . Then, by following a similar procedure as the one above we can remove optimally the variational factor q({`2k }K
k=1 )
(see Section B.2 in the supplementary material) to obtain
!
X
K
D
K D
X
1 XX
D
2
2
2
+?
log 2? +
?kd + ?kd +
log(?kd
) + const,
(16)
KL = ?
2
2
k=1
d=1
k=1 d=1
which, as expected, has the nice property that when α = β = 0, so that the prior over variances becomes improper, it reduces to the quantity in (15).
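The Type II ML collapse is easy to verify numerically. The sketch below (random illustrative variational parameters, not code from the paper) plugs the optimal ℓ_k^2 of eq. (14) into the Gaussian KL and confirms that it matches the collapsed expression of eq. (15):

```python
import numpy as np

rng = np.random.default_rng(2)
K, D = 3, 4
mu = rng.standard_normal((K, D))        # variational means mu_kd
s2 = rng.uniform(0.1, 2.0, (K, D))      # variational variances sigma_kd^2

def kl_given_ell(ell2):
    """KL(q(W)||p(W)) for given per-row prior variances ell2 (shape K)."""
    return 0.5 * np.sum((s2 + mu ** 2) / ell2[:, None] - 1.0
                        - np.log(s2 / ell2[:, None]))

# Optimal prior variances, eq. (14).
ell2_opt = (mu ** 2 + s2).sum(axis=1) / D

# Collapsed KL, eq. (15).
A = (mu ** 2 + s2).sum(axis=1)
kl_collapsed = -0.5 * np.sum(np.log(s2).sum(axis=1) - D * np.log(A)
                             + D * np.log(D))

assert np.isclose(kl_given_ell(ell2_opt), kl_collapsed)
assert kl_given_ell(1.3 * ell2_opt) > kl_collapsed   # any other choice is worse
```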
Finally, it is important to notice that different and particularly non-Gaussian priors for the parameters
W can be also accommodated by our variational method. More precisely, any alternative prior for
W changes only the form of the negative KL divergence term in the lower bound in eq. (12). This
term remains analytically tractable even for priors such as the Laplace or certain types of spike and
slab priors. In the experiments we have used the ARD prior described above while the investigation
of alternative priors is intended to be studied as a future work.
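As a further sanity check (again with random illustrative variational parameters), the collapsed Bayesian quantity of eq. (16) with α = β = 0 should agree with the Type II ML quantity of eq. (15) up to an additive constant:

```python
import numpy as np

rng = np.random.default_rng(5)
K, D = 3, 4
mu = rng.standard_normal((K, D))
s2 = rng.uniform(0.1, 2.0, (K, D))
A = (mu ** 2 + s2).sum(axis=1)          # per-row sums of second moments

def kl_bayesian(alpha, beta):
    """Collapsed KL of eq. (16), up to its additive constant."""
    return np.sum((D / 2 + alpha) * np.log(2 * beta + A)) \
           - 0.5 * np.log(s2).sum()

# Collapsed Type II ML KL of eq. (15).
kl_ml = -0.5 * np.sum(np.log(s2).sum(axis=1) - D * np.log(A) + D * np.log(D))

# alpha = beta = 0 recovers eq. (15) up to the constant (K * D / 2) log D.
assert np.isclose(kl_bayesian(0.0, 0.0) - kl_ml, K * D / 2 * np.log(D))
```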
2.5 Predictions
Assume we have a test input x_* and we would like to predict the corresponding output y_*. The exact predictive density p(y_* | y) is intractable and therefore we approximate it with the density obtained by averaging over the variational posterior distribution:
q(y_* | y) = ∫ N(y_* | f_*, σ^2) p(f_* | f, u, W) p(f | u, W) q(u) q(W) df_* df du dW,   (17)
where p(f | u, W) q(u) q(W) is the variational distribution and p(f_* | f, u, W) is the conditional GP prior over the test value f_* given the training function values f and the inducing variables u. By performing first the integration over f, we obtain ∫ p(f_* | f, u, W) p(f | u, W) df = p(f_* | u, W)
which follows from the consistency property of the Gaussian process prior. Given that p(f_* | u, W) and q(u) (see Section B.1 in the supplementary material) are Gaussian densities with respect to f_* and u, the above can be further simplified to
q(y_* | y) = ∫ N(y_* | μ_*(W), σ_*^2(W) + σ^2) q(W) dW,
where the mean μ_*(W) and variance σ_*^2(W) obtain closed-form expressions and consist of nonlinear functions of W, making the above integral intractable. However, by applying Monte Carlo
integration based on drawing independent samples from the Gaussian distribution q(W) we can
efficiently approximate the above according to
q(y_* | y) = (1/T) sum_{t=1}^T N(y_* | μ_*(W^{(t)}), σ_*^2(W^{(t)}) + σ^2),   (18)
which is the quantity used in our experiments. Furthermore, although the predictive density is not
Gaussian, its mean and variance can be computed analytically as explained in Section B.1 of the
supplementary material.
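Eq. (18) is an equally weighted mixture of Gaussians, so its mean and variance follow from the standard mixture identities. The sketch below uses hypothetical per-sample predictive moments in place of μ_*(W^(t)) and σ_*^2(W^(t)) + σ^2:

```python
import numpy as np

rng = np.random.default_rng(3)
T = 1000
# Stand-ins for mu_*(W^(t)) and sigma_*^2(W^(t)) + sigma^2, one per draw
# W^(t) ~ q(W); in the model these come from the GP predictive equations.
mu_t = rng.normal(1.0, 0.2, T)
var_t = rng.uniform(0.5, 1.0, T)

# Moments of the equally weighted Gaussian mixture in eq. (18).
mix_mean = mu_t.mean()
mix_var = (var_t + mu_t ** 2).mean() - mix_mean ** 2

# Mixture variance = average variance + spread of the per-sample means.
assert np.isclose(mix_var, var_t.mean() + mu_t.var())
```

The spread of the means across samples of W is exactly what makes the predictive density non-Gaussian and heavier-tailed than any single component.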
3
Experiments
In this section we will use standard data sets to assess the performance of the proposed VDMGP
in terms of normalised mean square error (NMSE) and negative log-probability density (NLPD).
We will use as benchmarks a full GP with automatic relevance determination (ARD) and the state-of-the-art SPGP-DR model, which is described below. Also, see Section A of the supplementary
material for an example of dimensionality reduction on a simple toy example.
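For reference, the two scores can be computed as follows. This is a sketch: the normalisation of NMSE by the variance of the test labels, and NLPD as the average negative log Gaussian predictive density, follow common usage rather than code released with the paper.

```python
import numpy as np

def nmse(y_true, y_pred):
    """Normalised mean square error: MSE divided by the test label variance."""
    return np.mean((y_true - y_pred) ** 2) / np.var(y_true)

def nlpd(y_true, pred_mean, pred_var):
    """Average negative log predictive density under Gaussian predictions."""
    return np.mean(0.5 * np.log(2 * np.pi * pred_var)
                   + 0.5 * (y_true - pred_mean) ** 2 / pred_var)

rng = np.random.default_rng(4)
y = rng.standard_normal(100)
m = y + 0.1 * rng.standard_normal(100)   # illustrative predictive means
v = np.full(100, 0.02)                   # illustrative predictive variances
print(nmse(y, m), nlpd(y, m, v))
```

With this convention, predicting the test mean everywhere gives NMSE = 1, so values well below 1 indicate genuine predictive power.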
3.1
Review of SPGP-DR
The sparse pseudo-input GP (SPGP) from Snelson and Ghahramani (2006a) is a well-known sparse
GP model, that allows the computational cost of GP regression to scale linearly with the number of
samples in the dataset. This model is sometimes referred to as FITC (fully independent training
conditional) and uses an active set of m pseudo-inputs that control the speed vs. performance tradeoff of the method. SPGP is often used when dealing with datasets containing more than a few
thousand samples, since in those cases the cost of a full GP becomes impractical.
In Snelson and Ghahramani (2006b), a version of SPGP with dimensionality reduction (SPGP-DR)
is presented. SPGP-DR applies the SPGP model to a linear projection of the inputs. The K × D
projection matrix W is learned so as to maximise the evidence of the model. This can be seen
simply as a specialisation of SPGP in which the covariance function is a squared exponential with a
Mahalanobis distance defined by W^T W. The idea had already been applied to the standard GP in
(Vivarelli and Williams, 1998).
Despite the apparent similarities between SPGP-DR and VDMGP, there are important differences
worth clarifying. First, SPGP's pseudo-inputs are model parameters and, as such, fitting a large
number of them can result in overfitting, whereas the inducing inputs used in VDMGP are variational parameters whose optimisation can only result in a better fit of the posterior densities. Second,
SPGP-DR does not place a prior on the linear projection matrix W; it is instead fitted using Maximum Likelihood, just as the pseudo-inputs. In contrast, VDMGP does place a prior on W and
variationally integrates it out.
These differences yield an important consequence: VDMGP can infer automatically the latent dimensionality K of data, but SPGP-DR is unable to, since increasing K is never going to decrease
its likelihood. Thus, VDMGP follows Occam's razor on the number of latent dimensions K.
3.2
Temp and SO2 datasets
We will assess VDMGP on real-world datasets. For this purpose we will use the two data sets
from the WCCI-2006 Predictive Uncertainty in Environmental Modeling Competition run by Gavin
[Figure 2 graphic omitted: six panels plotting results against training set size for Full GP, VDMGP and SPGP-DR. Panels: (a) Temp, avg. NMSE ± 1 std. dev. of avg.; (b) SO2, avg. NMSE ± 1 std. dev. of avg.; (c) Puma, avg. NMSE ± 1 std. dev. of avg.; (d) Temp, avg. NLPD ± one std. dev. of avg.; (e) SO2, avg. NLPD ± one std. dev. of avg.; (f) Puma, avg. NLPD ± one std. dev. of avg.]
Figure 2: Average NMSE and NLPD for several real datasets, showing the effect of different training set sizes.
Cawley2 , called Temp and SO2 . In dataset Temp, maximum daily temperature measurements have
to be predicted from 106 input variables representing large-scale circulation information. For the
SO2 dataset, the task is to predict the concentration of SO2 in an urban environment twenty-four
hours in advance, using information on current SO2 levels and meteorological conditions.3 These
are the same datasets on which SPGP-DR was originally tested (Snelson and Ghahramani, 2006b),
and it is worth mentioning that SPGP-DR's only entry in the competition (for the Temp dataset) was
the winning one.
We ran SPGP-DR and VDMGP using the same exact initialisation for the projection matrix on
both algorithms and tested the effect of using a reduced number of training data. For SPGP-DR
we tested several possible latent dimensions K = {2, 5, 10, 15, 20, 30}, whereas for VDMGP we
fixed K = 20 and let the model infer the number of dimensions. The number of inducing variables
(pseudo-inputs for SPGP-DR) was set to 10 for Temp and 20 for SO2 . Varying sizes for the training
set between 100 and the total amount of available samples were considered. Twenty independent
realisations were performed.
Average NMSE as a function of training set size is shown in Figures 2(a) and 2(b). The multiple
dotted blue lines correspond to SPGP-DR with different choices of latent dimensionality K. The
dashed black line represents the full GP, which has been run for training sets up to size 2000. VDMGP is shown as a solid red line. Similarly, average NLPD is shown as a function of training set
size in Figures 2(d) and 2(e).
When feasible, the full GP performs best, but since it requires the inversion of the full kernel matrix,
it cannot by applied to large-scale problems such as the ones considered in this subsection. Also,
even in reasonably-sized problems, the full GP may run into trouble if several noise-only input
dimensions are present. SPGP-DR works well for large training set sizes, since there is enough
information for it to avoid overfitting and the advantage of using a prior on W is reduced. However,
2
Available at http://theoval.cmp.uea.ac.uk/~gcc/competition/
Temp: 106 dimensions 7117/3558 training/testing data, SO2 : 27 dimensions 15304/7652 training/testing data.
3
For SO2 , which contains only positive labels yn , a logarithmic transformation of the type log(a + yn ) was
applied, just as the authors of (Snelson and Ghahramani, 2006b) did. However, reported NMSE and NLPD
figures still correspond to the original labels.
for smaller training sets, performance is quite bad and the choice of K becomes very relevant (which
must be selected through cross-validation). Finally, VDMGP results in scalable performance: It is
able to perform dimensionality reduction and achieve high accuracy both on small and large datasets,
while still being faster than a full GP.
3.3 Puma dataset
In this section we consider the 32-input, moderate noise version of the Puma dataset.4 This is
a realistic simulation of the dynamics of a Puma 560 robot arm. Labels represent angular accelerations
of one of the robot arm?s links, which have to be predicted based on the angular positions, velocities
and torques of the robot arm. 7168 samples are available for training and 1024 for testing.
It is well-known from previous works (Snelson and Ghahramani, 2006a) that only 4 out of the 32
input dimensions are relevant for the prediction task, and that identifying them is not always easy.
In particular, SPGP (the standard version, with no dimensionality reduction), fails at this task unless
initialised from a "good guess" about the relevant dimensions coming from a different model, as
discussed in (Snelson and Ghahramani, 2006a). We thought it would be interesting to assess the
performance of the discussed models on this dataset, again considering different training set sizes,
which are generated by randomly sampling from the training set.
Results are shown in Figures 2(c) and 2(f). VDMGP determines that there are 4 latent dimensions, as shown in Figure 1(c). The conclusions to be drawn here are similar to those of the previous subsection: SPGP-DR has trouble with "small" datasets (where the threshold for a dataset being considered small enough may vary among different datasets) and requires a parameter to be validated, whereas VDMGP seems to perform uniformly well.
3.4 A note on computational complexity
The computational complexity of VDMGP is O(N M^2 K + N D K), just as that of SPGP-DR, which is much smaller than the O(N^3 + N^2 D) required by a full GP. However, since the computation of the variational bound of VDMGP involves more steps than the computation of the evidence of SPGP-DR, VDMGP is slower than SPGP-DR. In two typical cases, using 500 and 5000 training points, the full GP runs in 0.24 and 34 seconds respectively, VDMGP runs in 0.35 and 3.1 seconds, and SPGP-DR runs in 0.01 and 0.10 seconds.
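The asymptotic costs quoted above can be compared directly. The following sketch simply tallies the dominant terms of each complexity expression; the concrete values chosen for M (inducing points) and K (latent dimensions) are hypothetical, picked only for illustration:

```python
def flops_full_gp(n, d):
    """Dominant cost of a full GP: O(N^3) inversion plus O(N^2 D) kernel evaluation."""
    return n**3 + n**2 * d

def flops_vdmgp(n, m, k, d):
    """Dominant cost of VDMGP (and SPGP-DR): O(N M^2 K + N D K)."""
    return n * m**2 * k + n * d * k

# Hypothetical sizes, loosely inspired by the experiments above.
n, d, m, k = 5000, 32, 100, 4
print(flops_full_gp(n, d) / flops_vdmgp(n, m, k, d))  # rough speed-up factor
```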
4 Discussion and further work
A typical approach to regression when the number of input dimensions is large is to first use a
linear projection of input data to reduce dimensionality (e.g., PCA) and then apply some regression technique. Instead of performing these two steps separately, a monolithic approach allows the
dimensionality reduction to be tailored to the specific regression problem.
In this work we have shown that it is possible to variationally integrate out the linear projection
of the inputs of a GP, which, as a particular case, corresponds to integrating out its length-scale
hyperparameters. By placing a prior on the linear projection, we avoid overfitting problems that may
arise in other models, such as SPGP-DR. Only two parameters (noise variance and scale) are free in
this model, whereas the remaining parameters appearing in the bound are free variational parameters,
and optimizing them can only result in improved posterior estimates. This allows us to automatically
infer the number of latent dimensions that are needed for regression in a given problem, which is
also not possible using SPGP-DR. Finally, the range of dataset sizes that the proposed model can handle is much wider than that of SPGP-DR, which performs badly on small datasets.
One interesting topic for future work is to investigate non-Gaussian sparse priors for the parameters
W. Furthermore, given that W represents length-scales, it could be replaced by a random function W(x), such as a GP random function, which would render the length-scales input-dependent, making such a formulation useful in situations with varying smoothness across input space. Such a smoothness-varying GP is also an interesting subject of further work.
Acknowledgments
MKT gratefully acknowledges support from "Research Funding at AUEB for Excellence and Extroversion, Action 1: 2012-2014". MLG acknowledges support from Spanish CICYT TIN2011-24533.
4 Available from Delve, see http://www.cs.toronto.edu/~delve/data/pumadyn/desc.html.
References
Bishop, C. M. (1999). Variational principal components. In Proceedings of the Ninth International Conference on Artificial Neural Networks, ICANN'99, pages 509-514.
Bishop, C. M. (2006). Pattern Recognition and Machine Learning (Information Science and Statistics). Springer, 1st edition.
MacKay, D. J. (1994). Bayesian non-linear modelling for the energy prediction competition. ASHRAE Transactions, 4:448-472.
Murray, I. and Adams, R. P. (2010). Slice sampling covariance hyperparameters of latent Gaussian models. In Lafferty, J., Williams, C. K. I., Zemel, R., Shawe-Taylor, J., and Culotta, A., editors, Advances in Neural Information Processing Systems 23, pages 1723-1731.
Neal, R. M. (1998). Assessing relevance determination methods using DELVE. Neural Networks and Machine Learning, pages 97-129.
Rasmussen, C. and Williams, C. (2006). Gaussian Processes for Machine Learning. Adaptive Computation and Machine Learning. MIT Press.
Snelson, E. and Ghahramani, Z. (2006a). Sparse Gaussian processes using pseudo-inputs. In Advances in Neural Information Processing Systems 18, pages 1259-1266. MIT Press.
Snelson, E. and Ghahramani, Z. (2006b). Variable noise and dimensionality reduction for sparse Gaussian processes. In Uncertainty in Artificial Intelligence.
Tipping, M. E. (2001). Sparse Bayesian learning and the relevance vector machine. Journal of Machine Learning Research, 1:211-244.
Titsias, M. K. (2009). Variational learning of inducing variables in sparse Gaussian processes. In Proceedings of the 12th International Workshop on AI and Statistics.
Titsias, M. K. and Lawrence, N. D. (2010). Bayesian Gaussian process latent variable model. Journal of Machine Learning Research - Proceedings Track, 9:844-851.
Vivarelli, F. and Williams, C. K. I. (1998). Discovering hidden features with Gaussian processes regression. In Advances in Neural Information Processing Systems, pages 613-619.
Weinberger, K. Q. and Saul, L. K. (2009). Distance metric learning for large margin nearest neighbor classification. Journal of Machine Learning Research, 10:207-244.
Xing, E., Ng, A., Jordan, M., and Russell, S. (2003). Distance metric learning, with application to clustering with side-information. In Advances in Neural Information Processing Systems.
It is all in the noise: Efficient multi-task Gaussian process inference with structured residuals
Christoph Lippert
Microsoft Research
Los Angeles, USA
[email protected]
Barbara Rakitsch
Machine Learning and Computational Biology
Research Group
Max Planck Institutes Tübingen, Germany
[email protected]
Karsten Borgwardt1,2
Machine Learning and Computational Biology
Research Group
Max Planck Institutes Tübingen, Germany
[email protected]
Oliver Stegle2
European Molecular Biology Laboratory
European Bioinformatics Institute
Cambridge, UK
[email protected]
Abstract
Multi-task prediction methods are widely used to couple regressors or classification models by sharing information across related tasks. We propose a multi-task
Gaussian process approach for modeling both the relatedness between regressors
and the task correlations in the residuals, in order to more accurately identify true
sharing between regressors. The resulting Gaussian model has a covariance term
in form of a sum of Kronecker products, for which efficient parameter inference
and out of sample prediction are feasible. On both synthetic examples and applications to phenotype prediction in genetics, we find substantial benefits of modeling
structured noise compared to established alternatives.
1 Introduction
Multi-task Gaussian process (GP) models are widely used to couple related tasks or functions for
joint regression. This coupling is achieved by designing a structured covariance function, yielding
a prior on vector-valued functions. An important class of structured covariance functions can
be derived from a product of a kernel function c relating the tasks (task covariance) and a kernel
function r relating the samples (sample covariance)
$$\operatorname{cov}(f_{n,t}, f_{n',t'}) = \underbrace{c(t, t')}_{\text{task covariance}} \cdot \underbrace{r(n, n')}_{\text{sample covariance}}, \qquad (1)$$
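For the fully observed case, the covariance in Eqn. (1) becomes an explicit Kronecker product between the task and sample covariance matrices. A small numpy check (with arbitrary example matrices, not from the paper) confirms the entry-wise correspondence under column-wise stacking of the output matrix:

```python
import numpy as np

rng = np.random.default_rng(0)
N, T = 4, 3
A = rng.standard_normal((T, T)); C = A @ A.T   # task covariance
B = rng.standard_normal((N, N)); R = B @ B.T   # sample covariance

# Covariance of vec(Y), with the columns y_1, ..., y_T stacked vertically:
K = np.kron(C, R)

# The entry for the pair (n, t), (n', t'): index of (n, t) in vec(Y) is t*N + n.
n, t, n2, t2 = 1, 2, 3, 0
assert np.isclose(K[t * N + n, t2 * N + n2], C[t, t2] * R[n, n2])
```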
where fn,t are latent function values that induce the outputs yn,t by adding some Gaussian noise.
If the outputs yn,t are fully observed, with one training example per sample and task, the resulting
covariance matrix between the latent factors can be written as a Kronecker product between the
sample covariance matrix and the task covariance matrix (e.g. [1]). More complex multi-task
covariance structures can be derived from generalizations of this product structure, for example via
convolution of multiple features, e.g. [2]. In [3], a parameterized covariance over the tasks is used,
assuming that task-relevant features are observed. The authors of [4] couple the latent features over
the tasks exploiting a dependency in neural population activity over time.
1 Also at Zentrum für Bioinformatik, Eberhard Karls Universität Tübingen, Tübingen, Germany
2 Both authors contributed equally to this work.
Work proposing this type of multi-task GP regression builds on Bonilla and Williams [1], who have
emphasized that the power of Kronecker covariance models for GP models (Eqn. (1)) is linked to
non-zero observation noise. In fact, in the limit of noise-free training observations, the coupling of
tasks for predictions is lost in the predictive model, reducing to ordinary GP regressors for each individual task. Most multi-task GP models build on a simple independent noise model, an assumption that is mainly rooted in computational convenience. For example, [5] show that this assumption renders the evaluation of the model likelihood and parameter gradients tractable, avoiding the explicit
evaluation of the Kronecker covariance.
In this paper, we account for residual noise structure by modeling the signal and the noise covariance matrix as two separate Kronecker products. The structured noise covariance is independent
of the inputs but instead allows to capture residual correlation between tasks due to latent causes;
moreover, the model is simple and extends the widely used product covariance structure. Conceptually related noise models have been proposed in animal breeding [6, 7]. In geostatistics [8], linear
coregionalization models have been introduced to allow for more complicated covariance structures:
the signal covariance matrix is modeled as a sum of Kronecker products and the noise covariance
as a single Kronecker product. In machine learning, the Gaussian process regression network [9] considers an adaptive mixture of GPs to model related tasks. The mixing coefficients are dependent
on the input signal and control the signal and noise correlation simultaneously.
The remainder of this paper is structured as follows. First, we show that unobserved regressors or
causal processes inevitably lead to correlated residuals, motivating the need to account for structured
noise (Section 2). This extension of the multi-task GP model allows for more accurate estimation
of the task-task relationships, thereby improving the performance for out-of-sample predictions. At
the same time, we show how an efficient inference scheme can be derived for this class of models.
The proposed implementation handles closed form marginal likelihoods and parameter gradients for
matrix-variate normal models with a covariance structure represented by the sum of two Kronecker
products. These operations can be implemented at marginal extra computational cost compared to
models that ignore residual task correlations (Section 3). In contrast to existing work extending
Gaussian process multi task models by defining more complex covariance structures [2, 9, 8], our
model utilizes the gradient of the marginal likelihood for parameter estimation and does not require
expected maximization, variational approximation or MCMC sampling. We apply the resulting
model in simulations and real settings, showing that correlated residuals are a concern in important
applications (Section 4).
2 Multi-task Gaussian processes with structured noise
Let $\mathbf{Y} \in \mathbb{R}^{N \times T}$ denote the $N \times T$ output training matrix for $N$ samples and $T$ tasks. The column of this matrix corresponding to a particular task $t$ is denoted as $\mathbf{y}_t$, and $\operatorname{vec}\mathbf{Y} = \left[\mathbf{y}_1^{\top} \dots \mathbf{y}_T^{\top}\right]^{\top}$ denotes the vector obtained by vertical concatenation of all columns of $\mathbf{Y}$. We indicate the dimensions of a matrix with capital subscripts when needed for clarity. A more thorough derivation of all equations can be found in the Supplementary Material.
Multivariate linear model equivalence The multi-task Gaussian process regression model with structured noise can be derived from the perspective of a linear multivariate generative model. For a particular task $t$, the outputs are determined by a linear function of the training inputs across $F$ features $\mathbf{S} = \{\mathbf{s}_1, \dots, \mathbf{s}_F\}$,
$$\mathbf{y}_t = \sum_{f=1}^{F} \mathbf{s}_f w_{f,t} + \boldsymbol{\psi}_t. \qquad (2)$$
Multi-task sharing is achieved by specifying a multivariate normal prior across tasks, both for the regression weights $w_{f,t}$ and the residual noise $\boldsymbol{\psi}_t$:
$$p(\mathbf{W}^{\top}) = \prod_{f=1}^{F} \mathcal{N}\left(\mathbf{w}_f \mid \mathbf{0}, \mathbf{C}_{TT}\right), \qquad p(\boldsymbol{\Psi}^{\top}) = \prod_{n=1}^{N} \mathcal{N}\left(\boldsymbol{\psi}_n \mid \mathbf{0}, \boldsymbol{\Sigma}_{TT}\right).$$
Marginalizing out the weights $\mathbf{W}$ and the residuals $\boldsymbol{\Psi}$ results in a matrix-variate normal model with a sum-of-Kronecker-products covariance structure
$$p(\operatorname{vec}\mathbf{Y} \mid \mathbf{C}, \mathbf{R}, \boldsymbol{\Sigma}) = \mathcal{N}\Big(\operatorname{vec}\mathbf{Y}_{NT} \,\Big|\, \mathbf{0},\, \underbrace{\mathbf{C}_{TT} \otimes \mathbf{R}_{NN}}_{\text{signal covariance}} + \underbrace{\boldsymbol{\Sigma}_{TT} \otimes \mathbf{I}_{NN}}_{\text{noise covariance}}\Big), \qquad (3)$$
where $\mathbf{R}_{NN} = \mathbf{S}\mathbf{S}^{\top}$ is the sample covariance matrix that results from the marginalization over the weights $\mathbf{W}$ in Eqn. (2). In the following, we will refer to a Gaussian process model with this type of sum-of-Kronecker-products covariance structure as GP-kronsum$^1$. As common to any kernel method, the linear covariance $\mathbf{R}$ can be replaced with any positive semi-definite covariance function.
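Sampling from the model in Eqn. (3) is straightforward for small problems by materializing the full covariance. The following sketch uses arbitrary low-rank illustration choices for the task matrices (rank-1 plus diagonal, as used later in the experiments):

```python
import numpy as np

rng = np.random.default_rng(1)
N, T, F = 20, 4, 50
S = rng.standard_normal((N, F))
R = S @ S.T / F                                # linear sample covariance R = SS^T (scaled)
x = rng.standard_normal((T, 1))
C = x @ x.T + 0.5 * np.eye(T)                  # signal task covariance (rank-1 + diagonal)
z = rng.standard_normal((T, 1))
Sigma = z @ z.T + 0.5 * np.eye(T)              # noise task covariance

K = np.kron(C, R) + np.kron(Sigma, np.eye(N))  # Eqn. (3) covariance
L = np.linalg.cholesky(K + 1e-10 * np.eye(N * T))
vec_y = L @ rng.standard_normal(N * T)
Y = vec_y.reshape((N, T), order="F")           # undo column-wise stacking: Y is N x T
```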
Predictive distribution In a GP-kronsum model, predictions for unseen test instances can be carried out within the standard Gaussian process framework [10]:
$$p(\operatorname{vec}\mathbf{Y}^{\star} \mid \mathbf{R}^{\star}, \mathbf{Y}) = \mathcal{N}\left(\operatorname{vec}\mathbf{Y}^{\star} \mid \operatorname{vec}\mathbf{M}^{\star}, \mathbf{V}^{\star}\right). \qquad (4)$$
Here, $\mathbf{M}^{\star}$ denotes the mean prediction and $\mathbf{V}^{\star}$ is the predictive covariance. Analytical expressions for both can be obtained by considering the joint distribution of observed and unobserved outputs and completing the square, yielding:
$$\operatorname{vec}\mathbf{M}^{\star} = \left(\mathbf{C}_{TT} \otimes \mathbf{R}^{\star}_{N^{\star}N}\right)\left(\mathbf{C}_{TT} \otimes \mathbf{R}_{NN} + \boldsymbol{\Sigma}_{TT} \otimes \mathbf{I}_{NN}\right)^{-1} \operatorname{vec}\mathbf{Y}_{NT},$$
$$\mathbf{V}^{\star} = \left(\mathbf{C}_{TT} \otimes \mathbf{R}^{\star}_{N^{\star}N^{\star}}\right) - \left(\mathbf{C}_{TT} \otimes \mathbf{R}^{\star}_{N^{\star}N}\right)\left(\mathbf{C}_{TT} \otimes \mathbf{R}_{NN} + \boldsymbol{\Sigma}_{TT} \otimes \mathbf{I}_{NN}\right)^{-1}\left(\mathbf{C}_{TT} \otimes \mathbf{R}^{\star}_{NN^{\star}}\right),$$
where $\mathbf{R}^{\star}_{N^{\star}N}$ is the covariance matrix between the test and training instances, and $\mathbf{R}^{\star}_{N^{\star}N^{\star}}$ is the covariance matrix between the test samples.
Design of multi-task covariance function In practice, neither the form of $\mathbf{C}$ nor the form of $\boldsymbol{\Sigma}$ is known a priori, and hence both need to be inferred from data by fitting a set of corresponding covariance parameters $\boldsymbol{\theta}_C$ and $\boldsymbol{\theta}_{\Sigma}$. If the number of tasks $T$ is large, learning a free-form covariance matrix is prone to overfitting, as the number of free parameters grows quadratically with $T$. In the experiments, we consider a rank-$K$ approximation of the form $\sum_{k=1}^{K} \mathbf{x}_k \mathbf{x}_k^{\top} + \sigma^2 \mathbf{I}$ for the task matrices.
Task cancellation when the task covariance matrices are equal A notable form of the predictive distribution (4) arises for the special case $\mathbf{C} = \boldsymbol{\Sigma}$, that is, when the task covariance matrices of signal and noise are identical. Similar to previous results for noise-free observations [1], maximizing the marginal likelihood $p(\operatorname{vec}\mathbf{Y} \mid \mathbf{C}, \mathbf{R}, \boldsymbol{\Sigma})$ with respect to the parameters $\boldsymbol{\theta}_R$ becomes independent of $\mathbf{C}$, and the predictions decouple across tasks, i.e. the benefits of joint modeling are lost:
$$\operatorname{vec}\mathbf{M}^{\star} = \operatorname{vec}\left(\mathbf{R}^{\star}_{N^{\star}N}\left(\mathbf{R}_{NN} + \mathbf{I}_{NN}\right)^{-1}\mathbf{Y}_{NT}\right). \qquad (5)$$
In this case, the predictions depend on the sample covariance, but not on the task covariance. Thus,
the GP-kronsum model is most useful when the task covariances on observed features and on noise
reflect two independent sharing structures.
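This cancellation can be verified numerically: with $\mathbf{C} = \boldsymbol{\Sigma}$, the full Kronecker mean predictor of Eqn. (4) coincides with the decoupled per-task predictor of Eqn. (5). A small sketch with arbitrary example covariances:

```python
import numpy as np

rng = np.random.default_rng(2)
N, Ns, T = 15, 5, 3
A = rng.standard_normal((N + Ns, 8))
Rfull = A @ A.T + 1e-3 * np.eye(N + Ns)
R, Rs = Rfull[:N, :N], Rfull[N:, :N]     # train/train and test/train blocks
B = rng.standard_normal((T, T))
C = B @ B.T + np.eye(T)                  # task covariance; noise task covariance set equal
Y = rng.standard_normal((N, T))

# Full Kronecker mean predictor of Eqn. (4) with Sigma = C:
K = np.kron(C, R) + np.kron(C, np.eye(N))
m_full = np.kron(C, Rs) @ np.linalg.solve(K, Y.flatten(order="F"))

# Decoupled predictor of Eqn. (5): the task covariance cancels out.
m_dec = (Rs @ np.linalg.solve(R + np.eye(N), Y)).flatten(order="F")
assert np.allclose(m_full, m_dec)
```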
3 Efficient Inference
In general, efficient inference can be carried out for Gaussian models with a covariance that is the sum of two arbitrary Kronecker products
$$p(\operatorname{vec}\mathbf{Y} \mid \mathbf{C}, \mathbf{R}, \boldsymbol{\Sigma}) = \mathcal{N}\left(\operatorname{vec}\mathbf{Y} \mid \mathbf{0},\, \mathbf{C}_{TT} \otimes \mathbf{R}_{NN} + \boldsymbol{\Sigma}_{TT} \otimes \boldsymbol{\Omega}_{NN}\right). \qquad (6)$$
The key idea is to first consider a suitable data transformation that diagonalizes all covariance matrices, and second to exploit Kronecker tricks wherever possible.
Let $\boldsymbol{\Sigma} = \mathbf{U}_{\Sigma}\mathbf{S}_{\Sigma}\mathbf{U}_{\Sigma}^{\top}$ be the eigenvalue decomposition of $\boldsymbol{\Sigma}$, and analogously for $\boldsymbol{\Omega}$. Borrowing ideas from [11], we can first bring the covariance matrix into a more amenable form by factoring out the structured noise:
1 The covariance is defined as the sum of two Kronecker products and not as the classical Kronecker sum $\mathbf{C} \oplus \mathbf{R} = \mathbf{C} \otimes \mathbf{I} + \mathbf{I} \otimes \mathbf{R}$.
$$\mathbf{K} = \mathbf{C} \otimes \mathbf{R} + \boldsymbol{\Sigma} \otimes \boldsymbol{\Omega} = \left(\mathbf{U}_{\Sigma}\mathbf{S}_{\Sigma}^{\frac{1}{2}} \otimes \mathbf{U}_{\Omega}\mathbf{S}_{\Omega}^{\frac{1}{2}}\right)\left(\tilde{\mathbf{C}} \otimes \tilde{\mathbf{R}} + \mathbf{I} \otimes \mathbf{I}\right)\left(\mathbf{S}_{\Sigma}^{\frac{1}{2}}\mathbf{U}_{\Sigma}^{\top} \otimes \mathbf{S}_{\Omega}^{\frac{1}{2}}\mathbf{U}_{\Omega}^{\top}\right), \qquad (7)$$
where $\tilde{\mathbf{C}} = \mathbf{S}_{\Sigma}^{-\frac{1}{2}}\mathbf{U}_{\Sigma}^{\top}\mathbf{C}\mathbf{U}_{\Sigma}\mathbf{S}_{\Sigma}^{-\frac{1}{2}}$ and $\tilde{\mathbf{R}} = \mathbf{S}_{\Omega}^{-\frac{1}{2}}\mathbf{U}_{\Omega}^{\top}\mathbf{R}\mathbf{U}_{\Omega}\mathbf{S}_{\Omega}^{-\frac{1}{2}}$. In the following, we use the definition $\tilde{\mathbf{K}} = \tilde{\mathbf{C}} \otimes \tilde{\mathbf{R}} + \mathbf{I} \otimes \mathbf{I}$ for this transformed covariance.
Efficient log likelihood evaluation. The log model likelihood (Eqn. (6)) can be expressed in terms of the transformed covariance $\tilde{\mathbf{K}}$:
$$\mathcal{L} = -\frac{NT}{2}\ln(2\pi) - \frac{1}{2}\ln\left|\mathbf{K}\right| - \frac{1}{2}\operatorname{vec}\mathbf{Y}^{\top}\mathbf{K}^{-1}\operatorname{vec}\mathbf{Y} = -\frac{NT}{2}\ln(2\pi) - \frac{1}{2}\ln\left|\tilde{\mathbf{K}}\right| - \frac{1}{2}\ln\left|\mathbf{S}_{\Sigma} \otimes \mathbf{S}_{\Omega}\right| - \frac{1}{2}\operatorname{vec}\tilde{\mathbf{Y}}^{\top}\tilde{\mathbf{K}}^{-1}\operatorname{vec}\tilde{\mathbf{Y}}, \qquad (8)$$
where $\operatorname{vec}\tilde{\mathbf{Y}} = \left(\mathbf{S}_{\Sigma}^{-\frac{1}{2}}\mathbf{U}_{\Sigma}^{\top} \otimes \mathbf{S}_{\Omega}^{-\frac{1}{2}}\mathbf{U}_{\Omega}^{\top}\right)\operatorname{vec}\mathbf{Y} = \operatorname{vec}\left(\mathbf{S}_{\Omega}^{-\frac{1}{2}}\mathbf{U}_{\Omega}^{\top}\mathbf{Y}\mathbf{U}_{\Sigma}\mathbf{S}_{\Sigma}^{-\frac{1}{2}}\right)$ is the projected output.
Except for the additional term $\ln\left|\mathbf{S}_{\Sigma} \otimes \mathbf{S}_{\Omega}\right|$, resulting from the transformation, the log likelihood has exactly the same form as for multi-task GP regression with iid noise [1, 5]. Using an analogous derivation, we can now efficiently evaluate the log likelihood:
$$\mathcal{L} = -\frac{NT}{2}\ln(2\pi) - \frac{1}{2}\ln\left|\mathbf{S}_{\tilde{C}} \otimes \mathbf{S}_{\tilde{R}} + \mathbf{I} \otimes \mathbf{I}\right| - \frac{N}{2}\ln\left|\mathbf{S}_{\Sigma}\right| - \frac{T}{2}\ln\left|\mathbf{S}_{\Omega}\right| - \frac{1}{2}\operatorname{vec}\left(\mathbf{U}_{\tilde{R}}^{\top}\tilde{\mathbf{Y}}\mathbf{U}_{\tilde{C}}\right)^{\top}\left(\mathbf{S}_{\tilde{C}} \otimes \mathbf{S}_{\tilde{R}} + \mathbf{I} \otimes \mathbf{I}\right)^{-1}\operatorname{vec}\left(\mathbf{U}_{\tilde{R}}^{\top}\tilde{\mathbf{Y}}\mathbf{U}_{\tilde{C}}\right), \qquad (9)$$
where we have defined the eigenvalue decomposition of $\tilde{\mathbf{C}}$ as $\mathbf{U}_{\tilde{C}}\mathbf{S}_{\tilde{C}}\mathbf{U}_{\tilde{C}}^{\top}$, and similarly for $\tilde{\mathbf{R}}$.
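The identities above can be checked numerically. The sketch below evaluates the log likelihood via Eqn. (9), using only eigendecompositions of the small $T \times T$ and $N \times N$ matrices, and compares it against a naive evaluation of Eqn. (6) on the full $NT \times NT$ covariance (example covariances are arbitrary well-conditioned matrices):

```python
import numpy as np

def kronsum_loglik(Y, C, R, Sigma, Omega):
    """GP-kronsum log likelihood via Eqn. (9): O(N^3 + T^3) instead of O(N^3 T^3)."""
    N, T = Y.shape
    s_S, U_S = np.linalg.eigh(Sigma)
    s_O, U_O = np.linalg.eigh(Omega)
    W_S = U_S / np.sqrt(s_S)              # U_Sigma S_Sigma^{-1/2}
    W_O = U_O / np.sqrt(s_O)              # U_Omega S_Omega^{-1/2}
    s_C, U_C = np.linalg.eigh(W_S.T @ C @ W_S)   # eigendecomposition of C-tilde
    s_R, U_R = np.linalg.eigh(W_O.T @ R @ W_O)   # eigendecomposition of R-tilde
    Z = U_R.T @ (W_O.T @ Y @ W_S) @ U_C   # U_R-tilde^T Y-tilde U_C-tilde
    d = np.outer(s_R, s_C) + 1.0          # eigenvalues of K-tilde, indexed (n, t)
    logdet = np.log(d).sum() + N * np.log(s_S).sum() + T * np.log(s_O).sum()
    return -0.5 * (N * T * np.log(2 * np.pi) + logdet + (Z**2 / d).sum())

def naive_loglik(Y, C, R, Sigma, Omega):
    """Reference evaluation on the explicit NT x NT covariance of Eqn. (6)."""
    N, T = Y.shape
    K = np.kron(C, R) + np.kron(Sigma, Omega)
    y = Y.flatten(order="F")
    _, logdet = np.linalg.slogdet(K)
    return -0.5 * (N * T * np.log(2 * np.pi) + logdet + y @ np.linalg.solve(K, y))

rng = np.random.default_rng(3)
def psd(m):
    A = rng.standard_normal((m, m))
    return A @ A.T + m * np.eye(m)

N, T = 12, 4
args = (rng.standard_normal((N, T)), psd(T), psd(N), psd(T), psd(N))
assert np.isclose(kronsum_loglik(*args), naive_loglik(*args))
```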
Efficient gradient evaluation The derivative of the log marginal likelihood with respect to a covariance parameter $\theta_R$ can be expressed as:
$$\frac{\partial}{\partial \theta_R}\mathcal{L} = -\frac{1}{2}\frac{\partial}{\partial \theta_R}\ln\left|\tilde{\mathbf{K}}\right| - \frac{1}{2}\frac{\partial}{\partial \theta_R}\operatorname{vec}\tilde{\mathbf{Y}}^{\top}\tilde{\mathbf{K}}^{-1}\operatorname{vec}\tilde{\mathbf{Y}}$$
$$= -\frac{1}{2}\operatorname{diag}\left(\left(\mathbf{S}_{\tilde{C}} \otimes \mathbf{S}_{\tilde{R}} + \mathbf{I} \otimes \mathbf{I}\right)^{-1}\right)^{\top}\operatorname{diag}\left(\mathbf{S}_{\tilde{C}} \otimes \mathbf{U}_{\tilde{R}}^{\top}\frac{\partial \tilde{\mathbf{R}}}{\partial \theta_R}\mathbf{U}_{\tilde{R}}\right) + \frac{1}{2}\operatorname{vec}\left(\hat{\mathbf{Y}}\right)^{\top}\operatorname{vec}\left(\mathbf{U}_{\tilde{R}}^{\top}\frac{\partial \tilde{\mathbf{R}}}{\partial \theta_R}\mathbf{U}_{\tilde{R}}\hat{\mathbf{Y}}\mathbf{S}_{\tilde{C}}\right), \qquad (10)$$
where $\operatorname{vec}\left(\hat{\mathbf{Y}}\right) = \left(\mathbf{S}_{\tilde{C}} \otimes \mathbf{S}_{\tilde{R}} + \mathbf{I} \otimes \mathbf{I}\right)^{-1}\operatorname{vec}\left(\mathbf{U}_{\tilde{R}}^{\top}\tilde{\mathbf{Y}}\mathbf{U}_{\tilde{C}}\right)$. Analogous gradients can be derived for
the task covariance parameters $\boldsymbol{\theta}_C$ and $\boldsymbol{\theta}_{\Sigma}$. The proposed speed-ups also apply to the special cases where $\boldsymbol{\Sigma}$ is modeled as being diagonal, as in [1], or for optimizing the parameters of a kernel function.
Since the sum of Kronecker products generally cannot be written as a single Kronecker product, the
speed-ups cannot be generalized to larger sums of Kronecker products.
Efficient prediction Similarly, the mean predictor (Eqn. (4)) can be efficiently evaluated as
$$\operatorname{vec}\mathbf{M}^{\star} = \operatorname{vec}\left(\mathbf{R}^{\star}\,\mathbf{U}_{\Omega}\mathbf{S}_{\Omega}^{-\frac{1}{2}}\mathbf{U}_{\tilde{R}}\,\hat{\mathbf{Y}}\,\mathbf{U}_{\tilde{C}}^{\top}\mathbf{S}_{\Sigma}^{-\frac{1}{2}}\mathbf{U}_{\Sigma}^{\top}\,\mathbf{C}\right). \qquad (11)$$
Gradient-based parameter inference The closed-form expression of the marginal likelihood (Eqn. (9)) and the gradients with respect to the covariance parameters (Eqn. (10)) allow for the use of gradient-based parameter inference. In the experiments, we employ a variant of L-BFGS-B [12].
Computational cost. While the naive approach has a runtime of $O(N^3 T^3)$ and a memory requirement of $O(N^2 T^2)$, as it explicitly computes and inverts the Kronecker products, our reformulation reduces the runtime to $O(N^3 + T^3)$ and the memory requirement to $O(N^2 + T^2)$, making it applicable to large numbers of samples and tasks. The empirical runtime savings over the naive approach are explored in Section 4.1.
Figure 1: Runtime comparison on synthetic data. We compare our efficient GP-kronsum implementation (left) versus its naive counterpart (right). Shown is the runtime in seconds on a logarithmic scale as a function of the sample size and the number of tasks. The optimization was stopped prematurely if it did not complete after $10^4$ seconds. (a) Efficient Implementation. (b) Naive Implementation.
4 Experiments
We investigated the performance of the proposed GP-kronsum model in both simulated datasets and
response prediction problems in statistical genetics. To investigate the benefits of structured residual
covariances, we compared the GP-kronsum model to a Gaussian process (GP-kronprod) with iid
noise [5] as well as independent modeling of tasks using a standard Gaussian process (GP-single),
and joint modeling of all tasks using a standard Gaussian on a pooled dataset, naively merging data
from all tasks (GP-pool).
The predictive performance of individual models was assessed through 10-fold cross-validation.
For each fold, model parameters were fit on the training data only. To avoid local optima during
training, parameter fitting was carried out using five random restarts of the parameters on 90% of
the training instances. The remaining 10% of the training instances were used for out of sample
selection using the maximum log likelihood as criterion. Unless stated otherwise, in the multi-task models the relationship between tasks was parameterized as $\mathbf{x}\mathbf{x}^{\top} + \sigma^2\mathbf{I}$, the sum of a rank-1 matrix and a constant diagonal component. Both parameters, $\mathbf{x}$ and $\sigma^2$, were learnt by optimizing the marginal likelihood. Finally, we measured the predictive performance of the different methods via the squared Pearson correlation coefficient $r^2$ between the true and the predicted output, averaged over tasks. The squared correlation coefficient is commonly used in statistical genetics to evaluate the performance of different predictors [13].
4.1 Simulations
First, we considered simulated experiments to explore the runtime behavior and to find out if there
are settings in which GP-kronsum performs better than existing methods.
Runtime evaluation. As a first experiment, we examined the runtime behavior of our method as
a function of the number of samples and of the number of tasks. Both parameters were varied in
the range {16, 32, 64, 128, 256}. The simulated dataset was drawn from the GP-kronsum model
(Eqn. (3)) using a linear kernel for the sample covariance matrix R and rank-1 matrices for the task
covariances $\mathbf{C}$ and $\boldsymbol{\Sigma}$. The runtime of this model was assessed for a single likelihood optimization on an AMD Opteron 6378 processor using a single core (2.4 GHz, 2048 KB cache, 512 GB memory) and compared to a naive implementation. The optimization was stopped prematurely if it did not converge within $10^4$ seconds.
In the experiments, we used a standard linear kernel on the features of the samples as sample covariance while learning the task covariances. This modeling choice results in a steeper runtime increase
with the number of tasks, due to the increasing number of model parameters to be estimated. Figure 1 demonstrates the significant speed-up. While our algorithm can handle 256 samples/256 tasks
with ease, the naive implementation failed to process more than 32 samples/32 tasks.
Unobserved causal process induces structured noise A common source of structured residuals
are unobserved causal processes that are not captured via the inputs. To explore this setting, we
generated simulated outputs from a sum of two different processes. For one of the processes, we
assumed that the causal features Xobs were observed, whereas for the second process the causal
features Xhidden were hidden and independent of the observed measurements. Both processes were
simulated to have a linear effect on the output. The effect from the observed features was again
divided up into an independent effect, which is task-specific, and a common effect, which, up to
rescaling rcommon , is shared over all tasks:
$$\mathbf{Y}_{\mathrm{common}} = \mathbf{X}_{\mathrm{obs}}\mathbf{W}_{\mathrm{common}}, \quad \mathbf{W}_{\mathrm{common}} = \mathbf{r}_{\mathrm{common}} \otimes \mathbf{w}_{\mathrm{common}}, \quad \mathbf{r}_{\mathrm{common}} \sim \mathcal{N}(\mathbf{0}, \mathbf{I}), \quad \mathbf{w}_{\mathrm{common}} \sim \mathcal{N}(\mathbf{0}, \mathbf{I}).$$
The trade-off parameter $\alpha_{\mathrm{common}}$ determines the extent of relatedness between tasks:
$$\mathbf{Y}_{\mathrm{obs}} = \alpha_{\mathrm{common}}\mathbf{Y}_{\mathrm{common}} + (1 - \alpha_{\mathrm{common}})\mathbf{Y}_{\mathrm{ind}}.$$
The effect of the hidden features was simulated analogously. A second trade-off parameter $\alpha_{\mathrm{hidden}}$ was introduced, controlling the ratio between the observed and hidden effect:
$$\mathbf{Y} = \alpha_{\mathrm{signal}}\left[(1 - \alpha_{\mathrm{hidden}})\mathbf{Y}_{\mathrm{obs}} + \alpha_{\mathrm{hidden}}\mathbf{Y}_{\mathrm{hidden}}\right] + (1 - \alpha_{\mathrm{signal}})\mathbf{Y}_{\mathrm{noise}},$$
where $\mathbf{Y}_{\mathrm{noise}}$ is Gaussian observation noise and $\alpha_{\mathrm{signal}}$ is a third trade-off parameter defining the ratio between signal and noise.
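The generative scheme above can be sketched in a few lines. In this sketch, the common effect is realized as an outer product of a per-sample projection and a per-task rescaling vector, which is one reading of the reconstructed equations; all default sizes and trade-off values are illustration choices:

```python
import numpy as np

def simulate(N=200, T=10, F=200, a_signal=0.5, a_hidden=0.5, a_common=0.5, seed=0):
    """Sketch of the simulation: observed + hidden linear processes plus noise."""
    rng = np.random.default_rng(seed)

    def linear_process(X):
        # common effect: one weight vector, rescaled per task
        Yc = np.outer(X @ rng.standard_normal(X.shape[1]), rng.standard_normal(T))
        # independent effect: task-specific weight vectors
        Yi = X @ rng.standard_normal((X.shape[1], T))
        return a_common * Yc + (1.0 - a_common) * Yi

    Y_obs = linear_process(rng.standard_normal((N, F)))   # observed features
    Y_hid = linear_process(rng.standard_normal((N, F)))   # hidden features
    Y_noise = rng.standard_normal((N, T))
    return (a_signal * ((1.0 - a_hidden) * Y_obs + a_hidden * Y_hid)
            + (1.0 - a_signal) * Y_noise)

Y = simulate()
```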
To investigate the impact of the different trade-off parameters, we considered a series of datasets varying one of the parameters while keeping the others fixed. We varied $\alpha_{\mathrm{signal}}$ in the range $\{0.1, 0.3, 0.5, 0.7, 0.9, 1.0\}$, $\alpha_{\mathrm{common}} \in \{0.0, 0.1, 0.3, 0.5, 0.7, 0.9, 1.0\}$ and $\alpha_{\mathrm{hidden}} \in \{0.0, 0.1, 0.3, 0.5, 0.7, 0.9, 1.0\}$, with default values marked in bold. Note that the best possible explained variance for the default setting is 45%, as the causal signal is split up equally between the observed and the hidden process. For all simulation experiments, we created datasets with 200 samples and 10 tasks. The number of observed features was set to 200, as well as the number of hidden features. For each such simulation setting, we created 30 datasets.
First, we considered the impact of variation in the signal strength $\alpha_{\mathrm{signal}}$ (Figure 2a), where the overall signal was divided up equally between the observed and hidden signal. Both GP-single and GP-kronsum performed better as the overall signal strength increased. The performance of GP-kronsum was superior, as the model can exploit the relatedness between the different tasks.
Second, we explored the ability of the different methods to cope with an underlying hidden process (Figure 2b). In the absence of a hidden process ($\alpha_{\mathrm{hidden}} = 0$), GP-kronprod and GP-kronsum had very similar performance, as both methods leverage the shared signal of the observed process, thereby outperforming the single-task GPs. However, as the magnitude of the hidden signal increases, GP-kronprod falsely explains the task correlation entirely by the covariance term representing the observed process, which leads to a loss of predictive power.
Last, we examined the ability of the different methods to exploit the relatedness between the tasks (Figure 2c). Since GP-single assumes independent tasks, the model performed very similarly across the full range of common signal. GP-kronprod suffered from the same limitations as previously described, because the correlation between tasks in the hidden process increases synchronously with the correlation in the observed process as $\alpha_{\mathrm{common}}$ increases. In contrast, GP-kronsum could take advantage of the shared component between the tasks, as knowledge is transferred between them.
GP-pool was consistently outperformed by all competitors as two of its main assumptions are heavily violated: samples of different tasks do not share the same signal and the residuals are neither
independent of each other, nor do they have the same noise level.
In summary, the proposed model is robust to a range of different settings and clearly outperforms its
competitors when the tasks are related to each other and not all causal processes are observed.
4.2 Applications to phenotype prediction
As a real-world application we considered phenotype prediction in statistical genetics. The aim of these experiments was to demonstrate the relevance of unobserved causes in real-world prediction problems, and hence that they warrant greater attention.
Gene expression prediction in yeast. We considered gene expression levels from a yeast genetics study [14]. The dataset comprised expression levels of 5,493 genes and 2,956 SNPs (features), measured for 109 yeast crosses. Expression levels for each cross were measured in two conditions (glucose and ethanol as carbon source), yielding a total of 218 samples. In this experiment, we treated the condition information as a hidden factor instead of regressing it out, which is analogous to the hidden process in the simulation experiments. The goal of this experiment was to investigate how the alternative methods deal with and correct for this hidden covariate. We normalized all features and all tasks to zero mean and unit variance. Subsequently, we filtered out all genes that were not consistently expressed in at least 90% of the samples (z-score cutoff 1.5). We also
Figure 2: Evaluation of alternative methods for different simulation settings. From left to right: (a) varying total signal strength; (b) variable impact of the hidden signal; (c) different strengths of relatedness (shared signal) between the tasks. In each simulation setting, all other parameters were kept constant at their default values, marked with the yellow star symbol.
Figure 3: Fitted task covariance matrices for gene expression levels in yeast. From left to right: (a) empirical covariance matrix of the gene expression levels; (b) signal covariance matrix learnt by GP-kronsum; (c) noise covariance matrix learnt by GP-kronsum. The ordering of the tasks was determined using hierarchical clustering on the empirical covariance matrix.
discarded genes with low signal (< 10% of the variance) or close to noise-free (> 90% of the variance), reducing the number of genes to 123, which we considered as tasks in our experiment. The signal strength was estimated by a univariate GP model. We used a linear kernel calculated on the SNP features as the sample covariance.
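The filtering and kernel steps above can be sketched as follows. This is a hypothetical reconstruction, not the authors' code: the function names, the interpretation of "consistently expressed", and the kernel normalization are our assumptions.

```python
import numpy as np

def filter_genes(Y, z_cutoff=1.5, min_frac=0.90, signal_frac=None,
                 low=0.10, high=0.90):
    # Y: (samples x genes), already z-scored per gene.
    # Keep genes whose |z| stays below the cutoff in at least min_frac
    # of the samples ("consistently expressed"; our interpretation).
    keep = (np.abs(Y) < z_cutoff).mean(axis=0) >= min_frac
    # Optionally also drop genes with too little signal or almost no
    # noise, given per-gene signal-variance fractions from a univariate GP.
    if signal_frac is not None:
        keep &= (signal_frac > low) & (signal_frac < high)
    return keep

def linear_kernel(X):
    # Sample covariance from SNP features: R = X X^T / D is one common
    # normalization choice for a linear kernel.
    Xn = (X - X.mean(axis=0)) / X.std(axis=0)
    return Xn @ Xn.T / Xn.shape[1]
```

The same linear-kernel construction is reused for the Arabidopsis experiment later in the section.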
Figure 3 shows the empirical covariance and the task covariances learnt by GP-kronsum. Both learnt covariances are highly structured, demonstrating that the assumption of iid noise in the GP-kronprod model is violated in this dataset. While the signal task covariance matrix reflects genetic signals that are shared between the gene expression levels, the noise covariance matrix mainly captures the mean shift between the two conditions in which the gene expression levels were measured (Figure 4). To investigate the robustness of the reconstructed latent factor, we repeated the training 10 times. The mean latent factors and their standard errors were 0.2103 ± 0.0088 (averaged over factors, over the 10 best runs selected by out-of-sample likelihood), demonstrating the robustness of the inference.
When considering alternative methods for out-of-sample prediction, the proposed Kronecker sum model (r² (GP-kronsum) = 0.3322 ± 0.0014) performed significantly better than previous approaches (r² (GP-pool) = 0.0673 ± 0.0004, r² (GP-single) = 0.2594 ± 0.0011, r² (GP-kronprod) = 0.1820 ± 0.0020). The results are averages over 10 runs, and ± denotes the corresponding standard errors.
Multi-phenotype prediction in Arabidopsis thaliana. As a second dataset, we considered a genome-wide association study in Arabidopsis thaliana [15] to assess the prediction of developmental phenotypes from genomic data. This dataset consisted of 147 samples and 216,130 single nucleotide polymorphisms (SNPs, here used as features). As different tasks, we considered the phenotypes flowering period duration, life cycle period, maturation period, and reproduction period. To avoid outliers and issues due to non-Gaussianity, we preprocessed the phenotypic data by first converting it to ranks and squashing the ranks through the inverse cumulative Gaussian distribution.
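This rank-and-squash preprocessing can be sketched in a few lines. A minimal sketch: the exact rank convention and tie handling used by the authors are not specified, so this version breaks ties arbitrarily.

```python
import numpy as np
from statistics import NormalDist

def gaussianize(y):
    # Convert values to ranks 1..n (ties broken arbitrarily here),
    # map the ranks into the open interval (0, 1), then apply the
    # inverse Gaussian CDF ("squashing").
    y = np.asarray(y, dtype=float)
    ranks = np.empty(len(y))
    ranks[np.argsort(y)] = np.arange(1, len(y) + 1)
    u = ranks / (len(y) + 1.0)
    inv_cdf = NormalDist().inv_cdf
    return np.array([inv_cdf(p) for p in u])
```

The transform is monotone, so the ordering of the samples is preserved while the marginal distribution becomes approximately standard normal.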
The SNPs in Arabidopsis thaliana are binary and we discarded features with a frequency of less
Figure 4: Correlation between the mean difference of the two conditions and the latent factors on the yeast dataset. Shown is the strength of the latent factor of the (a) signal and (b) noise task covariance matrix as a function of the mean difference between the two environmental conditions (glucose vs. ethanol). Each dot corresponds to one gene expression level.
than 10% in all samples, resulting in 176,436 SNPs. Subsequently, we normalized the features to zero mean and unit variance. Again, we used a linear kernel on the SNPs as the sample covariance. Since the causal processes in Arabidopsis thaliana are complex, we allowed the ranks of the signal and noise task covariance matrices to vary between 1 and 3. The appropriate rank complexity was selected on a 10% hold-out set from the training fold: we considered the average squared correlation coefficient on the hold-out fraction of the training data to select the model used for prediction on the test set. Notably, for GP-kronprod the selected task complexity was rank(C) = 3, whereas GP-kronsum selected a simpler structure for the signal task covariance (rank(C) = 1) and chose a more complex noise covariance, rank(Σ) = 2.
The cross-validation prediction performance of each model is shown in Table 1. For reproduction period, GP-single is outperformed by all other methods. For the phenotype life cycle period, the noise estimates of the univariate GP model were close to zero, and hence all methods, except for GP-pool, performed equally well, since the measurements of the other phenotypes do not provide additional information. For maturation period, GP-kronsum and GP-kronprod showed improved performance compared to GP-single and GP-pool. For flowering period duration, GP-kronsum outperformed its competitors.
              Flowering period    Life cycle         Maturation         Reproduction
              duration            period             period             period
GP-pool       0.0502 ± 0.0025     0.1038 ± 0.0034    0.0460 ± 0.0024    0.0478 ± 0.0013
GP-single     0.0385 ± 0.0017     0.3500 ± 0.0069    0.1612 ± 0.0027    0.0272 ± 0.0024
GP-kronprod   0.0846 ± 0.0021     0.3417 ± 0.0062    0.1878 ± 0.0042    0.0492 ± 0.0032
GP-kronsum    0.1127 ± 0.0049     0.3485 ± 0.0068    0.1918 ± 0.0041    0.0501 ± 0.0033

Table 1: Predictive performance of the different methods on the Arabidopsis thaliana dataset. Shown is the squared correlation coefficient and its standard error (measured by repeating 10-fold cross-validation 10 times).
5 Discussion and conclusions
Multi-task Gaussian process models are a widely used tool in many application domains, ranging from the prediction of user preferences in collaborative filtering to the prediction of phenotypes in computational biology. Many of these prediction tasks are complex, and important causal features may remain unobserved or unmodeled. Nevertheless, most approaches in common usage assume that the observation noise is independent between tasks. We here propose the GP-kronsum model, which makes it efficient to model data where the noise is dependent between tasks by building on a sum-of-Kronecker-products covariance. In applications to statistical genetics, we have demonstrated (1) the advantages of the dependent noise model over an independent noise model, as well as (2) the feasibility of applying the model to larger datasets via the efficient learning algorithm.
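The sum-of-Kronecker-products covariance can be written down directly. The naive construction below is only an illustrative sketch for small problems (the paper's contribution is precisely to avoid forming K explicitly); the symbols C, R, Σ, Ω follow the signal-task/sample and noise-task/sample reading of the model, which is our assumption.

```python
import numpy as np

def kronsum_covariance(C, R, Sigma, Omega):
    # Naive Kronecker-sum covariance  K = C (x) R + Sigma (x) Omega.
    # C, Sigma: task covariances (signal / structured noise);
    # R, Omega: sample covariances. Cost is O((NT)^2) memory, so this
    # is only feasible for small N*T.
    return np.kron(C, R) + np.kron(Sigma, Omega)

def gp_log_marginal(y, K):
    # Log marginal likelihood of vec(Y) under N(0, K), via Cholesky.
    n = len(y)
    L = np.linalg.cholesky(K + 1e-9 * np.eye(n))  # small jitter
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    return (-0.5 * y @ alpha
            - np.log(np.diag(L)).sum()
            - 0.5 * n * np.log(2 * np.pi))
```

An efficient implementation would instead exploit the eigenstructure of the Kronecker factors, as the paper's learning algorithm does.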
Acknowledgement
We thank Francesco Paolo Casale for helpful discussions. OS was supported by a Marie Curie FP7 fellowship. KB was supported by the Alfried Krupp Prize for Young University Teachers of the Alfried Krupp von Bohlen und Halbach-Stiftung.
8
References
[1] Edwin V. Bonilla, Kian Ming Adam Chai, and Christopher K. I. Williams. Multi-task Gaussian process prediction. In NIPS, 2007.
[2] Mauricio A. Álvarez and Neil D. Lawrence. Sparse convolved Gaussian processes for multi-output regression. In NIPS, pages 57–64, 2008.
[3] Edwin V. Bonilla, Felix V. Agakov, and Christopher K. I. Williams. Kernel multi-task learning using task-specific features. In AISTATS, 2007.
[4] Byron M. Yu, John P. Cunningham, Gopal Santhanam, Stephen I. Ryu, Krishna V. Shenoy, and Maneesh Sahani. Gaussian-process factor analysis for low-dimensional single-trial analysis of neural population activity. In NIPS, pages 1881–1888, 2008.
[5] Oliver Stegle, Christoph Lippert, Joris M. Mooij, Neil D. Lawrence, and Karsten M. Borgwardt. Efficient inference in matrix-variate Gaussian models with iid observation noise. In NIPS, pages 630–638, 2011.
[6] Karin Meyer. Estimating variances and covariances for multivariate animal models by restricted maximum likelihood. Genetics Selection Evolution, 23(1):67–83, 1991.
[7] V. Ducrocq and H. Chapuis. Generalizing the use of the canonical transformation for the solution of multivariate mixed model equations. Genetics Selection Evolution, 29(2):205–224, 1997.
[8] Hao Zhang. Maximum-likelihood estimation for multivariate spatial linear coregionalization models. Environmetrics, 18(2):125–139, 2007.
[9] Andrew Gordon Wilson, David A. Knowles, and Zoubin Ghahramani. Gaussian process regression networks. In ICML, 2012.
[10] Carl Edward Rasmussen and Christopher K. I. Williams. Gaussian Processes for Machine Learning (Adaptive Computation and Machine Learning). The MIT Press, 2005.
[11] Alfredo A. Kalaitzis and Neil D. Lawrence. Residual components analysis. In ICML, 2012.
[12] Ciyou Zhu, Richard H. Byrd, Peihuang Lu, and Jorge Nocedal. Algorithm 778: L-BFGS-B: Fortran subroutines for large-scale bound-constrained optimization. ACM Trans. Math. Softw., 23(4):550–560, December 1997.
[13] Ulrike Ober, Julien F. Ayroles, Eric A. Stone, Stephen Richards, et al. Using whole-genome sequence data to predict quantitative trait phenotypes in Drosophila melanogaster. PLoS Genetics, 8(5):e1002685+, May 2012.
[14] Erin N. Smith and Leonid Kruglyak. Gene–environment interaction in yeast gene expression. PLoS Biology, 6(4):e83, 2008.
[15] S. Atwell, Y. S. Huang, B. J. Vilhjalmsson, Willems, et al. Genome-wide association study of 107 phenotypes in Arabidopsis thaliana inbred lines. Nature, 465(7298):627–631, Jun 2010.
Christian Darken and John Moody
Yale Computer Science, P.O. Box 2158, New Haven, CT 06520
Email: [email protected]
Abstract
Stochastic gradient descent is a general algorithm which includes LMS,
on-line backpropagation, and adaptive k-means clustering as special cases.
The standard choices of the learning rate 1] (both adaptive and fixed functions of time) often perform quite poorly. In contrast, our recently proposed class of "search then converge" learning rate schedules (Darken and
Moody, 1990) display the theoretically optimal asymptotic convergence rate
and a superior ability to escape from poor local minima. However, the user
is responsible for setting a key parameter. We propose here a new methodology for creating the first completely automatic adaptive learning rates
which achieve the optimal rate of convergence.
Introduction
The stochastic gradient descent algorithm is

    ΔW(t) = −η ∇_W E(W(t), X(t)),

where η is the learning rate, t is the "time", and X(t) is the independent random exemplar chosen at time t. The purpose of the algorithm is to find a parameter vector W which minimizes a function G(W), which for learning algorithms has the form E_X[E(W, X)], i.e. G is the average of an objective function over the exemplars, labeled E and X respectively. We can rewrite ΔW(t) in terms of G as

    ΔW(t) = −η [∇_W G(W(t)) + ξ(t, W(t))],

where the ξ are independent zero-mean noises. Stochastic gradient descent may be preferable to deterministic gradient descent when the exemplar set is large or increasing in size over time, making the average over exemplars expensive to compute.
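The update above can be coded generically; this is an illustrative sketch (function names and the noise-free LMS gradient used in the usage example are ours, not the paper's).

```python
import numpy as np

def sgd(grad_E, W0, exemplars, eta):
    # Stochastic gradient descent: W <- W - eta(t) * grad_E(W, X(t)).
    # grad_E(W, X) is the noisy per-exemplar gradient; eta(t) is the
    # learning-rate schedule, an arbitrary function of the step index t.
    W = np.array(W0, dtype=float)
    for t, X in enumerate(exemplars, start=1):
        W -= eta(t) * grad_E(W, X)
    return W
```

Any schedule can be plugged in as `eta`, e.g. a constant, c/t, or the search-then-converge schedule discussed in this paper.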
Additionally, the noise in the gradient can help the system escape from local minima. The fundamental algorithmic issue is: how do we best adjust η as a function of time and the exemplars?
State of the Art Schedules
The usual non-adaptive choices of η (i.e. η depends on the time only) often yield poor performance. The simple expedient of taking η to be constant results in persistent residual fluctuations whose magnitude, and the resulting degradation of system performance, are difficult to anticipate (see fig. 3). Taking a smaller constant η reduces the magnitude of the fluctuations, but seriously slows convergence and causes problems with metastable local minima. Taking η(t) = c/t, the common choice in the stochastic approximation literature of the last forty years, typically results in slow convergence to bad solutions for small c, and parameter blow-up for small t if c is large (Darken and Moody, 1990).
The available adaptive schedules (i.e. η depends on the time and on previous exemplars) have problems as well. Classical methods which involve estimating the hessian of G are often unusable because they require O(N²) storage and computation for each update, which is too expensive for large N (many-parameter systems, e.g. large neural nets). Methods such as those of Fabian (1960) and Kesten (1958) require the user to specify an entire function and thus are not practical methods as they stand. The delta-bar-delta learning rule, which was developed in the context of deterministic gradient descent (Jacobs, 1988), is often useful in locating the general vicinity of a solution in the stochastic setting. However, it hovers about the solution without converging (see fig. 4). A schedule developed by Urasiev is proven to converge in principle, but in practice it converges slowly if at all (see fig. 5). The literature is widely scattered over time and disciplines; however, to our knowledge no published O(N) technique attains the optimal convergence speed.
Search-Then-Converge Schedules
Our recently proposed solution is the "search then converge" learning rate schedule. η is chosen to be a fixed function of time such as the following:

    η(t) = η₀ · (1 + (c/η₀)(t/τ)) / (1 + (c/η₀)(t/τ) + τ t²/τ²)

This function is approximately constant with value η₀ at times small compared to τ (the "search phase"). At times large compared with τ (the "converge phase"), the function decreases as c/t. See for example the η vs. time curves for figs. 6 and 7. This schedule has demonstrated a dramatic improvement in convergence speed and quality of solution as compared to the traditional fixed learning rate schedule for k-means clustering (Darken and Moody, 1990). However, these benefits apply to supervised learning as well. Compare the error curve of fig. 3 with those of figs. 6 and 7.
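A direct implementation of this schedule, as reconstructed here, makes the two regimes easy to check numerically:

```python
def stc_eta(t, eta0, c, tau):
    # Search-then-converge schedule: approximately eta0 for t << tau
    # (search phase), approximately c/t for t >> tau (converge phase).
    x = (c / eta0) * (t / tau)
    return eta0 * (1.0 + x) / (1.0 + x + tau * t**2 / tau**2)
```

For t much larger than τ, the numerator behaves like c·t/τ and the denominator like t²/τ, so η(t) ≈ c/t as stated in the text.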
This schedule yields optimally fast asymptotic convergence if c > c*, where c* = 1/(2α) and α is the smallest eigenvalue of the hessian of the function G (defined above)
Figure 1: Two contrasting parameter vector trajectories ("little drift" vs. "much drift") illustrating the notion of drift.
at the pertinent minimum (Fabian, 1968) (Major and Revesz, 1973) (Goldstein, 1987). The penalty for choosing c < c* is that the ratio of the excess error given c too small to the excess error with c large enough gets arbitrarily large as training time grows, i.e.

    lim_{t→∞} E_{c<c*} / E_{c>c*} = ∞,

where E is the excess error above that at the minimum. The same holds for the ratio of the two distances to the location of the minimum in parameter space.
While the above schedule works well, its asymptotic performance depends upon the user's choice of c. Since neither η₀ nor τ affects the asymptotic behavior of the system, we will discuss their selection elsewhere. Setting c > c*, however, is vital. Can such a c be determined automatically? Directly estimating α with conventional methods (by calculating the smallest eigenvalue of the hessian at our current estimate of the minimum) is too computationally demanding. This would take at least O(N²) storage and computation time for each estimate, and would have to be done repeatedly (N is the number of parameters). We are investigating the possibility of a low-complexity direct estimation of α by performing a second optimization. However, here we take a more unusual approach: we shall determine whether c is large enough by observing the trajectory of the parameter (or "weight") vector.

On-line Determination of Whether c < c*
We propose that excessive correlation in the parameter change vectors (i.e. "drift") indicates that c is too small (see fig. 1). We define the drift as

    D(t) = Σ_k d_k²(t),

    d_k(t) = √T ⟨δ_k(t)⟩_T / { ⟨(δ_k(t) − ⟨δ_k(t)⟩_T)²⟩_T }^{1/2},

where δ_k(t) is the change in the kth component of the parameter vector at time t, and the angled brackets denote an average over T parameter changes. We take T = at, where a ≪ 1. Notice that the numerator is the average parameter step, while the denominator is the standard deviation of the steps. As a point of reference, if the δ_k were independent normal random variables, then the d_k would be "T-distributed" with T degrees of freedom, i.e. approximately unit-variance normals for moderate to large T. We find that the δ_k may also be taken to be the components of the noisy gradient, to the same effect.

Figure 2: (Left) An Ornstein-Uhlenbeck process in one dimension. This process is zero-mean, Gaussian, and stationary (in fact strongly ergodic). It may be thought of as a random walk with a restoring force towards zero. (Right) Measurement of the drift for the runs c = 0.1c* and c = 10c*, which are discussed in figs. 7 and 8 below.
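The drift statistic is straightforward to compute from a window of parameter changes; a minimal sketch under the definition reconstructed above (the array layout is our convention):

```python
import numpy as np

def drift(deltas):
    # deltas: (T x N) array; row t is the parameter-change vector
    # (or noisy gradient) at step t within the window of length T.
    # d_k = sqrt(T) * mean_k / std_k;  D = sum_k d_k^2.
    T = deltas.shape[0]
    d = np.sqrt(T) * deltas.mean(axis=0) / deltas.std(axis=0)
    return float((d ** 2).sum())
```

For uncorrelated steps, each d_k is roughly a unit-variance normal, so D stays near the number of parameters; a persistent drift in any component makes D blow up.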
Asymptotically, we will take the learning rate to go as c/t. Choosing c too small results in a slow drift of the parameter vector towards the solution in a relatively linear trajectory. When c > c*, however, the trajectory is much more jagged. Compare figs. 7 and 8. More precisely, we find that D(t) blows up like a power of t when c is too small, but remains finite otherwise. Our experiments confirm this (for an example, see fig. 2). This provides us with a signal to use in future adaptive learning rate schemes for ensuring that c is large enough.
The bold-printed statement above implies that an arbitrarily small change in c which moves it to the opposite side of c* has dramatic consequences for the behavior of the drift. The following rough argument outlines how one might prove this statement, focusing on the source of this interesting discontinuity in behavior. We simplify the argument by taking the δ_k's to be gradient measurements, as mentioned above. We consider a one-dimensional problem, and modify d₁ to be √T ⟨δ₁⟩_T (i.e. we ignore the denominator). Then, since T = at as stated above, we approximate

    d₁ = √T ⟨δ₁(t)⟩_T ≈ ⟨√t δ₁(t)⟩_T = ⟨√t [∇G(t) + ξ(t)]⟩_T.

Recall the definitions of G and ξ from the introduction above. As t → ∞, ∇G(t) → K[W(t) − W₀] for the appropriate K, by the Taylor expansion of G around W₀, the location of the local minimum. Thus

    lim_{t→∞} d₁ ≈ ⟨K √t [W(t) − W₀]⟩_T + ⟨√t ξ(t)⟩_T.

Define X(t) = √t [W(t) − W₀]. Now, according to (Kushner, 1978), X(eᵗ) converges
Figure 3: The constant η schedule (η = 0.1), commonly used in training backpropagation networks, does not converge in the stochastic setting.
in distribution to the well-known Ornstein-Uhlenbeck process (fig. 2) when c > c*. By extending this work, one can show that X(t) converges in distribution to a deterministic power law, tᵖ with p > 0, when c < c*. Since the ξ's are independent and have uniformly bounded variances for smooth objective functions, the second term converges in distribution to a finite-variance random variable. The first term converges to a finite-variance random variable if c > c*, but to a power of t if c < c*.
Qualitative Behavior of Schedules
We compare several fixed and adaptive learning rate schedules on a toy stochastic problem. Notice the difficulties that are encountered by some schedules, even on a fairly easy problem, due to noise in the gradient. The problem is learning a two-parameter adaline in the presence of independent uniformly distributed [−0.5, 0.5] noise on the exemplar labels. Exemplars were independently uniformly distributed on [−1, 1]. The objective function has a condition number of 10, indicating the presence of the narrow ravine indicated by the elliptical isopleths in the figures. All runs start from the same parameter (weight) vector and receive the same sequence of exemplars. The misadjustment is defined as the Euclidean distance in parameter space to the minimum. Multiples of this quantity bound the usual sum-of-squares error measure above and below, i.e. sum-of-squares error is roughly proportional to the misadjustment. Results are presented in figs. 3-8.
Conclusions
Our empirical tests agree with our theoretical expectations that drift can be used to
determine whether the crucial parameter c is large enough. Using this statistic, it
will be possible to produce the first fully automatic learning rates which converge at
optimal speed. We are currently investigating candidate schedules which we expect
to be useful for large-scale LMS, backpropagation, and clustering applications.
Figure 4: Stochastic delta-bar-delta. Delta-bar-delta (Jacobs, 1988) was apparently developed for use with deterministic gradient descent. It is also useful for stochastic problems with little noise, which is however not the case for this test problem. In this example η increases from its initial value, and then stabilizes. We use the algorithm exactly as it appears in Jacobs' paper, with noisy gradients substituted for the true gradient (which is unavailable in the stochastic setting). Parameters used were η₀ = 0.1, θ = 0.3, κ = 0.01, and φ = 0.1.
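For reference, one step of delta-bar-delta with per-weight rates can be sketched as follows. This follows our reading of Jacobs' rule (linear increase by κ when the gradient agrees in sign with its exponential average, multiplicative decrease by 1−φ on disagreement, averaging constant θ); it is a sketch, not Jacobs' code.

```python
import numpy as np

def delta_bar_delta_step(w, eta, dbar, grad, kappa=0.01, phi=0.1, theta=0.3):
    # w, eta, dbar, grad: arrays of equal shape (one rate per weight).
    agree = dbar * grad > 0
    disagree = dbar * grad < 0
    eta = eta + kappa * agree                        # linear increase
    eta = eta * np.where(disagree, 1.0 - phi, 1.0)   # exponential decrease
    w = w - eta * grad                               # gradient step
    dbar = (1.0 - theta) * grad + theta * dbar       # exponential average
    return w, eta, dbar
```

In the stochastic setting of this paper, `grad` would be the noisy per-exemplar gradient.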
Figure 5: Urasiev's technique (Urasiev, 1988) varies η erratically over several orders of magnitude. The large fluctuations apparently cause η to completely stop changing after a while, due to finite-precision effects. Parameters used were D = 0.2, R = 2, and U = 1.
Figure 6: Fixed search-then-converge, c = c*. The fixed search-then-converge schedule with c = c* gives excellent performance. However, if c* is not known, you may get performance as in the next two examples; an adaptive technique is called for.
Figure 7: Fixed search-then-converge, c = 10c*. Note that taking c > c* slows convergence a bit as compared to the c = c* example in fig. 6, though it could aid escape from bad local minima in a nonlinear problem.
Figure 8: Fixed search-then-converge, c = 0.1c*. This run illustrates the penalty to be paid if c < c*.
References
C. Darken and J. Moody. (1990) Note on learning rate schedules for stochastic optimization. Advances in Neural Information Processing Systems 3. 832-838.
V. Fabian. (1960) Stochastic approximation methods. Czechoslovak Math J. 10 (85): 123-159.
V. Fabian. (1968) On asymptotic normality in stochastic approximation. Ann. Math. Stat. 39(4):1327-1332.
L. Goldstein. (1987) Mean square optimality in the continuous time Robbins-Monro procedure. Technical Report DRB-306. Department of Mathematics, University of Southern California.
R. Jacobs. (1988) Increased rates of convergence through learning rate adaptation. Neural Networks. 1:295-307.
H. Kesten. (1958) Accelerated stochastic approximation. Annals of Mathematical Statistics. 29:41-59.
H. Kushner. (1978) Rates of convergence for sequential Monte Carlo optimization methods. SIAM J. Control and Optimization. 16:150-168.
P. Major and P. Revesz. (1973) A limit theorem for the Robbins-Monro approximation. Z. Wahrscheinlichkeitstheorie verw. Geb. 27:79-86.
S. Urasiev. (1988) Adaptive stochastic quasigradient procedures. In Numerical Techniques for Stochastic Optimization. Y. Ermoliev and R. Wets, Eds. Springer-Verlag.
Spike train entropy-rate estimation using hierarchical Dirichlet process priors
Karin Knudson
Department of Mathematics
[email protected]
Jonathan W. Pillow
Center for Perceptual Systems
Departments of Psychology & Neuroscience
The University of Texas at Austin
[email protected]
Abstract
Entropy rate quantifies the amount of disorder in a stochastic process. For spiking
neurons, the entropy rate places an upper bound on the rate at which the spike train
can convey stimulus information, and a large literature has focused on the problem of estimating entropy rate from spike train data. Here we present Bayes least
squares and empirical Bayesian entropy rate estimators for binary spike trains using hierarchical Dirichlet process (HDP) priors. Our estimator leverages the fact
that the entropy rate of an ergodic Markov Chain with known transition probabilities can be calculated analytically, and many stochastic processes that are
non-Markovian can still be well approximated by Markov processes of sufficient
depth. Choosing an appropriate depth of Markov model presents challenges due
to possibly long time dependencies and short data sequences: a deeper model can
better account for long time dependencies, but is more difficult to infer from limited data. Our approach mitigates this difficulty by using a hierarchical prior to
share statistical power across Markov chains of different depths. We present both
a fully Bayesian and empirical Bayes entropy rate estimator based on this model,
and demonstrate their performance on simulated and real neural spike train data.
1 Introduction
The problem of characterizing the statistical properties of a spiking neuron is quite general, but two
interesting questions one might ask are: (1) what kind of time dependencies are present? and (2) how
much information is the neuron transmitting? With regard to the second question, information theory
provides quantifications of the amount of information transmitted by a signal without reference to
assumptions about how the information is represented or used. The entropy rate is of interest as a
measure of uncertainty per unit time, an upper bound on the rate of information transmission, and
an intermediate step in computing mutual information rate between stimulus and neural response.
Unfortunately, accurate entropy rate estimation is difficult, and estimates from limited data are often severely biased. We present a Bayesian method for estimating entropy rates from binary data
that uses hierarchical Dirichlet process priors (HDP) to reduce this bias. Our method proceeds by
modeling the source of the data as a Markov chain, and then using the fact that the entropy rate of
a Markov chain is a deterministic function of its transition probabilities. Fitting the model yields
parameters relevant to both questions (1) and (2) above: we obtain both an approximation of the
underlying stochastic process as a Markov chain, and an estimate of the entropy rate of the process.
For binary data, the HDP reduces to a hierarchy of beta priors, where the prior probability over g, the
probability of the next symbol given a long history, is a beta distribution centered on the probability
of that symbol given a truncated, one-symbol-shorter, history. The posterior over symbols given
a certain history is thus "smoothed" by the probability over symbols given a shorter history. This
smoothing is a key feature of the model.
The structure of the paper is as follows. In Section 2, we present definitions and challenges involved
in entropy rate estimation, and discuss existing estimators. In Section 3, we discuss Markov models
and their relationship to entropy rate. In Sections 4 and 5, we present two Bayesian estimates of
entropy rate using the HDP prior, one involving a direct calculation of the posterior mean transition
probabilities of a Markov model, the other using Markov Chain Monte Carlo methods to sample
from the posterior distribution of the entropy rate. In Section 6 we compare the HDP entropy rate
estimators to existing entropy rate estimators including the context tree weighting entropy rate estimator from [1], the string-parsing method from [2], and finite-length block entropy rate estimators
that make use of the entropy estimator of Nemenman, Bialek and Shafee [3] and Miller and Madow
[4]. We evaluate the results for simulated and real neural data.
2 Entropy Rate Estimation
In information theory, the entropy of a random variable is a measure of the variable's average unpredictability. The entropy of a discrete random variable X with possible values $\{x_1, ..., x_n\}$ is

H(X) = -\sum_{i=1}^{n} p(x_i) \log p(x_i)    (1)
Entropy can be measured in either nats or bits, depending on whether we use base 2 or e for the
logarithm. Here, all logarithms discussed will be base 2, and all entropies will be given in bits.
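Equation (1), measured in bits, translates directly into code; a minimal sketch (the function name is ours):

```python
import math

def entropy_bits(p):
    """Entropy, in bits, of a discrete distribution given as a list of probabilities."""
    # Terms with p(x_i) = 0 contribute nothing, by the convention 0 log 0 = 0.
    return -sum(pi * math.log2(pi) for pi in p if pi > 0)
```

For example, a fair coin gives entropy_bits([0.5, 0.5]) = 1 bit, and a deterministic variable gives 0 bits.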
While entropy is a property of a random variable, entropy rate is a property of a stochastic process,
such as a time series, and quantifies the amount of uncertainty per symbol. The neural and simulated
data considered here will be binary sequences representing the spike train of a neuron, where each
symbol represents either the presence of a spike in a bin (1) or the absence of a spike (0). We view
the data as a sample path from an underlying stochastic process. To evaluate the average uncertainty
of each new symbol (0 or 1) given the previous symbols - or the amount of new information per
symbol - we would like to compute the entropy rate of the process.
For a stochastic process $\{X_i\}_{i=1}^{\infty}$, the entropy of the random vector $(X_1, ..., X_k)$ grows with k; we are interested in how it grows. If we define the block entropy $H_k$ to be the entropy of the distribution of length-k sequences of symbols, $H_k = H(X_{i+1}, ..., X_{i+k})$, then the entropy rate of a stochastic process $\{X_i\}_{i=1}^{\infty}$ is defined by

h = \lim_{k \to \infty} \frac{1}{k} H_k    (2)
when the limit exists (which, for stationary stochastic processes, it must). There are two other definitions for entropy rate, which are equivalent to the first for stationary processes:

h = \lim_{k \to \infty} (H_{k+1} - H_k)    (3)

h = \lim_{k \to \infty} H(X_{i+1} | X_i, X_{i-1}, ..., X_{i-k})    (4)
We now briefly review existing entropy rate estimators, to which we will compare our results.
2.1 Block Entropy Rate Estimators
Since much work has been done to accurately estimate entropy from data, Equations (2) and (3)
suggest a simple entropy rate estimator, which consists of choosing first a block size k and then
a suitable entropy estimator with which to estimate $H_k$. A simple such estimator is the "plugin"
entropy estimator, which approximates the probability of each length-k block (x1 , ..., xk ) by the
proportion of total length-k blocks observed that are equal to (x1 , ..., xk ). For binary data there are
$2^k$ possible length-k blocks. When N denotes the data length and $c_i$ the number of observations of
each block in the data, we have:
\hat{H}_{plugin} = -\sum_{i=1}^{2^k} \frac{c_i}{N} \log \frac{c_i}{N}    (5)
from which we can immediately estimate the entropy rate with $\hat{h}_{plugin,k} = \hat{H}_{plugin}/k$, for some appropriately chosen k (the subject of "appropriate choice" will be taken up in more detail later).
We would expect that using better block entropy estimators would yield better entropy rate estimators, and so we also consider two other block based entropy rate estimators. The first uses the
Bayesian entropy estimator $\hat{H}_{NSB}$ from Nemenman, Shafee and Bialek [3], which gives a Bayesian
least squares estimate for entropy given a mixture-of-Dirichlet prior. The second uses the Miller and
Madow estimator [4], which gives a first-order correction to the (often significantly biased) plugin
entropy estimator of Equation 5:
\hat{H}_{MM} = -\sum_{i=1}^{2^k} \frac{c_i}{N} \log \frac{c_i}{N} + \frac{A - 1}{2N} \log(e)    (6)
where A is the size of the alphabet of symbols (A = 2 for the binary data sequences presently considered). For a given k, we obtain entropy rate estimators $\hat{h}_{NSB,k} = \hat{H}_{NSB}/k$ and $\hat{h}_{MM,k} = \hat{H}_{MM}/k$ by applying the entropy estimators from [3] and [4] respectively to the empirical distribution of the length-k blocks.
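The block-based estimators of Equations (5) and (6) are simple to implement for binary sequences; a sketch (function names are ours; the NSB estimator of [3] is omitted here because it requires numerical integration over a Dirichlet mixture):

```python
import math
from collections import Counter

def block_counts(bits, k):
    """Counts of each observed length-k block in a binary sequence."""
    return Counter(tuple(bits[i:i + k]) for i in range(len(bits) - k + 1))

def h_plugin(bits, k):
    """Plugin block entropy rate estimate: Eq. (5) divided by k."""
    counts = block_counts(bits, k)
    n = sum(counts.values())
    H = -sum((c / n) * math.log2(c / n) for c in counts.values())
    return H / k

def h_miller_madow(bits, k, A=2):
    """Miller-Madow corrected block entropy rate estimate: Eq. (6) divided by k."""
    counts = block_counts(bits, k)
    n = sum(counts.values())
    H = -sum((c / n) * math.log2(c / n) for c in counts.values())
    # First-order bias correction, (A - 1)/(2N) in nats, converted to bits.
    return (H + (A - 1) / (2 * n) * math.log2(math.e)) / k
```

The Miller-Madow correction is always positive, partially compensating for the downward bias of the plugin estimate on limited data.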
While we can improve the accuracy of these block entropy rate estimates by choosing a better
entropy estimator, choosing the block size k remains a challenge. If we choose k to be small,
we miss long time dependencies in the data and tend to overestimate the entropy; intuitively, the
time series will seem more unpredictable than it actually is, because we are ignoring long-time
dependencies. On the other hand, as we consider larger k, limited data leads to underestimates of
the entropy rate. See the plots of $h_{plugin}$, $h_{NSB}$, and $h_{MM}$ in Figure 2d for an instance of this effect
of block size on entropy rate estimates. We might hope that in between the overestimates of entropy
rate for short blocks and the underestimates for longer blocks, there is some "plateau" region
where the entropy rate stays relatively constant with respect to block size, which we could use as a
heuristic to select the proper block length [1]. Unfortunately, the block entropy rate at this plateau
may still be biased, and for data sequences that are short with respect to their time dependencies,
there may be no discernible plateau at all ([1], Figure 1).
2.2 Other Entropy Rate Estimators
Not all existing techniques for entropy rate estimation involve an explicit choice of block length.
The estimator from [2], for example, parses the full string of symbols in the data by starting from
the first symbol, and sequentially removing and counting as a "phrase" the shortest substring that
has not yet appeared. When M is the number of distinct phrases counted in this way, we obtain the
estimator $h_{LZ} = \frac{M}{N} \log N$, free from any explicit block length parameters.
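The phrase-parsing estimator described above can be sketched as follows (function name ours):

```python
import math

def h_lz(bits):
    """String-parsing entropy rate estimate h_LZ = (M/N) log2 N.

    Starting from the first symbol, repeatedly remove and count as a
    "phrase" the shortest substring not yet seen.
    """
    s = "".join(str(b) for b in bits)
    phrases = set()
    m, i, n = 0, 0, len(s)
    while i < n:
        j = i + 1
        # Extend the candidate phrase until it is novel (or the data ends).
        while j <= n and s[i:j] in phrases:
            j += 1
        phrases.add(s[i:j])
        m += 1
        i = j
    return (m / n) * math.log2(n)
```

For example, "01011" parses into the phrases 0, 1, 01, 1, giving M = 4 distinct parsing steps over N = 5 symbols.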
A fixed block length model like the ones described in the previous section uses the entropy of the distribution of all the blocks of some length - e.g. all the blocks in the terminal nodes of a context tree
like the one in Figure 1a. In the context tree weighting (CTW) framework of [1], the authors instead
use a minimum descriptive length criterion to weight different tree topologies, which have within
the same tree terminal nodes corresponding to blocks of different lengths. They use this weighting to generate Monte Carlo samples and approximate the integral $\int h(\theta)\, p(\theta | T, \mathrm{data})\, p(T | \mathrm{data})\, d\theta\, dT$, in which T represents the tree topology, and $\theta$ represents transition probabilities associated with the
terminal nodes of the tree.
In our approach, the HDP prior combined with a Markov model of our data will be a key tool in
overcoming some of the difficulties of choosing a block-length appropriately for entropy rate estimation. It will allow us to choose a block length that is large enough to capture possibly important
long time dependencies, while easing the difficulty of estimating the properties of these long time
dependencies from short data.
Figure 1: A depth-3 hierarchical Dirichlet prior for binary data
3 Markov Models
The usefulness of approximating our data source with a Markov model comes from (1) the flexibility
of Markov models including their ability to well approximate even many processes that are not truly
Markovian, and (2) the fact that for a Markov chain with known transition probabilities the entropy
rate need not be estimated but is in fact a deterministic function of the transition probabilities.
A Markov chain is a sequence of random variables that has the property that the probability
of the next state depends only on the present state, and not on any previous states. That is,
$P(X_{i+1} | X_i, ..., X_1) = P(X_{i+1} | X_i)$. Note that this property does not mean that for a binary sequence the probability of each 0 or 1 depends only on the previous 0 or 1, because we consider the state variables to be strings of symbols of length k rather than individual 0s and 1s. Thus we will discuss "depth-k" Markov models, where the probability of the next state depends on only the previous k
symbols, or what we will call the length-k context of the symbol. With a binary alphabet, there are
$2^k$ states the chain can take, and from each state s, transitions are possible only to two other states.
(So, for example, 110 can transition to state 101 or state 100, but not to any other state). Because
only two transitions are possible from each state, the transition probability distribution from each s
is completely specified by only one parameter, which we denote gs , the probability of observing a 1
given the context s.
The entropy rate of an ergodic Markov chain with finite state set A is given by:

h = \sum_{s \in A} p(s) H(x|s),    (7)
where p(s) is the stationary probability associated with state s, and H(x|s) is the entropy of the
distribution of possible transitions from state s. The vector of stationary state probabilities p(s) for
all s is computed as a left eigenvector of the transition matrix T:
p(s) T = p(s), \qquad \sum_{s} p(s) = 1    (8)
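Equations (7) and (8) translate directly into code. The sketch below (names ours) computes the stationary distribution by power iteration rather than an explicit eigendecomposition, exploiting the fact that a context s can only transition to s[1:] + '0' or s[1:] + '1' (contexts written earliest symbol first); it assumes g contains all 2^k contexts and that the chain is ergodic, so the iteration converges:

```python
import math

def markov_entropy_rate(g):
    """Entropy rate (Eqs. 7-8) of a depth-k binary Markov chain.

    g maps each length-k context string s to g_s = p(next symbol is 1 | s).
    """
    states = sorted(g)
    p = {s: 1.0 / len(states) for s in states}
    for _ in range(10000):  # power iteration toward the left eigenvector
        q = {s: 0.0 for s in states}
        for s in states:
            q[s[1:] + "1"] += p[s] * g[s]
            q[s[1:] + "0"] += p[s] * (1.0 - g[s])
        converged = max(abs(q[s] - p[s]) for s in states) < 1e-12
        p = q
        if converged:
            break

    def hbern(x):
        # Entropy in bits of a Bernoulli(x) transition distribution.
        return 0.0 if x in (0.0, 1.0) else -x * math.log2(x) - (1 - x) * math.log2(1 - x)

    return sum(p[s] * hbern(g[s]) for s in states)
```

With all transition probabilities equal to 0.5 this returns 1 bit per symbol, the maximum for binary data, and a deterministic chain returns 0.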
Since each row of the transition matrix T contains only two non-zero entries, $g_s$ and $1 - g_s$, p(s) can be calculated relatively quickly. With equations 7 and 8, h can be calculated analytically from the vector of all $2^k$ transition probabilities $\{g_s\}$. A Bayesian estimator of entropy rate based on a
Markov model of order k is given by
\hat{h}_{Bayes} = \int h(g)\, p(g | \mathrm{data})\, dg    (9)
where $g = \{g_s : |s| = k\}$, h is the deterministic function of g given by Equations 7 and 8, and $p(g | \mathrm{data}) \propto p(\mathrm{data} | g)\, p(g)$ given some appropriate prior over g.
Modeling a time series as a Markov chain requires a choice of the depth of that chain, so we have
not avoided the depth selection problem yet. What will actually mitigate the difficulty here is the
use of hierarchical Dirichlet process priors.
4 Hierarchical Dirichlet Process priors
We describe a hierarchical beta prior, a special case of the hierarchical Dirichlet process (HDP),
which was presented in [5] and applied to problems of natural language processing in [6] and [7].
The true entropy rate $h = \lim_{k \to \infty} H_k / k$ captures time dependencies of infinite depth. Therefore, to calculate the estimate $\hat{h}_{Bayes}$ in Equation 9 we would like to choose some large k. However, it is difficult to estimate transition probabilities for long blocks with short data sequences, so choosing
large k may lead to inaccurate posterior estimates for the transition probabilities g. In particular,
shorter data sequences may not even have observations of all possible symbol sequences of a given
length.
This motivates our use of hierarchical priors as follows. Suppose we have a data sequence in which
the subsequence 0011 is never observed. Then we would not expect to have a very good estimate
for g0011 ; however, we could improve this by using the assumption that, a priori, g0011 should be
similar to g011 . That is, the probability of observing a 1 after the context sequence 0011 should be
similar to that of seeing a 1 after 011, since it might be reasonable to assume that context symbols
from the more distant past matter less. Thus we choose for our prior:
g_s | g_{s'} \sim \mathrm{Beta}(\alpha_{|s|} g_{s'}, \alpha_{|s|} (1 - g_{s'}))    (10)

where s' denotes the context s with the earliest symbol removed. This choice gives the prior distribution of $g_s$ mean $g_{s'}$, as desired. We continue constructing the prior with $g_{s'} | g_{s''} \sim \mathrm{Beta}(\alpha_{|s'|} g_{s''}, \alpha_{|s'|} (1 - g_{s''}))$ and so on until $g_{[]} \sim \mathrm{Beta}(\alpha_0 p_\emptyset, \alpha_0 (1 - p_\emptyset))$, where $g_{[]}$ is the probability of a spike given no context information and $p_\emptyset$ is a hyperparameter reflecting our prior belief about the probability of a spike. This hierarchy gives our prior the tree structure shown in Figure 1. A priori, the distribution of each transition probability is centered around the transition
probability from a one-symbol-shorter block of symbols. As long as the assumption that more distant contextual symbols matter less actually holds (at least to some degree), this structure allows
the sharing of statistical information across different contextual depths. We can obtain reasonable
estimates for the transition probabilities from long blocks of symbols, even from data that is so short
that we may have few (or no) observations of each of these long blocks of symbols.
We could use any number of distributions with mean $g_{s'}$ to center the prior distribution of $g_s$ at $g_{s'}$; we use Beta distributions because they are conjugate to the likelihood. The $\alpha_{|s|}$ are concentration parameters which control how concentrated the distribution is about its mean, and can also be estimated from the data. We assume that there is one value of $\alpha$ for each level in the hierarchy, but one could also fix $\alpha$ to be constant throughout all levels, or let it vary within each level.
This hierarchy of beta distributions is a special case of the hierarchical Dirichlet process. A Dirichlet process (DP) is a stochastic process whose sample paths are each probability distributions. Formally, if G is a finite measure on a set S, then $X \sim DP(\alpha, G)$ if for any finite measurable partition of the sample space $(A_1, ..., A_n)$ we have that $X(A_1), ..., X(A_n) \sim \mathrm{Dirichlet}(\alpha G(A_1), ..., \alpha G(A_n))$. Thus for a partition into only two sets, the Dirichlet process reduces to a beta distribution, which is why when we specialize the HDP to binary data, we obtain a hierarchical beta distribution. In [5] the authors present a hierarchy of DPs where the base measure for each DP is again a DP. In our case, for example, we have $G_{011} = \{g_{011}, 1 - g_{011}\} \sim DP(\alpha_3, G_{11})$, or more generally, $G_s \sim DP(\alpha_{|s|}, G_{s'})$.
5 Empirical Bayesian Estimator
One can generate a sequence from an HDP by drawing each subsequent symbol from the transition
probability distribution associated with its context, which is given recursively by [6] :
p(1|s) = \begin{cases} \dfrac{c_{s1}}{\alpha_{|s|} + c_s} + \dfrac{\alpha_{|s|}}{\alpha_{|s|} + c_s}\, p(1|s') & \text{if } s \neq \emptyset \\ \dfrac{c_1}{\alpha_0 + N} + \dfrac{\alpha_0}{\alpha_0 + N}\, p_\emptyset & \text{if } s = \emptyset \end{cases}    (11)
where N is the length of the data string, $p_\emptyset$ is a hyperparameter representing the a priori probability of observing a 1 given no contextual information, $c_{s1}$ is the number of times the symbol sequence s followed by a 1 was observed, and $c_s$ is the number of times the symbol sequence s was observed.
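Equation (11) is a straightforward recursion on progressively shorter contexts. A sketch (function and variable names are ours; contexts are strings written earliest symbol first, and counts maps each substring of the data to its number of occurrences, so that c_{s1} is counts[s + '1']):

```python
def substring_counts(bits, max_len):
    """Occurrence counts of every substring of length 1..max_len."""
    s = "".join(str(b) for b in bits)
    counts = {}
    for L in range(1, max_len + 1):
        for i in range(len(s) - L + 1):
            counts[s[i:i + L]] = counts.get(s[i:i + L], 0) + 1
    return counts

def p1_given_s(s, counts, alphas, alpha0, p_empty, N):
    """Posterior predictive p(1 | context s), Eq. (11), recursing on s' = s[1:]."""
    if s == "":
        return (counts.get("1", 0) + alpha0 * p_empty) / (alpha0 + N)
    a = alphas[len(s)]  # concentration parameter at this context depth
    cs = counts.get(s, 0)
    cs1 = counts.get(s + "1", 0)
    return (cs1 + a * p1_given_s(s[1:], counts, alphas, alpha0, p_empty, N)) / (a + cs)
```

Since c_{s1} <= c_s and the recursive term lies in [0, 1], the result is always a valid probability; evaluating it for every length-k context yields the posterior predictive transition probabilities described next.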
We can calculate the posterior predictive distribution $\hat{g}_{pr}$, which is specified by the $2^k$ values $\{g_s = p(1|s) : |s| = k\}$, by using counts c from the data and performing the above recursive calculation to estimate $g_s$ for each of the $2^k$ states s. Given the estimated Markov transition probabilities $\hat{g}_{pr}$, we then have an empirical Bayesian entropy rate estimate via Equations 7 and 8. We denote this estimator $h_{empHDP}$. Note that while $\hat{g}_{pr}$ is the posterior mean of the transition probabilities, the entropy rate estimator $h_{empHDP}$ is no longer a fully Bayesian estimate, and is not equivalent to the $\hat{h}_{Bayes}$ of equation 9. We thus lose some clarity and the ability to easily compute Bayesian confidence intervals. However, we gain a good deal of computational efficiency because calculating $h_{empHDP}$ from $\hat{g}_{pr}$ involves only one eigenvector computation, instead of the many needed for the MC approximation to the integral in Equation 9. We present a fully Bayesian estimate next.
6 Fully Bayesian Estimator
Here we return to the Bayes least squares estimator $\hat{h}_{Bayes}$ of Equation 9. The integral is not analytically tractable, but we can approximate it using Markov Chain Monte Carlo techniques. We use Gibbs sampling to simulate $N_{MC}$ samples $g^{(i)} \sim g | \mathrm{data}$ from the posterior distribution and then calculate $h^{(i)}$ from each $g^{(i)}$ via Equations 7 and 8 to obtain the Bayesian estimate:
h_{HDP} = \frac{1}{N_{MC}} \sum_{i=1}^{N_{MC}} h^{(i)}    (12)
To perform the Gibbs sampling, we need the posterior conditional probabilities of each gs . Because
the parameters of the model have the structure of a tree, each $g_s$ for |s| < k is conditionally independent from all but its immediate ancestor in the tree, $g_{s'}$, and its two descendants, $g_{0s}$ and $g_{1s}$. We have:

p(g_s | g_{s'}, g_{0s}, g_{1s}, \alpha_{|s|}, \alpha_{|s|+1}) \propto \mathrm{Beta}(g_s; \alpha_{|s|} g_{s'}, \alpha_{|s|}(1 - g_{s'}))\, \mathrm{Beta}(g_{0s}; \alpha_{|s|+1} g_s, \alpha_{|s|+1}(1 - g_s))\, \mathrm{Beta}(g_{1s}; \alpha_{|s|+1} g_s, \alpha_{|s|+1}(1 - g_s))    (13)
and we can compute these probabilities on a discrete grid since they are each one dimensional, then sample the posterior $g_s$ via this grid. We used a uniform grid of 100 points on the interval [0,1] for our computation. For the transition probabilities from the bottom level of the tree $\{g_s : |s| = k\}$, the conjugacy of the beta distributions with the binomial likelihood function gives the posterior conditional of $g_s$ a recognizable form: $p(g_s | g_{s'}, \mathrm{data}) = \mathrm{Beta}(\alpha_k g_{s'} + c_{s1}, \alpha_k (1 - g_{s'}) + c_{s0})$.
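The grid-based Gibbs step for an internal node described above can be sketched as follows (names ours; for clarity we pass in a single uniform variate u rather than an RNG, evaluate the log of the Equation (13) conditional on a uniform grid of interior points, then normalize and invert the discrete CDF):

```python
import math

def beta_logpdf(x, a, b):
    """Log density of a Beta(a, b) distribution at x in (0, 1)."""
    return (math.lgamma(a + b) - math.lgamma(a) - math.lgamma(b)
            + (a - 1.0) * math.log(x) + (b - 1.0) * math.log(1.0 - x))

def gibbs_step_gs(u, g_parent, g_child0, g_child1, a, a_next, npts=100):
    """One Gibbs draw of an internal g_s from the Eq. (13) conditional.

    u is a uniform(0, 1) variate; a and a_next stand in for alpha_{|s|}
    and alpha_{|s|+1}.
    """
    grid = [(i + 0.5) / npts for i in range(npts)]
    logw = [beta_logpdf(x, a * g_parent, a * (1.0 - g_parent))
            + beta_logpdf(g_child0, a_next * x, a_next * (1.0 - x))
            + beta_logpdf(g_child1, a_next * x, a_next * (1.0 - x))
            for x in grid]
    m = max(logw)  # subtract the max for numerical stability
    w = [math.exp(v - m) for v in logw]
    target = u * sum(w)
    acc = 0.0
    for x, wi in zip(grid, w):
        acc += wi
        if acc >= target:
            return x
    return grid[-1]
```

The grid keeps every evaluation one dimensional, so one full Gibbs sweep touches each internal node once with cost linear in the grid size.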
In the HDP model we may treat each $\alpha$ as a fixed hyperparameter, but it is also straightforward to set a prior over each $\alpha$ and then sample $\alpha$ along with the other model parameters with each pass of the Gibbs sampler. The full posterior conditional for $\alpha_i$ with a uniform prior is (from Bayes' theorem):
p(\alpha_i | \{g_s, g_{0s}, g_{1s} : |s| = i - 1\}) \propto \prod_{\{s : |s| = i-1\}} \frac{(g_{0s} g_{1s})^{\alpha_i g_s - 1} \, ((1 - g_{0s})(1 - g_{1s}))^{\alpha_i (1 - g_s) - 1}}{B(\alpha_i g_s, \alpha_i (1 - g_s))^2}    (14)

We sampled $\alpha$ by computing the probabilities above on a grid of values spanning the range [1, 2000]. This upper bound on $\alpha$ is rather arbitrary, but we verified that increasing the range for $\alpha$ had little effect on the entropy rate estimate, at least for the ranges and block sizes considered.
In some applications, the Markov transition probabilities g, and not just the entropy rate, may be
of interest as a description of the time dependencies present in the data. The Gibbs sampler above
yields samples from the distribution g|data, and averaging these $N_{MC}$ samples yields a Bayes least squares estimator of transition probabilities, $\hat{g}_{gibbsHDP}$. Note that this estimate is closely related to the estimate $\hat{g}_{pr}$ from the previous section; with more MC samples, $\hat{g}_{gibbsHDP}$ converges to the posterior mean $\hat{g}_{pr}$ (when the $\alpha$ are fixed rather than sampled, to match the fixed $\alpha$ per level used in Equation 11).
7 Results
We applied the model to both simulated data with a known entropy rate and to neural data, where
the entropy rate is unknown. We examine the accuracy of the fully Bayesian and empirical Bayesian
entropy rate estimators $h_{HDP}$ and $h_{empHDP}$, and compare the entropy rate estimators $h_{plugin}$, $h_{NSB}$, $h_{MM}$, $h_{LZ}$ [2], and $h_{CTW}$ [1], which are described in Section 2. We also consider estimates
of the Markov transition probabilities g produced by both inference methods.
7.1 Simulation
Figure 2: Comparison of estimated (a) transition probability and (b,c,d) entropy rate for data simulated from a Markov model of depth 5. In (a) and (d), data sets are 500 symbols long. The block-based and HDP estimators in (b) and (c) use block size k = 8. In (b,c,d) results were averaged over 5 data sequences, and (c) plots the average absolute value of the difference between true and estimated entropy rates.
We considered data simulated from a Markov model with transition probabilities set so that transition probabilities from states with similar suffixes are similar (i.e. the process actually does have the property that more distant context symbols matter less than more recent ones in determining transitions). We used a depth-5 Markov model, whose true transition probabilities are shown in black in Figure 2a, where each of the 32 points on the x axis represents the probability that the next symbol is a 1 given the specified 5-symbol context.
In Figure 2a we compare HDP estimates of transition probabilities of this simulated data to the
plugin estimator of transition probabilities $\hat{g}_s = c_{s1} / c_s$ calculated from a 500-symbol sequence. (The
other estimators do not include calculating transition probabilities as an intermediate step, and so
cannot be included here.) With a series of 500 symbols, we do not expect enough observations of
each of the possible transitions to adequately estimate the $2^k$ transition probabilities, even for rather modest depths such as k = 5. And indeed, the "plugin" estimates of transition probabilities do not
match the true transition probabilities well. On the other hand, the transition probabilities estimated
using the HDP prior show the kind of "smoothing" the prior was meant to encourage, where states
corresponding to contexts with same suffixes have similar estimated transition probabilities.
Lastly, we plot the convergence of the entropy rate estimators with increased length of the data
sequence and the associated error in Figures 2b,c. If the true depth of the model is no larger than
the depth k considered in the estimators, all the estimators considered should converge. We see in
Figure 2c that the HDP-based entropy rate estimates converge quickly with increasing data, relative
to other models.
The motivation of the hierarchical prior was to allow observations of transitions from shorter contexts to inform estimates of transitions from longer contexts. This, it was hoped, would mitigate the
drop-off with larger block-size seen in block-entropy based entropy rate estimators. Figure 2d indicates that for simulated data that is indeed the case, although we do see some bias in the fully Bayesian entropy rate estimator for large block lengths. The empirical Bayes and fully Bayesian entropy
rate estimators with HDP priors produce estimates that are close to the true entropy rate across a
wider range of block-size.
7.2 Neural Data
We applied the same analysis to neural spike train data collected from primate retinal ganglion cells
stimulated with binary full-field movies refreshed at 100 Hz [8]. In this case, the true transition
probabilities are unknown (and indeed the process may not be exactly Markovian). However, we
calculate the plug-in transition probabilities from a longer data sequence (167,000 bins) so that the
estimates are approximately converged (black trace in Figure 3a), and note that transition probabilities from contexts with the same most-recent context symbols do appear to be similar. Thus the
estimated transition probabilities reflect the idea that more distant context cues matter less, and the
smoothing of the HDP prior appears to be appropriate for this neural data.
Figure 3: Comparison of estimated (a) transition probability and (b,c,d) entropy rate for neural data. The "converged" estimates are calculated from 700s of data with 4ms bins (167,000 symbols). In (a) and (d), training data sequences are 500 symbols (2s) long. The block-based and HDP estimators in (b) and (c) use block size k = 8. In (b,c,d), results were averaged over 5 data sequences sampled randomly from the full dataset.
The true entropy rate is also unknown, but again we estimate it using the plugin estimator on a large data set. We again note the relatively fast convergence of $h_{HDP}$ and $h_{empHDP}$ in Figures 3b,c, and the long plateau of the estimators in Figure 3d indicating the relative stability of the HDP entropy rate estimators with respect to choice of model depth.

8 Discussion

We have presented two estimators of the entropy rate of a spike train or arbitrary binary sequence. The true entropy rate of a stochastic process involves consideration of infinitely long time dependencies. To make entropy rate estimation tractable, one can try to fix a maximum depth of time dependencies to be considered, but it is difficult to choose an appropriate depth that is large enough to take into account long time dependencies and small enough relative to the data at hand to avoid a severe downward bias of the estimate. We have approached this problem by modeling the data as a Markov chain and estimating transition probabilities using a hierarchical prior that links transition probabilities from longer contexts to transition probabilities from shorter contexts. This allowed us to choose a large depth even in the presence of limited data, since the structure of the prior allowed observations of transitions from shorter contexts (of which we have many instances in the data) to inform estimates of transitions from longer contexts (of which we may have only a few instances).
We presented both a fully Bayesian estimator, which allows for Bayesian confidence intervals, and
an empirical Bayesian estimator, which provides computational efficiency. Both estimators show
excellent performance on simulated and neural data in terms of their robustness to the choice of
model depth, their accuracy on short data sequences, and their convergence with increased data.
Both methods of entropy rate estimation also yield estimates of the transition probabilities when
the data is modeled as a Markov chain, parameters which may be of interest in the own right as
descriptive of the statistical structure and time dependencies in a spike train. Our results indicate that
tools from modern Bayesian nonparametric statistics hold great promise for revealing the structure
of neural spike trains despite the challenges of limited data.
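As a concrete point of reference for the comparisons above, the plug-in baseline can be sketched in a few lines: model the sequence as a depth-k Markov chain, count context/symbol transitions, and plug the empirical probabilities into the entropy-rate formula. This is an illustrative sketch, not the authors' code.

```python
from collections import Counter
from math import log2

def plugin_entropy_rate(seq, k):
    """Plug-in entropy rate (bits/symbol) of a depth-k Markov model:
    H = -sum_s p(s) sum_x p(x|s) log2 p(x|s), with empirical counts."""
    ctx = Counter()    # counts of length-k contexts
    trans = Counter()  # counts of (context, next-symbol) pairs
    for i in range(len(seq) - k):
        s = tuple(seq[i:i + k])
        ctx[s] += 1
        trans[(s, seq[i + k])] += 1
    n = sum(ctx.values())
    h = 0.0
    for (s, x), c in trans.items():
        p_s = ctx[s] / n      # empirical context probability
        p_xs = c / ctx[s]     # empirical transition probability
        h -= p_s * p_xs * log2(p_xs)
    return h
```

For a deterministic alternating sequence the estimate is 0 bits/symbol, while for fair-coin data it approaches 1 bit/symbol from below; it is exactly this downward bias at large depths that the hierarchical prior is designed to mitigate.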
Acknowledgments
We thank V. J. Uzzell and E. J. Chichilnisky for retinal data. This work was supported by a Sloan
Research Fellowship, McKnight Scholar's Award, and NSF CAREER Award IIS-1150186.
References
[1] Matthew B. Kennel, Jonathon Shlens, Henry D. I. Abarbanel, and E. J. Chichilnisky. Estimating entropy rates with Bayesian confidence intervals. Neural Computation, 17(7):1531-1576, 2005.
[2] Abraham Lempel and Jacob Ziv. On the complexity of finite sequences. IEEE Transactions on Information Theory, 22(1):75-81, 1976.
[3] Ilya Nemenman, Fariel Shafee, and William Bialek. Entropy and inference, revisited. arXiv preprint physics/0108025, 2001.
[4] George Armitage Miller and William Gregory Madow. On the Maximum Likelihood Estimate of the Shannon-Wiener Measure of Information. Operational Applications Laboratory, Air Force Cambridge Research Center, Air Research and Development Command, Bolling Air Force Base, 1954.
[5] Yee Whye Teh, Michael I. Jordan, Matthew J. Beal, and David M. Blei. Hierarchical Dirichlet processes. Journal of the American Statistical Association, 101(476), 2006.
[6] Yee Whye Teh. A hierarchical Bayesian language model based on Pitman-Yor processes. In Proceedings of the 21st International Conference on Computational Linguistics and the 44th Annual Meeting of the Association for Computational Linguistics, pages 985-992. Association for Computational Linguistics, 2006.
[7] Frank Wood, Cédric Archambeau, Jan Gasthaus, Lancelot James, and Yee Whye Teh. A stochastic memoizer for sequence data. In Proceedings of the 26th Annual International Conference on Machine Learning, pages 1129-1136. ACM, 2009.
[8] V. J. Uzzell and E. J. Chichilnisky. Precision of spike trains in primate retinal ganglion cells. Journal of Neurophysiology, 92:780-789, 2004.
Designed Measurements for Vector Count Data
Liming Wang^1, David Carlson^1, Miguel Dias Rodrigues^2, David Wilcox^3, Robert Calderbank^1 and Lawrence Carin^1
^1 Department of Electrical and Computer Engineering, Duke University
^2 Department of Electronic and Electrical Engineering, University College London
^3 Department of Chemistry, Purdue University
{liming.w, david.carlson, robert.calderbank, lcarin}@duke.edu
[email protected]
[email protected]
Abstract
We consider design of linear projection measurements for a vector Poisson signal model. The projections are performed on the vector Poisson rate, X ∈ R^n_+, and the observed data are a vector of counts, Y ∈ Z^m_+. The projection matrix is designed by maximizing mutual information between Y and X, I(Y;X). When there is a latent class label C ∈ {1, ..., L} associated with X, we consider the mutual information with respect to Y and C, I(Y;C). New analytic expressions for the gradient of I(Y;X) and I(Y;C) are presented, with the gradient performed with respect to the measurement matrix. Connections are made to the more widely studied Gaussian measurement model. Example results are presented for compressive topic modeling of a document corpus (word counting), and hyperspectral compressive sensing for chemical classification (photon counting).
1 Introduction
There is increasing interest in exploring connections between information and estimation theory. For
example, mutual information and conditional mean estimation have been discovered to possess close
interrelationships. The derivative of mutual information in a scalar Gaussian channel [11] has been
expressed in terms of the minimum mean-squared error (MMSE). The connections have also been
extended from the scalar Gaussian to the scalar Poisson channel model [12]. The gradient of mutual
information in a vector Gaussian channel [17] has been expressed in terms of the MMSE matrix. It
has also been found that the relative entropy can be represented in terms of the mismatched MMSE
estimates [23, 24]. Recently, parallel results for scalar binomial and negative binomial channels have
been established [22, 10].
Inspired by the Liptser-Shiryaev formula [16], it has been demonstrated that for certain channels
(or measurement models), investigation of the gradient of mutual information can often lead to a
relatively simple formulation, relative to computing mutual information itself. Further, it has been
shown that the derivative of mutual information with respect to key system parameters also relates to
the conditional mean estimates in other channel settings beyond Gaussian and Poisson models [18].
This paper pursues this overarching theme for a vector Poisson measurement model. Results for
scalar Poisson signal models have been developed recently [12, 1] for signal recovery; the vector
results presented here are new, with known scalar results recovered as a special case. Further, we
consider the gradient of mutual information for Poisson data in the context of classification, for
which there are no previous results, even in the scalar case.
The results we present for optimizing mutual information in vector Poisson measurement models are
general, and may be applied to optical communication systems [15, 13]. The specific applications
that motivate this study are compressive measurements for vector Poisson data. Direct observation
of long vectors of counts may be computationally or experimentally expensive, and therefore it is
of interest to design compressive Poisson measurements. Almost all existing results for compressive sensing (CS) directly or implicitly assume a Gaussian measurement model [6], and extension to
Poisson measurements represents an important contribution of this paper. To the authors' knowledge, the only previous examination of CS with Poisson data was considered in [20]; that paper considered a single special (random) measurement matrix, did not consider design of measurement matrices, and did not address the classification problem. It has been demonstrated in the context
of Gaussian measurements that designed measurement matrices, using information-theoretic metrics, may yield substantially improved performance relative to randomly constituted measurement
matrices [7, 8, 21]. In this paper we extend these ideas to vector Poisson measurement systems, for
both signal recovery and classification, and make connections to the Gaussian measurement model.
The theory is demonstrated by considering compressive topic modeling of a document corpus, and
chemical classification with a compressive photon-counting hyperspectral camera [25].
2 Mutual Information for Designed Compressive Measurements
2.1 Motivation
A source random variable X ∈ R^n, with probability density function PX(X), is sent through a measurement channel, the output of which is characterized by random variable Y ∈ R^m, with conditional probability density function PY|X(Y|X); we are interested in the case m < n, relevant for compressive measurements, although the theory is general. Concerning PY|X(Y|X), in this paper we focus on Poisson measurement models, but we also make connections to the much more widely considered Gaussian case. For the Poisson and Gaussian measurement models the mean of PY|X(Y|X) is ΦX, where Φ ∈ R^{m×n} is the measurement matrix. For the Poisson case the mean may be modified as ΦX + λ for "dark current" λ ∈ R^m_+, and positivity constraints are imposed on the elements of Φ and X.
Often the source statistics are characterized as a mixture model: PX(X) = Σ_{c=1}^L π_c PX|C(X|C = c), where π_c > 0 and Σ_{c=1}^L π_c = 1, and C may correspond to a latent class label. In this context, for each draw X there is a latent class random variable C ∈ {1, ..., L}, where the probability of class c is π_c.
Our goal is to design Φ such that the observed Y is most informative about the underlying X or C. When the interest is in recovering X, we design Φ with the goal of maximizing mutual information I(X;Y), while when interested in inferring C we design Φ with the goal of maximizing I(C;Y).
To motivate use of the mutual information as the design metric, we note several results from the
literature. For the case in which we are interested in recovering X from Y , it has been shown [19]
that
MMSE ≥ (1/(2πe)) exp{2[h(X) − I(X;Y)]}    (1)
where h(X) is the differential entropy of X and MMSE = E{trace[(X − E(X|Y))(X − E(X|Y))^T]} is the minimum mean-square error.
For the classification problem, we define the Bayesian classification error as Pe = ∫ PY(y)[1 − max_c PC|Y(c|y)] dy. It has been shown in [14] that
[H(C|Y) − H(Pe)]/log L ≤ Pe ≤ (1/2) H(C|Y)    (2)
where H(C|Y) = H(C) − I(C;Y), 0 ≤ H(Pe) ≤ 1, and H(·) denotes the entropy of a discrete random variable. By minimizing H(C|Y) we minimize the upper bound to Pe, and since H(C) is independent of Φ, to minimize the upper bound to Pe our goal is to design Φ such that I(C;Y) is maximized.
2.2 Existing results for Gaussian measurements
There are recent results for the gradient of mutual information for vector Gaussian measurements, which we summarize here. Consider the case C ∼ PC(C), X|C ∼ PX|C(X|C), and Y|X ∼ N(Y; ΦX, Λ^{-1}), where Λ ∈ R^{m×m} is a known precision matrix. Note that PC and PX|C are arbitrary, while PY|X = N(Y; ΦX, Λ^{-1}) corresponds to a Gaussian measurement with mean ΦX. It has been established that the gradient of mutual information between the input and the output of the vector Gaussian channel model obeys [17]
∇_Φ I(X;Y) = ΛΦE,    (3)
where E = E[(X − E(X|Y))(X − E(X|Y))^T] denotes the MMSE matrix. The gradient of mutual information between the class label and the output for the vector Gaussian channel is [8]
∇_Φ I(C;Y) = ΛΦẼ,    (4)
where Ẽ = E[(E(X|Y,C) − E(X|Y))(E(X|Y,C) − E(X|Y))^T] denotes the equivalent MMSE matrix.
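Equation (3) can be checked numerically in the fully Gaussian special case, where both I(X;Y) and the MMSE matrix have closed forms. The sketch below is our illustration with arbitrary toy parameters, not from the paper; it assumes X ∼ N(0, Σx), so that I(X;Y) = (1/2) log det(I + ΛΦΣxΦ^T) and E = (Σx^{-1} + Φ^T Λ Φ)^{-1}.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 3, 5
Phi = rng.standard_normal((m, n))
Sx = 0.5 * np.eye(n) + 0.1 * np.ones((n, n))   # source covariance (positive definite)
Lam = np.diag(rng.uniform(0.5, 2.0, size=m))   # noise precision matrix

def mutual_info(Phi):
    # I(X;Y) = 0.5 * logdet(I_m + Lam Phi Sx Phi^T) in nats,
    # for Y = Phi X + N with X ~ N(0, Sx) and N ~ N(0, Lam^{-1}).
    _, logdet = np.linalg.slogdet(np.eye(m) + Lam @ Phi @ Sx @ Phi.T)
    return 0.5 * logdet

# Analytic gradient from (3): Lam Phi E, with the closed-form MMSE matrix.
E = np.linalg.inv(np.linalg.inv(Sx) + Phi.T @ Lam @ Phi)
grad_analytic = Lam @ Phi @ E

# Central finite differences for comparison.
eps = 1e-6
grad_fd = np.zeros_like(Phi)
for i in range(m):
    for j in range(n):
        P = Phi.copy(); P[i, j] += eps
        Q = Phi.copy(); Q[i, j] -= eps
        grad_fd[i, j] = (mutual_info(P) - mutual_info(Q)) / (2 * eps)

assert np.allclose(grad_analytic, grad_fd, atol=1e-5)
```

The agreement rests on the matrix push-through identity (Λ^{-1} + ΦΣxΦ^T)^{-1}ΦΣx = ΛΦ(Σx^{-1} + Φ^TΛΦ)^{-1}, which connects the logdet derivative to the MMSE form in (3).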
2.3 Conditional-mean estimation
Note from the above discussion that for a Gaussian measurement, ∇_Φ I(X;Y) = E[f(X, E(X|Y))] and ∇_Φ I(C;Y) = E[g(E(X|Y,C), E(X|Y))], where f(·) and g(·) are matrix-valued functions of the respective arguments. These results highlight the connection between the gradient of mutual information with respect to the measurement matrix Φ and conditional-mean estimation, constituted by E(X|Y) and E(X|Y,C). We will see below that these relationships hold as well for the vector Poisson case, with distinct functions f̃(·) and g̃(·).
3 Vector Poisson Data
3.1 Model
The vector Poisson channel model is defined as
PY|X(Y|X) = Pois(Y; ΦX + λ) = ∏_{i=1}^m PYi|X(Yi|X) = ∏_{i=1}^m Pois(Yi; (ΦX)i + λi)    (5)
where the random vector X = (X1, X2, ..., Xn) ∈ R^n_+ represents the channel input, the random vector Y = (Y1, Y2, ..., Ym) ∈ Z^m_+ represents the channel output, Φ ∈ R^{m×n}_+ represents a measurement matrix, and the vector λ = (λ1, λ2, ..., λm) ∈ R^m_+ represents the dark current.
The vector Poisson channel model associated with arbitrary m and n is a generalization of the scalar Poisson model, for which m = n = 1 [12, 1]. In the scalar case PY|X(Y|X) = Pois(Y; φX + λ), where here scalar random variables X ∈ R_+ and Y ∈ Z_+ are associated with the input and output of the scalar channel, respectively, φ ∈ R_+ is a scaling factor, and λ ∈ R_+ is associated with the dark current.
The goal is to design Φ to maximize the mutual information between X and Y. Toward that end, we consider the gradient of mutual information with respect to Φ: ∇_Φ I(X;Y) = [∇_Φ I(X;Y)_{ij}], where ∇_Φ I(X;Y)_{ij} represents the (i,j)-th entry of the matrix ∇_Φ I(X;Y). We also consider the gradient with respect to the vector dark current, ∇_λ I(X;Y) = [∇_λ I(X;Y)_i], where ∇_λ I(X;Y)_i represents the i-th entry of the vector ∇_λ I(X;Y). For a mixture-model source PX(X) = Σ_{c=1}^L π_c PX|C=c(X|C = c), for which there is more interest in recovering C than in recovering X, we seek ∇_Φ I(C;Y) and ∇_λ I(C;Y).
3.2 Gradient of Mutual Information for Signal Recovery
In order to take the full generality of the input distribution into consideration, we utilize Radon-Nikodym derivatives to represent the probability measures of interest. Consider random variables X ∈ R^n and Y ∈ R^m. Let f_{Yθ|X} be the Radon-Nikodym derivative of probability measure P_{Yθ|X} with respect to an arbitrary measure QY, provided that P_{Yθ|X} is absolutely continuous with respect to QY, i.e., P_{Yθ|X} ≪ QY; here θ ∈ R is a parameter. f_{Yθ} is the Radon-Nikodym derivative of the probability measure P_{Yθ} with respect to QY, provided that P_{Yθ} ≪ QY. Note that in the continuous or discrete case, f_{Yθ|X} and f_{Yθ} are simply probability density or mass functions, with QY chosen to be the Lebesgue measure or the counting measure, respectively. We note that similar notation is also used for the signal-classification case, except that we may also need to condition on both X and C.
Some results of the paper require the assumption of the regularity conditions (RC), which are listed in the Supplementary Material. We will assume all four regularity conditions RC1-RC4 whenever necessary in the proof and the statement of the results. Recall [9] that for a function f(x, θ): R^n × R → R with a Lebesgue measure μ on R^n, we have (∂/∂θ) ∫ f(x, θ) dμ(x) = ∫ (∂/∂θ) f(x, θ) dμ(x) if |(∂/∂θ) f(x, θ)| ≤ g(x), where g ∈ L^1(μ). Hence, in light of this criterion, it is straightforward to verify that the RC are valid for many common distributions of X. Proofs of the below theorems are provided in the Supplementary Material.
Theorem 1. Consider the vector Poisson channel model in (5). The gradient of mutual information between the input and output of the channel, with respect to the matrix Φ, is given by:
[∇_Φ I(X;Y)]_{ij} = E[Xj log((ΦX)i + λi)] − E[E[Xj|Y] log E[(ΦX)i + λi|Y]],    (6)
and with respect to the dark current is given by:
[∇_λ I(X;Y)]_i = E[log((ΦX)i + λi)] − E[log E[(ΦX)i + λi|Y]],    (7)
irrespective of the input distribution PX(X), provided that the regularity conditions hold.
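The expectations in (6) involve the posterior means E[Xj|Y] and E[(ΦX)i + λi|Y], which are rarely available in closed form. One generic way to approximate them is self-normalized importance sampling with prior draws as proposals; the sketch below is our illustration of such a Monte Carlo gradient estimate (the paper's own Monte Carlo scheme is only outlined in Section 4.3, so the details here are assumptions).

```python
import numpy as np
from math import lgamma

rng = np.random.default_rng(1)

def log_pois_rows(y, rates):
    # log p(y | rate) for each row of `rates`; the lgamma term is shared across rows
    const = sum(lgamma(int(v) + 1) for v in y)
    return (y * np.log(rates) - rates).sum(axis=1) - const

def grad_mi_mc(Phi, lam, sample_x, n_outer=200, n_inner=500):
    """Monte Carlo estimate of [grad_Phi I(X;Y)]_{ij} from eq. (6).
    sample_x() draws X from its prior; the posterior means E[.|Y] are
    approximated by self-normalized importance sampling with prior draws."""
    m, n = Phi.shape
    Xs = np.stack([sample_x() for _ in range(n_inner)])  # proposal draws
    rates_all = Xs @ Phi.T + lam                         # shape (n_inner, m)
    G = np.zeros((m, n))
    for _ in range(n_outer):
        x = sample_x()
        rate = Phi @ x + lam
        y = rng.poisson(rate)
        logw = log_pois_rows(y, rates_all)               # w_s proportional to p(y | X_s)
        w = np.exp(logw - logw.max())
        w /= w.sum()
        Ex_y = w @ Xs                                    # approximates E[X_j | Y]
        Erate_y = w @ rates_all                          # approximates E[(Phi X)_i + lam_i | Y]
        G += np.outer(np.log(rate), x) - np.outer(np.log(Erate_y), Ex_y)
    return G / n_outer

# Toy example: two gamma-distributed rates observed through two projections.
Phi = np.array([[1.0, 0.2], [0.3, 1.0]])
lam = np.array([0.1, 0.1])
G = grad_mi_mc(Phi, lam, lambda: rng.gamma(2.0, 1.0, size=2))
```

The outer loop averages the two terms of (6) over draws of (X, Y), while the inner weighted averages stand in for the conditional means; variance decreases as both sample counts grow.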
3.3 Gradient of Mutual Information for Classification
Theorem 2. Consider the vector Poisson channel model in (5) and the mixture signal model. The gradient with respect to Φ of mutual information between the class label and output of the channel is
[∇_Φ I(C;Y)]_{ij} = E[ E[Xj|Y,C] log( E[(ΦX)i + λi|Y,C] / E[(ΦX)i + λi|Y] ) ],    (8)
and with respect to the dark current is given by
[∇_λ I(C;Y)]_i = E[ log( E[(ΦX)i + λi|Y,C] / E[(ΦX)i + λi|Y] ) ],    (9)
irrespective of the input distribution PX|C(X|C), provided that the regularity conditions hold.
3.4 Relationship to known scalar results
It is clear that Theorem 1 represents a multi-dimensional generalization of Theorems 1 and 2 in [12].
The scalar result follows immediately from the vector counterpart by taking m = n = 1.
Corollary 1. For the scalar Poisson channel model PY|X(Y|X) = Pois(Y; φX + λ), we have
(∂/∂φ) I(X;Y) = E[X log(φX + λ)] − E[E[X|Y] log E[φX + λ|Y]],    (10)
(∂/∂λ) I(X;Y) = E[log(φX + λ)] − E[log E[φX + λ|Y]],    (11)
irrespective of the input distribution PX(X), provided that the regularity conditions hold.
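When X has finite support, every expectation in (10) reduces to a finite sum, so the identity can be verified directly against a finite-difference derivative of I(X;Y). The sketch below uses an illustrative two-point prior and a truncated count range (our toy example, not from the paper).

```python
import numpy as np
from math import lgamma, log

xs = np.array([0.5, 3.0])   # two-point support of X (hypothetical toy prior)
px = np.array([0.4, 0.6])   # prior probabilities
lam = 0.2                   # dark current
Y = np.arange(200)          # truncated count support; tail mass is negligible here

def pois_pmf(y, r):
    logc = np.array([lgamma(v + 1.0) for v in y])
    return np.exp(y * log(r) - r - logc)

def mutual_info(phi):
    # I(X;Y) = sum_x p(x) sum_y p(y|x) log[p(y|x)/p(y)], in nats
    pyx = np.stack([pois_pmf(Y, phi * x + lam) for x in xs])
    py = px @ pyx
    with np.errstate(divide="ignore", invalid="ignore"):
        t = pyx * (np.log(pyx) - np.log(py))
    return float(px @ np.nan_to_num(t, nan=0.0).sum(axis=1))

def grad_eq10(phi):
    # right-hand side of (10), evaluated by exact summation
    pyx = np.stack([pois_pmf(Y, phi * x + lam) for x in xs])
    py = px @ pyx
    post = (px[:, None] * pyx) / np.maximum(py, 1e-300)  # p(x | y)
    Ex_y = xs @ post                                     # E[X | Y = y]
    t1 = px @ (xs * np.log(phi * xs + lam))              # E[X log(phi X + lam)]
    t2 = (py * Ex_y * np.log(phi * Ex_y + lam)).sum()    # E[E[X|Y] log E[phi X + lam | Y]]
    return float(t1 - t2)

phi = 1.3
fd = (mutual_info(phi + 1e-5) - mutual_info(phi - 1e-5)) / 2e-5
assert abs(fd - grad_eq10(phi)) < 1e-5
```

Note that E[φX + λ|Y] = φE[X|Y] + λ, which is why the second term only requires the posterior mean of X.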
While the scalar result in [12] for signal recovery is obtained as a special case of our Theorem 1, for
recovery of the class label C there are no previous results for our Theorem 2, even in the scalar case.
3.5 Conditional mean and generalized Bregman divergence
Considering the results in Theorem 1, and recognizing that E[(ΦX) + λ|Y] = ΦE(X|Y) + λ, it is clear that for the Poisson case ∇_Φ I(X;Y) = E[f̃(X, E(X|Y))]. Similarly, for the classification case, ∇_Φ I(C;Y) = E[g̃(E(X|Y,C), E(X|Y))]. The gradient with respect to the dark current λ has no analog for the Gaussian case, but similarly we have ∇_λ I(X;Y) = E[f̃1(X, E(X|Y))] and ∇_λ I(C;Y) = E[g̃1(E(X|Y,C), E(X|Y))].
For the scalar Poisson channel in Corollary 1, it has been shown in [1] that (∂/∂φ) I(X;Y) = E[ℓ(X, E(X|Y))], where ℓ(X, E(X|Y)) is defined by the right side of (10), and is related to the Bregman divergence [5, 2].
While beyond the scope of this paper, one may show that f̃(X, E(X|Y)) and g̃(E(X|Y,C), E(X|Y)) may be interpreted as generalized Bregman divergences, where here the generalization is manifested by the fact that these are matrix-valued measures, rather than the scalar one in [1]. Further, for the vector Gaussian cases one may also show that f(X, E(X|Y)) and g(E(X|Y,C), E(X|Y)) are also generalized Bregman divergences. These facts are primarily of theoretical interest, as they do not affect the way we perform computations. Nevertheless, these theoretical results, through the generalized Bregman divergence, underscore the primacy of the conditional-mean estimators E(X|Y) and E(X|Y,C) within the gradient of mutual information with respect to Φ, for both the Gaussian and Poisson vector measurement models.
4 Applications
4.1 Topic Models
Consider the case for which the Poisson rate vector for document d may be represented Xd = ΨSd, where Xd ∈ R^n_+, Ψ ∈ R^{n×T}_+ and Sd ∈ R^T_+. Here T represents the number of topics and, in the context of documents, n represents the total number of words in dictionary D. The count for the number of times each of the n words is manifested in document d may often be modeled as Yd|Sd ∼ Pois(Yd; ΨSd); see [26] and the extensive set of references therein.
Rather than counting the number of times each of the n words is separately manifested, we may more efficiently count the number of times words in particular subsets of D are manifested. Specifically, consider a compressive measurement for document d, as Yd|Xd ∼ Pois(Yd; ΦXd), where Φ ∈ {0,1}^{m×n}, with m ≪ n. Let φ_k ∈ {0,1}^n represent the kth row of Φ, with Ydk the kth component of Yd. Then Ydk|Xd ∼ Pois(Ydk; φ_k^T Xd) is equal in distribution to Ydk = Σ_{i=1}^n Ỹdki, where Ỹdki|Xdi ∼ Pois(φ_ki Xdi), with φ_ki ∈ {0,1} the ith component of φ_k and Xdi the ith component of Xd. Therefore, Ydk represents the number of times words in the set defined by the non-zero elements of φ_k are manifested in document d; Yd therefore represents the number of times words are manifested in a document in m distinct sets.
[Figure 1 here: panels (a) and (b); legends include Random, Rand-Ortho, NNMF, LDA, Optimized, and Full; horizontal axes are Number of Projections.]
Figure 1: Results on the 20 Newsgroups dataset. Random denotes a random binary matrix with 1% non-zero values. Rand-Ortho denotes a random binary matrix restricted to an orthogonal matrix with one non-zero entry per column. Optimized denotes the methods discussed in Section 4.3. Full denotes when each word is observed. The error estimates were obtained by running the algorithm over 10 different random splits of the corpus. (a) Per-word predictive log-likelihood estimate versus the number of projections. (b) KL Divergence versus the number of projections.
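Concretely, grouping word counts into set counts is just a binary matrix product, and the Poisson superposition property is what makes each set count Poisson with rate given by the corresponding row of ΦX. A toy check with hypothetical data:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 8, 3                                    # vocabulary size, number of word sets
Phi = (rng.random((m, n)) < 0.4).astype(int)   # binary set-membership matrix

x = rng.gamma(2.0, 1.0, size=n)                # per-word Poisson rates
y_words = rng.poisson(x)                       # raw per-word counts

# Counting words in sets is a matrix product.
y_sets = Phi @ y_words

# Superposition: the k-th set count is Poisson with rate (Phi x)_k,
# since a sum of independent Poisson variables is Poisson.
set_rates = Phi @ x
```

Averaging the grouped counts over many simulated documents recovers the set rates Φx, which is the sense in which the compressed counts remain faithful Poisson observations of the projected rate.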
Our goal is to use the theory developed above to design the binary Φ such that the compressive Yd|Xd ∼ Pois(Yd; ΦXd) is as informative as possible. In our experiments we assume that Ψ may be learned separately based upon a small subset of the corpus, and then with Ψ so fixed the statistics of Xd are driven by the statistics of Sd. When performing learning of Ψ, each column of Ψ is assumed drawn from an n-dimensional Dirichlet distribution, and Sd is assumed drawn from a gamma process, as specified in [26]. We employ variational Bayesian (VB) inference on this model [26] to estimate Ψ (and retain the mean).
With Ψ so fixed, we then design Φ under two cases. For the case in which we are interested in inferring Sd from the compressive measurements, i.e., based on counts of words in sets, we employ a gamma process prior for pS(Sd), as in [26]. The result in Theorem 1 is then used to perform gradients for design of Φ. For the classification case, for each document class c ∈ {1, ..., L} we learn a p(Sd|C) based on a training sub-corpus for class C. This is done for all document classes, and we design a compressive matrix Φ ∈ {0,1}^{m×n}, with the gradient performed using Theorem 2.
In the testing phase, using held-out documents, we employ the matrix Φ to group the counts of words in document d into counts on m sets of words, with sets defined by the rows of Φ. Using these Yd, which we assume are drawn Yd|Sd ∼ Pois(Yd; ΦΨSd), for known Φ and Ψ, we then use VB computations for the model in [26] to infer a posterior distribution on Sd or class C, depending on the application. The VB inference for this model was not considered in [26], and the update equations are presented in the Supplementary Material.
4.2 Model for Chemical Sensing
The model employed for the chemical sensing [25] considered below is very similar in form to that used for topic modeling, so we reuse notation. Assume that there are T fundamental (building-block) chemicals of interest, and that the hyperspectral sensor performs measurements at n wavelengths. Then the observed data for sample d may be represented Yd|Sd ∼ Pois(Yd; ΨSd + λ), where Yd ∈ Z^n_+ represents the count of photons at the n sensor wavelengths, λ ∈ R^n_+ represents the sensor dark current, and the tth column of Ψ ∈ R^{n×T}_+ reflects the mean Poisson rate for chemical t (the
NYTimes: KL?Divergence on Topic Mixture Estimates
?7.4
?7.6
?7.8
Random
Ortho
NNMF
LDA
Optimized
Full
?8
?8.2
?8.4
20
40
60
80
100
120
Number of Projections
(a)
140
Per?Document K?L Divergence
Per?word Predictive Log?Likelihood
Random
Rand?Ortho
NNMF
LDA
Optimized
2.4
2.2
2
1.8
1.6
1.4
1.2
NYTimes: Predictive Log?Likelihood vs Time
?7.5
Holdout Per?Word PLL
NYTimes: PLL of Hold?out Set
?7.2
?8
?8.5
Random
Rand?Ortho
NNMF
LDA
Optimized
1
0.8
0
50
100
Number of Projections
(b)
150
?9
0
0.5
1
1.5
Per?Document Processing Time, ms
2
(c)
Figure 2: Results on the NYTimes corpus. Optimized denotes the methods discussed in Section 4.3. Full
denotes when each word is observed. The error estimates were obtained by running the algorithm over 10
different random subsets of 20,000 documents. (a) Predictive log-likelihood estimate versus the number of projections. (b) KL Divergence versus the number of projections. (c) Predictive log-likelihood versus processing
time.
different chemicals play a role analogous to topics). The vector Sd ∈ R^T_+ reflects the amount of each fundamental chemical present in the sample under test.
For the compressive chemical-sensing system discussed in Section 4.5, the measurement matrix is again binary, Φ ∈ {0,1}^{m×n}. Through calibrations and known properties of chemicals and characteristics of the camera, one may readily constitute Ψ and λ, and a model similar to that employed for topic modeling is utilized to model Sd; here λ is a characteristic of the camera, and is not optimized. In the experiments reported below the analysis of the chemical-sensing data is performed analogously to how the documents were modeled (which we detail), and therefore no further modeling details are provided explicitly for the chemical-sensing application, for brevity. For the chemical-sensing application, the goal is to classify the chemical sample under test, and therefore Φ is defined based on optimization using the Theorem 2 gradient.
4.3 Details on Designing ?
We wish to use Theorems 1 and 2 to design a binary ?, for the document-analysis and chemicalsensing applications. To do this, instead of directly optimizing ?, we put a logistic link on each
value ?ij = logit(Mij ). We can state the gradient with respect to M as:
[?M I(X; Y )ij ] = [?? I(X; Y )ij ][?M ?ij ]
(12)
Similar results hold for ?M I(C; Y )ij .? was initialized at random, and we threshold the logistic at
0.5 to get the final binary ?.
To estimate the expectations needed for the results in Theorems 1 and 2, we used Monte Carlo integration, simulating X and Y from the appropriate distributions. The number of samples in the Monte Carlo integration was set to n (the data dimension), and 1000 gradient steps were used for optimizing Φ.
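The Monte Carlo step can be illustrated on a generic expectation under a vector Poisson channel; this is a schematic stand-in, not the paper's exact Theorem 1/2 integrands:

```python
import numpy as np

def mc_expectation(f, sample_x, Phi, n_samples, seed=1):
    """Estimate E[f(X, Y)] with X ~ p(x) and Y | X ~ Pois(Phi X)."""
    rng = np.random.default_rng(seed)
    total = 0.0
    for _ in range(n_samples):
        x = sample_x(rng)
        y = rng.poisson(Phi @ x)
        total += f(x, y)
    return total / n_samples

Phi = np.array([[1.0, 0.0], [0.0, 2.0]])
sample_x = lambda rng: rng.gamma(2.0, 1.0, size=2)   # toy prior on X
est = mc_expectation(lambda x, y: y.sum(), sample_x, Phi, 2000)
# E[sum Y] = E[X1 + 2 X2] = 2 + 4 = 6, up to Monte Carlo error
```

In the paper's setting, `f` would be the conditional-expectation integrand of the gradient expressions, evaluated at the current Φ.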
The explicit forms for the gradients in Theorems 1 and 2 play an important role in making optimization of Φ tractable for the practical applications considered here. One could in principle take a brute-force gradient of I(Y; X) and I(Y; C) with respect to Φ, and evaluate all needed integrals via Monte Carlo sampling. This leads to a cumbersome set of terms that need to be computed. The "clean" forms of the gradients in Theorems 1 and 2 significantly simplified design implementation within the experiments below, with the added value of allowing connections to be made to the Gaussian measurement model.
4.4 Examples for Document Corpora
We demonstrate designed projections on the NYTimes and 20 Newsgroups data. The NYTimes data has n = 8000 unique words, and the Newsgroups data has n = 8052 unique words. When learning the topic matrix D, we placed the prior Dir(0.1, . . . , 0.1) on the columns of D, and the components S_dk had a prior Gamma(0.1, 0.1). We tried many different settings for these priors, and as in [26], the learned D was insensitive to "reasonable" settings. The number of topics (columns) in D was set to T = 100.
In addition to designing Φ using the proposed theory, we also considered four comparative designs: (i) a binary Φ constituted uniformly at random, with 1% of the entries non-zero; (ii) orthogonal binary rows of Φ, with one non-zero element in each column selected uniformly at random; (iii) performing non-negative matrix factorization (NNMF) [3] on the word counts, and projecting onto the principal vectors; and (iv) performing latent Dirichlet allocation (LDA) [4] on the documents, and projecting onto the topic-dependent probabilities of words. For (iii) and (iv), the top (highest-amplitude) 5% of
[Figure 3 appears here: (a) 20 Newsgroups hold-out classification accuracy versus number of projections, for Full, Random, Rand-Ortho, NNMF, LDA, and Optimized designs; (b) confusion matrix for fully observed word counts ("comp" subgroup); (c) confusion matrix for projected (N = 150) counts.]
Figure 3: (a) Classification accuracy of projected measurements and the fully observed case. Random uses 10% non-zero values, Ortho is a random matrix limited to orthogonal projections, and Optimized uses designed projections. The error bars are the standard deviation of the algorithm run independently on 10 random splits of the dataset. (b) Subset of the confusion matrix for the fully observed counts. White numbers denote the percentage of documents classified in that manner. Only classes in the "comp" subgroup are shown. The "comp" group is the least accurate subgroup. (c) The confusion matrix on the "comp" subgroup for 150 compressive measurements.
the words in each vector on which we project (e.g., topic) were set to have projection amplitude 1, and all the rest were set to zero. The settings for (i), (iii) and (iv), i.e., with regard to the fraction of words with non-zero values in Φ, were those that yielded the best results (other settings often performed much worse).
We show results using two metrics, Kullback-Leibler (KL) divergence and predictive log-likelihood.
For the KL divergence, we compare the topic mixture learned from the projection measurements to
the topic mixture learned from the case where each word is observed (no compressive measurement).
We define the topic mixture S′_d as the normalized version of S_d. We calculate
D_KL(S′_{d,p} ∥ S′_{d,f}) = Σ_{k=1}^K S′_{dk,p} log(S′_{dk,p} / S′_{dk,f}),
where S′_{dk,f} is the relative weight on document d, topic k for the full set of words, and S′_{dk,p} is the same for the compressive topic model. We also calculate per-word predictive log-likelihood. Because different projection metrics are in different dimensions, we use 75% of a document's words to get the projection measurements Y_d and use the remaining 25% as the original word tokens W_d. We then calculate the predictive log-likelihood (PLL) as log p(W_d | D, Φ, Y_d).
We split the 20 Newsgroups corpus into 10 random splits of 60% training and 40% testing to get an
estimate of uncertainty. The results are shown in Figure 1. Figure 1(a) shows the per-word predictive log-likelihood (PLL). At very low numbers of compressive measurements we get similar PLL
between the designed matrix and the random methods. As we increase the number of measurements,
we get dramatic improvements by optimizing the sensing matrix and the optimized methods quickly
approach the fully observed case. The same trends can be seen in the KL divergence shown in Figure
1(b). Note that the relative quality of the NNMF- and LDA-based designs of Φ depends on the metric (KL or PLL), but for both metrics the proposed mutual-information-based design of Φ yields the best
performance.
To test the NYTimes corpus, we split the corpus into 10 random subsets with 20,000 training documents and 20,000 testing documents. The results are shown in Figure 2. As in the 20 Newsgroups
results, the predictive log-likelihood and KL divergence of the random and designed measurements
are similar when the number of projections is low. As we increase the number of projections, the optimized projection matrix offers dramatic improvements over the random methods. We also consider predictive log-likelihood versus time in Figure 2(c). The compressive measurements give nearly the same performance with half the per-document processing time. Since the total processing time
increases linearly with the total number of documents, a 50% decrease in processing time can make
a significant difference in large corpora.
We also consider the classification problem over the 20 classes in the 20 Newsgroups dataset, split into 10 groups of 60% training and 40% testing. We learn a D with T = 20 columns (topics) and with the prior on the columns as above. Within the prior, we draw S_{d,c_d} | c_d ~ Gamma(1, 1) and S_{d,c′} | c_d = 0 for all c′ ≠ c_d. Separate topics are associated with each of the 20 classes, and we use the MAP estimate to get the class label ĉ_d = arg max_c p(c | Y_d). Classification versus number of projections for random projections and designed projections is shown in Figure 3(a). It is also useful
to look at the types of errors made by the classifier when we use the designed projections. Figure 3(b) and Figure 3(c) show the newsgroups under the "comp" (computer) heading, which is the least
accurate section. In the compressed case, many of the additional errors go into nearby topics with overlapping ideas. For example, most additional misclassifications in "comp.os.ms-windows.misc" go into "comp.sys.ibm.pc.hardware" and "comp.windows.x," which have many similar discussions. Additionally, 4% of the articles were originally posted in more than one topic, showing the intimate relationship between similar discussion groups, and so misclassifying into a related (and overlapping) class is less of a problem than misclassification into a completely disjoint class.
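Schematically, the MAP rule reduces to an arg max of per-class log-likelihoods. The sketch below uses plug-in Poisson rates per class for illustration, whereas the model above marginalizes over S_d:

```python
import numpy as np

def poisson_loglik(y, rate):
    # log Pois(y; rate), dropping the log(y!) term (constant across classes)
    return float(np.sum(y * np.log(rate) - rate))

def map_class(y, class_rates):
    """arg max_c log p(y | c), assuming equal class priors."""
    scores = [poisson_loglik(y, r) for r in class_rates]
    return int(np.argmax(scores))

# Two hypothetical classes with different expected measurement profiles
rates = [np.array([5.0, 1.0]), np.array([1.0, 5.0])]
```

A measurement vector concentrated on the first coordinate is assigned to the first class, and vice versa.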
4.5 Poisson Compressive Sensing for Chemical Classification
We consider chemical sensing based on the wavelength-dependent signature of chemicals, at optical
frequencies (here we consider a 850-1000 nm laser system). In Figure 4(a) the measurement system
is summarized; details of this system are described in [25]. In Part 1 of Figure 4(a) multi-wavelength
photons are scattered off a chemical sample. In Part 2 of this figure a volume holographic grating
(VHG) is employed to diffract the photons in a wavelength-dependent manner, and therefore photons are distributed spatially across a digital mirror microdevice (DMD); distinct wavelengths are
associated with each micromirror. The DMD consists of 1920 × 1080 aluminum mirrors. Each mirror is in a binary state, either reflecting light back to a detector, or not. Each mirror approximately
samples a single wavelength, as a result of the VHG, and the photon counter counts all photons at
wavelengths for which the mirrors direct light to the sensor. Hence, the sensor counts all photons at
a subset of the wavelengths, those for which the mirror is at the appropriate angle.
The measurement may be represented Y | S_d ~ Pois[Φ(D S_d + λ_0)], where λ_0 ∈ R^n_+ is known from calibration. The elements of the rate vector λ_0 vary from 0.07 to 1.5 per bin, and the cumulative dark current Φλ_0 can provide in excess of 50% of the signal energy, depending on the measurement (very noisy measurements). Design of Φ was based on Theorem 2, and λ_0 here is treated as the signature of an additional chemical (actually associated with measurement noise); finally, λ = Φλ_0 is the practical measurement dark current.
Figure 4: (a) Measurement system. The VHG is a volume holographic grating that spreads photons in a wavelength-dependent manner across the digital mirror microdevice (DMD), where the DMD is employed to implement binary coding. (b) Performance of the compressive-measurement classifier as a function of the number of compressive measurements; ten chemicals are considered. Experimental results are shown (Exp), as well as predictions from simulations (Sim).
The ten chemicals considered in the test were acetone, acetonitrile, benzene, dimethylacetamide, dioxane, ethanol, hexane, methylcyclohexane, octane, and toluene, and we note from Figure 4 that after only five compressive measurements excellent chemical classification is manifested based on designed CS measurements. There are n > 1000 wavelengths in a conventional measurement of these data, this system therefore reflecting significant compression. In Figure 4(b) we show results of measured data and performance predictions based on our model, with good agreement manifested. Note that designed projection measurements perform markedly better than random, where here the probability of a one in the random design was 10% (this yielded the best random results in simulations).
5 Conclusions
New results are presented for the gradient of mutual information with respect to the measurement matrix and a dark current, within the context of a Poisson model for vector count data. The mutual information is considered for signal recovery and classification. For the former we recover known scalar results as a special case, and the latter results for classification have not been addressed in any form previously. Fundamental connections between the gradient of mutual information and conditional expectation estimates have been made for the Poisson model. Encouraging applications have been demonstrated for compressive topic modeling, and for compressive hyperspectral chemical sensing (with demonstration on a real compressive camera).
Acknowledgments
The work reported here was supported in part by grants from ARO, DARPA, DOE, NGA and ONR.
References
[1] R. Atar and T. Weissman. Mutual information, relative entropy, and estimation in the Poisson channel. IEEE Transactions on Information Theory, 58(3):1302-1318, March 2012.
[2] A. Banerjee, S. Merugu, I. S. Dhillon, and J. Ghosh. Clustering with Bregman divergences. JMLR, 2005.
[3] M. W. Berry, M. Browne, A. N. Langville, V. P. Pauca, and R. J. Plemmons. Algorithms and applications for approximate nonnegative matrix factorization. Computational Statistics & Data Analysis, 2007.
[4] D. M. Blei, A. Y. Ng, and M. I. Jordan. Latent Dirichlet allocation. JMLR, 2003.
[5] L. M. Bregman. The relaxation method of finding the common point of convex sets and its application to the solution of problems in convex programming. USSR Computational Mathematics and Mathematical Physics, 1967.
[6] E. Candès, J. Romberg, and T. Tao. Robust uncertainty principles: Exact signal reconstruction from highly incomplete frequency information. IEEE Trans. on Inform. Theory, 2006.
[7] W. R. Carson, M. Chen, M. R. D. Rodrigues, R. Calderbank, and L. Carin. Communications-inspired projection design with application to compressive sensing. SIAM J. Imaging Sciences, 2013.
[8] M. Chen, W. Carson, M. Rodrigues, R. Calderbank, and L. Carin. Communications inspired linear discriminant analysis. In ICML, 2012.
[9] G. B. Folland. Real Analysis: Modern Techniques and Their Applications. Wiley New York, 1999.
[10] D. Guo. Information and estimation over binomial and negative binomial models. arXiv preprint arXiv:1207.7144, 2012.
[11] D. Guo, S. Shamai, and S. Verdú. Mutual information and minimum mean-square error in Gaussian channels. IEEE Transactions on Information Theory, 51(4):1261-1282, April 2005.
[12] D. Guo, S. Shamai, and S. Verdú. Mutual information and conditional mean estimation in Poisson channels. IEEE Transactions on Information Theory, 54(5):1837-1849, May 2008.
[13] S. M. Haas and J. H. Shapiro. Capacity of wireless optical communications. IEEE Journal on Selected Areas in Communications, 21(8):1346-1357, Aug. 2003.
[14] M. Hellman and J. Raviv. Probability of error, equivocation, and the Chernoff bound. IEEE Transactions on Information Theory, 1970.
[15] A. Lapidoth and S. Shamai. The Poisson multiple-access channel. IEEE Transactions on Information Theory, 44(2):488-501, Feb. 1998.
[16] R. S. Liptser and A. N. Shiryaev. Statistics of Random Processes: II. Applications, volume 2. Springer, 2000.
[17] D. P. Palomar and S. Verdú. Gradient of mutual information in linear vector Gaussian channels. IEEE Transactions on Information Theory, 52(1):141-154, Jan. 2006.
[18] D. P. Palomar and S. Verdú. Representation of mutual information via input estimates. IEEE Transactions on Information Theory, 53(2):453-470, Feb. 2007.
[19] S. Prasad. Certain relations between mutual information and fidelity of statistical estimation. http://arxiv.org/pdf/1010.1508v1.pdf, 2012.
[20] M. Raginsky, R. M. Willett, Z. T. Harmany, and R. F. Marcia. Compressed sensing performance bounds under Poisson noise. IEEE Trans. Signal Processing, 2010.
[21] M. Seeger, H. Nickisch, R. Pohmann, and B. Schoelkopf. Optimization of k-space trajectories for compressed sensing by Bayesian experimental design. Magnetic Resonance in Medicine, 2010.
[22] C. G. Taborda and F. Perez-Cruz. Mutual information and relative entropy over the binomial and negative binomial channels. In IEEE International Symposium on Information Theory Proceedings (ISIT), pages 696-700. IEEE, 2012.
[23] S. Verdú. Mismatched estimation and relative entropy. IEEE Transactions on Information Theory, 56(8):3712-3720, Aug. 2010.
[24] T. Weissman. The relationship between causal and noncausal mismatched estimation in continuous-time AWGN channels. IEEE Transactions on Information Theory, 2010.
[25] D. S. Wilcox, G. T. Buzzard, B. J. Lucier, P. Wang, and D. Ben-Amotz. Photon level chemical classification using digital compressive detection. Analytica Chimica Acta, 2012.
[26] M. Zhou, L. Hannah, D. Dunson, and L. Carin. Beta-negative binomial process and Poisson factor analysis. AISTATS, 2012.
Dirty Statistical Models
Eunho Yang
Department of Computer Science
University of Texas at Austin
[email protected]
Pradeep Ravikumar
Department of Computer Science
University of Texas at Austin
[email protected]
Abstract
We provide a unified framework for the high-dimensional analysis of
"superposition-structured" or "dirty" statistical models: where the model parameters are a superposition of structurally constrained parameters. We allow for any number and types of structures, and any statistical model. We consider the general class of M-estimators that minimize the sum of any loss function, and an instance of what we call a "hybrid" regularization, that is the infimal convolution
of weighted regularization functions, one for each structural component. We provide corollaries showcasing our unified framework for varied statistical models
such as linear regression, multiple regression and principal component analysis,
over varied superposition structures.
1 Introduction
High-dimensional statistical models have been the subject of considerable focus over the past
decade, both theoretically as well as in practice. In these high-dimensional models, the ambient
dimension of the problem p may be of the same order as, or even substantially larger than the sample
size n. It has now become well understood that even in this type of high-dimensional p
n scaling, it is possible to obtain statistically consistent estimators provided one imposes structural constraints on the statistical models. Examples of such structural constraints include sparsity constraints
(e.g. compressed sensing), graph-structure (for graphical model estimation), low-rank structure (for
matrix-structured problems), and sparse additive structure (for non-parametric models), among others. For each of these structural constraints, a large body of work have proposed and analyzed
statistically consistent estimators. For instance, a key subclass leverage such structural constraints
via specific regularization functions. Examples include `1 -regularization for sparse models, nuclear
norm regularization for low-rank matrix-structured models, and so on.
A caveat to this strong line of work is that imposing such "clean" structural constraints as sparsity or low-rank structure is typically too stringent for real-world messy data. What if the parameters are not exactly sparse, or not exactly low rank? Indeed, over the last couple of years, there has been an emerging line of work that addresses this caveat by "mixing and matching" different structures. Chandrasekaran et al. [5] consider the problem of recovering an unknown low-rank and
structures. Chandrasekaran et al. [5] consider the problem of recovering an unknown low-rank and
an unknown sparse matrix, given the sum of the two matrices; for which they point to applications
in system identification in linear time-invariant systems, and optical imaging systems among others.
Chandrasekaran et al. [6] also apply this matrix decomposition estimation to the learning of latentvariable Gaussian graphical models, where they estimate an inverse covariance matrix that is the sum
of sparse and low-rank matrices. A number of papers have applied such decomposition estimation
to robust principal component analysis: Cand`es et al. [3] learn a covariance matrix that is the sum
of a low-rank factored matrix and a sparse ?error/outlier? matrix, while [9, 15] learn a covariance
matrix that is the sum of a low-rank matrix and a column-sparse error matrix. Hsu et al. [7] analyze
this estimation of a sum of a low-rank and elementwise sparse matrix in the noisy setting; while
Agarwal et al. [1] extend this to the sum of a low-rank matrix and a matrix with general structure.
Another application is multi-task learning, where [8] learn a multiple-linear-regression coefficient
1
matrix that is the sum of a sparse and a block-sparse matrix. This strong line of work can be seen to
follow the resume of estimating a superposition of two structures; and indeed their results show this
simple extension provides a vast increase in the practical applicability of structurally constrained
models. The statistical guarantees in these papers for the corresponding M -estimators typically
require fairly extensive technical arguments that extend the analyses of specific single-structured
regularized estimators in highly non-trivial ways.
This long line of work on M-estimators and analyses for specific pairs of superposition
structures for specific statistical models leads to the question: is there a unified framework for studying any general tuple (i.e. not just a pair) of structures, for any general statistical model? This is
precisely the focus of this paper: we provide a unified framework of "superposition-structured" or
"dirty" statistical models, with any number and any types of structures, for any statistical model.
By such "superposition-structure," we mean the constraint that the parameter be a superposition of
"clean" structurally constrained parameters. In addition to the motivation above, of unifying the
burgeoning list of works above, as well as to provide guarantees for many novel superpositions (of,
for instance, more than two structures) not yet considered in the literature, another key motivation is
to provide insights on the key ingredients characterizing the statistical guarantees for such dirty statistical models. Our unified analysis allows the following very general class of M-estimators, which
are the sum of any loss function, and an instance of what we call a "hybrid" regularization function, that is the infimal convolution of any weighted regularization functions, one for each structural
component. As we show, this is equivalent to an M-estimator that is the sum of (a) a loss function
applied to the sum of the multiple parameter vectors, one corresponding to each structural component; and (b) a weighted sum of regularization functions, one for each of the parameter vectors. We
stress that our analysis allows for general loss functions, and general component regularization functions. We provide corollaries showcasing our unified framework for varied statistical models such as
linear regression, multiple regression and principal component analysis, over varied superposition
structures.
2
Problem Setup
We consider the following general statistical modeling setting. Consider a random variable Z with
distribution $\mathbb{P}$, and suppose we are given n observations $Z_1^n := \{Z_1, \ldots, Z_n\}$ drawn i.i.d. from $\mathbb{P}$.
We are interested in estimating some parameter $\theta^* \in \mathbb{R}^p$ of the distribution $\mathbb{P}$. We assume that
the statistical model parameter $\theta^*$ is "superposition-structured," so that it is the sum of parameter
components, each of which is constrained by a specific structure. For a formalization of the notion
of structure, we first review some terminology from [11]. There, they use subspace pairs $(\mathcal{M}, \bar{\mathcal{M}}^\perp)$,
where $\mathcal{M} \subseteq \bar{\mathcal{M}}$, to capture any structured parameter. $\mathcal{M}$ is the model subspace that captures
the constraints imposed on the model parameter, and is typically low-dimensional. $\bar{\mathcal{M}}^\perp$ is the
perturbation subspace of parameters that represents perturbations away from the model subspace.
They also define the property of decomposability of a regularization function, which captures the
suitability of a regularization function $\mathcal{R}$ to a particular structure. Specifically, a regularization function
$\mathcal{R}$ is said to be decomposable with respect to a subspace pair $(\mathcal{M}, \bar{\mathcal{M}}^\perp)$, if
$$\mathcal{R}(u + v) = \mathcal{R}(u) + \mathcal{R}(v), \qquad \text{for all } u \in \mathcal{M},\ v \in \bar{\mathcal{M}}^\perp.$$
For any structure such as sparsity, low-rank, etc., we can define the corresponding low-dimensional
model subspaces, as well as regularization functions that are decomposable with respect to the corresponding subspace pairs.
I. Sparse vectors. Given any subset $S \subseteq \{1, \ldots, p\}$ of the coordinates, let $\mathcal{M}(S)$ be the subspace
of vectors in $\mathbb{R}^p$ that have support contained in S. It can be seen that any parameter $\theta \in \mathcal{M}(S)$
would be at most $|S|$-sparse. For this case, we use $\bar{\mathcal{M}}(S) = \mathcal{M}(S)$, so that $\bar{\mathcal{M}}^\perp(S) = \mathcal{M}^\perp(S)$. As
shown in [11], the $\ell_1$ norm $\mathcal{R}(\theta) = \|\theta\|_1$, commonly used as a sparsity-encouraging regularization
function, is decomposable with respect to the subspace pairs $(\mathcal{M}(S), \bar{\mathcal{M}}^\perp(S))$.
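As a quick numerical illustration of this decomposability property (our own sanity check, not from the paper), the snippet below verifies that the $\ell_1$ norm splits additively over a support set S and its complement:

```python
import numpy as np

# Sanity check: the l1 norm is decomposable with respect to complementary
# supports. For u supported on S and v supported on the complement of S,
# ||u + v||_1 = ||u||_1 + ||v||_1, since the two vectors never overlap.
rng = np.random.default_rng(0)
p = 10
S = np.array([0, 3, 7])                     # model subspace M(S): support in S
u = np.zeros(p)
u[S] = rng.normal(size=S.size)
v = rng.normal(size=p)
v[S] = 0.0                                  # perturbation subspace: support in S^c
lhs = np.abs(u + v).sum()
rhs = np.abs(u).sum() + np.abs(v).sum()
assert np.isclose(lhs, rhs)
print("l1 decomposability holds:", np.isclose(lhs, rhs))
```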
II. Low-rank matrices. Consider the class of matrices $\Theta \in \mathbb{R}^{k \times m}$ that have rank $r \le \min\{k, m\}$.
For any given matrix $\Theta$, we let $\mathrm{row}(\Theta) \subseteq \mathbb{R}^m$ and $\mathrm{col}(\Theta) \subseteq \mathbb{R}^k$ denote its row space and column
space respectively. For a given pair of r-dimensional subspaces $U \subseteq \mathbb{R}^k$ and $V \subseteq \mathbb{R}^m$, we define
the subspace pairs as follows: $\mathcal{M}(U, V) := \{\Theta \in \mathbb{R}^{k \times m} \mid \mathrm{row}(\Theta) \subseteq V,\ \mathrm{col}(\Theta) \subseteq U\}$ and
$\bar{\mathcal{M}}^\perp(U, V) := \{\Theta \in \mathbb{R}^{k \times m} \mid \mathrm{row}(\Theta) \subseteq V^\perp,\ \mathrm{col}(\Theta) \subseteq U^\perp\}$. As [11] show, the nuclear norm
$\mathcal{R}(\Theta) = |||\Theta|||_1$ is decomposable with respect to the subspace pairs $(\mathcal{M}(U, V), \bar{\mathcal{M}}^\perp(U, V))$.
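A similar check works for the nuclear norm (again an illustrative sketch of ours): for matrices whose row and column spaces lie in mutually orthogonal subspaces, the singular values of the sum are the union of the two sets of singular values, so the nuclear norm adds:

```python
import numpy as np

def nuclear(M):
    # Nuclear norm: sum of singular values.
    return np.linalg.svd(M, compute_uv=False).sum()

rng = np.random.default_rng(1)
# A lives in M(U, V) for U, V spanned by the first two coordinate directions;
# B lives in the orthogonal-complement subspace (remaining coordinates).
A = np.zeros((5, 6))
A[:2, :2] = rng.normal(size=(2, 2))
B = np.zeros((5, 6))
B[2:, 2:] = rng.normal(size=(3, 4))
assert np.isclose(nuclear(A + B), nuclear(A) + nuclear(B))
print("nuclear-norm decomposability holds")
```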
In our dirty statistical model setting, we do not just have one, but a set of structures;
suppose we index them by the set I. Our key structural constraint can then be stated as: $\theta^* = \sum_{\alpha \in I} \theta^*_\alpha$, where
$\theta^*_\alpha$ is a "clean" structured parameter with respect to a subspace pair $(\mathcal{M}_\alpha, \bar{\mathcal{M}}_\alpha^\perp)$, for $\mathcal{M}_\alpha \subseteq \bar{\mathcal{M}}_\alpha$.
We also assume we are given a set of regularization functions $\mathcal{R}_\alpha(\cdot)$, for $\alpha \in I$, that are suited to
the respective structures, in the sense that they are decomposable with respect to the subspace pairs
$(\mathcal{M}_\alpha, \bar{\mathcal{M}}_\alpha^\perp)$.
Let $\mathcal{L} : \Omega \times \mathcal{Z}^n \to \mathbb{R}$ be some loss function that assigns a cost to any parameter $\theta \in \Omega \subseteq \mathbb{R}^p$, for a
given set of observations $Z_1^n$. For ease of notation, in the sequel, we adopt the shorthand $\mathcal{L}(\theta)$ for
$\mathcal{L}(\theta; Z_1^n)$. We are interested in the following "superposition" estimator:
$$\min_{(\theta_\alpha)_{\alpha \in I}} \ \mathcal{L}\Big(\sum_{\alpha \in I} \theta_\alpha\Big) + \sum_{\alpha \in I} \lambda_\alpha \mathcal{R}_\alpha(\theta_\alpha), \qquad (1)$$
where $(\lambda_\alpha)_{\alpha \in I}$ are the regularization penalties. This optimization problem involves not just one
parameter vector, but multiple parameter vectors, one for each structural component: while the
loss function applies only to the sum of these, separate regularization functions are applied to the
corresponding parameter vectors. We will now see that this can be re-written as a standard M-estimation problem which minimizes, over a single parameter vector, the sum of a loss function and
a special "dirty" regularization function.
Given a vector $c := (c_\alpha)_{\alpha \in I}$ of convex-combination weights, suppose we define the following
"dirty" regularization function, that is the infimal convolution of a set of regularization functions:
$$\mathcal{R}(\theta; c) = \inf\Big\{\sum_{\alpha \in I} c_\alpha \mathcal{R}_\alpha(\theta_\alpha) \ : \ \sum_{\alpha \in I} \theta_\alpha = \theta\Big\}. \qquad (2)$$
It can be shown that provided the individual regularization functions $\mathcal{R}_\alpha(\cdot)$, for $\alpha \in I$, are norms,
$\mathcal{R}(\cdot\,; c)$ is a norm as well. We discuss this and other properties of this hybrid regularization function
$\mathcal{R}(\cdot\,; c)$ in Appendix A.
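A small worked instance of this infimal convolution (our own toy example): when both components are weighted $\ell_1$ norms, all mass is routed through the cheaper component, so $\mathcal{R}(\cdot\,;c)$ collapses to $\min(c_1, c_2)$ times the $\ell_1$ norm. The grid search below confirms this in one dimension:

```python
import numpy as np

# Illustration: the infimal convolution of two weighted l1 norms,
# R(x; c) = inf_t { c1*|t| + c2*|x - t| }, equals min(c1, c2)*|x|,
# because the optimal split routes everything through the cheaper norm.
c1, c2, x = 0.7, 1.3, 2.5
t = np.linspace(-10, 10, 200001)          # fine grid over the split variable
R = np.min(c1 * np.abs(t) + c2 * np.abs(x - t))
assert abs(R - min(c1, c2) * abs(x)) < 1e-3
print("infimal convolution value:", R)
```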
Proposition 1. Suppose $(\hat{\theta}_\alpha)_{\alpha \in I}$ is the solution to the M-estimation problem in (1). Then $\hat{\theta} := \sum_{\alpha \in I} \hat{\theta}_\alpha$ is the solution to the following problem:
$$\min_{\theta \in \Omega} \ \mathcal{L}(\theta) + \lambda\, \mathcal{R}(\theta; c), \qquad (3)$$
where $c_\alpha = \lambda_\alpha / \lambda$. Similarly, if $\hat{\theta}$ is the solution to (3), then there is a solution $(\hat{\theta}_\alpha)_{\alpha \in I}$ to the
M-estimation problem (1), such that $\hat{\theta} := \sum_{\alpha \in I} \hat{\theta}_\alpha$.
Proposition 1 shows that the optimization problems (1) and (3) are equivalent. While the tuning
parameters in (1) correspond to the regularization penalties $(\lambda_\alpha)_{\alpha \in I}$, the tuning parameters in (3)
correspond to the weights $(c_\alpha)_{\alpha \in I}$ specifying the "dirty" regularization function. In our unified
analysis theorem, we will provide guidance on setting these tuning parameters as a function of
various model parameters.
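As a scalar sanity check of this equivalence (a toy example of ours, not the paper's), one can compare the joint problem (1) with the collapsed problem (3) when both regularizers are absolute values, in which case the hybrid norm reduces to $\min(\lambda_1, \lambda_2)|\cdot|$:

```python
import numpy as np

# Equivalence of (1) and (3) in a scalar toy case with R1 = R2 = |.|:
# min over (t1, t2) of (y - t1 - t2)^2 + l1|t1| + l2|t2|  equals
# min over t of (y - t)^2 + min(l1, l2)|t|, by Proposition 1.
y, l1, l2 = 1.0, 0.3, 0.5
t = np.linspace(-2, 2, 2001)
tg = np.linspace(-2, 2, 1001)
t1, t2 = np.meshgrid(tg, tg)
joint = np.min((y - t1 - t2) ** 2 + l1 * np.abs(t1) + l2 * np.abs(t2))
single = np.min((y - t) ** 2 + min(l1, l2) * np.abs(t))
assert abs(joint - single) < 1e-3
print("joint optimum:", round(joint, 4), "collapsed optimum:", round(single, 4))
```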
3
Error Bounds for Convex M-estimators
Our goal is to provide error bounds $\|\hat{\theta} - \theta^*\|$, between the target parameter $\theta^*$, the minimizer of the
population risk, and our M-estimate $\hat{\theta}$ from (1), for any error norm $\|\cdot\|$. A common example of an
error norm for instance is the $\ell_2$ norm $\|\cdot\|_2$. We now turn to the properties of the loss function and
regularization function that underlie our analysis. We first restate some natural assumptions on the
loss and regularization functions.
(C1) The loss function $\mathcal{L}$ is convex and differentiable.
(C2) The regularizers $\mathcal{R}_\alpha$ are norms, and are decomposable with respect to the subspace pairs
$(\mathcal{M}_\alpha, \bar{\mathcal{M}}_\alpha^\perp)$, where $\mathcal{M}_\alpha \subseteq \bar{\mathcal{M}}_\alpha$.
Our next assumption is a restricted strong convexity assumption [11]. Specifically, we will require
the loss function $\mathcal{L}$ to satisfy:
(C3) (Restricted Strong Convexity) For all $\Delta_\alpha \in \Omega_\alpha$, where $\Omega_\alpha$ is the parameter space for the
parameter component $\alpha$,
$$\delta\mathcal{L}(\Delta_\alpha; \theta^*) := \mathcal{L}(\theta^* + \Delta_\alpha) - \mathcal{L}(\theta^*) - \langle \nabla \mathcal{L}(\theta^*), \Delta_\alpha \rangle \ \ge\ \kappa_L \|\Delta_\alpha\|^2 - g_\alpha \mathcal{R}_\alpha^2(\Delta_\alpha),$$
where $\kappa_L$ is a "curvature" parameter, and $g_\alpha$ is a "tolerance" parameter.
Note that these conditions (C1)-(C3) are imposed even when the model has a single clean structural
constraint; see [11]. Note that $g_\alpha$ is usually a function of the problem size decreasing in the sample
size; in the standard Lasso with $|I| = 1$ for instance, $g_\alpha = \frac{\log p}{n}$.
Our next assumption is on the interaction between the different structured components.
(C4) (Structural Incoherence) For all $\Delta_\alpha \in \Omega_\alpha$,
$$\mathcal{L}\Big(\theta^* + \sum_{\alpha \in I} \Delta_\alpha\Big) + (|I| - 1)\,\mathcal{L}(\theta^*) - \sum_{\alpha \in I} \mathcal{L}(\theta^* + \Delta_\alpha) \ \le\ \frac{\kappa_L}{2} \sum_{\alpha \in I} \|\Delta_\alpha\|^2 + \sum_{\alpha \in I} h_\alpha \mathcal{R}_\alpha^2(\Delta_\alpha).$$
Note that for a model with a single clean structural constraint, with $|I| = 1$, the condition (C4) is
trivially satisfied since the LHS becomes 0. We will see in the sequel that for a large collection
of loss functions including all linear loss functions, the condition (C4) simplifies considerably, and
moreover holds with high probability, typically with $h_\alpha = 0$. We note that this condition is much
weaker than "incoherence" conditions typically imposed when analyzing specific instances of such
superposition-structured models (see e.g. references in the introduction), where the assumptions
typically include (a) assuming that the structured subspaces $(\mathcal{M}_\alpha)_{\alpha \in I}$ intersect only at $\{0\}$, and (b)
that the sizes of these subspaces are extremely small.
Finally, we will use the notion of the subspace compatibility constant defined in [11], which captures the
relationship between the regularization function $\mathcal{R}(\cdot)$ and the error norm $\|\cdot\|$, over vectors in the
subspace $\mathcal{M}$: $\Psi(\mathcal{M}, \|\cdot\|) := \sup_{u \in \mathcal{M} \setminus \{0\}} \frac{\mathcal{R}(u)}{\|u\|}$.
Theorem 1. Suppose we solve the M-estimation problem in (3), with hybrid regularization
$\mathcal{R}(\theta; c)$, where the convex-combination weights $c$ are set as $c_\alpha = \lambda_\alpha / \sum_{\alpha \in I} \lambda_\alpha$, with $\lambda_\alpha \ge 2\mathcal{R}_\alpha^*\big(\nabla \mathcal{L}(\theta^*; Z_1^n)\big)$. Further, suppose conditions (C1) - (C4) are satisfied. Then, the parameter error bounds are given as:
$$\|\hat{\theta} - \theta^*\| \ \le\ \frac{3|I|}{2\bar{\kappa}_L} \max_{\alpha \in I} \lambda_\alpha \Psi(\bar{\mathcal{M}}_\alpha) \ +\ \sqrt{|I|}\,\sqrt{\varepsilon_L / \bar{\kappa}_L},$$
where
$$\bar{\kappa}_L := \frac{\kappa_L}{2} - \frac{32\,\bar{g}\,|I|}{\bar{\lambda}^2} \Big(\max_{\alpha \in I} \lambda_\alpha \Psi(\bar{\mathcal{M}}_\alpha)\Big)^2, \qquad \bar{g} := \max_{\alpha \in I}\,(g_\alpha + h_\alpha),$$
$$\varepsilon_L := \sum_{\alpha \in I} \Big[\, 32\,\bar{g}\,\frac{\lambda_\alpha^2}{\bar{\lambda}^2}\, \mathcal{R}_\alpha^2\big(\Pi_{\bar{\mathcal{M}}_\alpha^\perp}(\theta^*_\alpha)\big) + \lambda_\alpha\, \mathcal{R}_\alpha\big(\Pi_{\bar{\mathcal{M}}_\alpha^\perp}(\theta^*_\alpha)\big) \Big], \qquad \bar{\lambda} := \frac{1}{|I|}\sum_{\alpha \in I} \lambda_\alpha.$$
Remarks: (R1) It is instructive to compare Theorem 1 to the main theorem in [11], where they
derive parameter error bounds for any M-estimator with a decomposable regularizer, for any
"clean" structure. Our theorem can be viewed as a generalization: we recover their theorem
when we have a single structure with $|I| = 1$. We cannot derive our result in turn from their
theorem applied to the M-estimator (3) with the hybrid regularization function $\mathcal{R}(\theta; c)$: the
"superposition" structure is not captured by a pair of subspaces, nor is the hybrid regularization
function decomposable, as is required by their theorem. Our setting as well as our analysis is strictly
more general, because of which we needed the additional structural incoherence assumption (C4)
(which is trivially satisfied when $|I| = 1$).
(R2) Agarwal et al. [1] provide Frobenius norm error bounds for the matrix-decomposition problem
of recovering the sum of a low-rank and a general structured matrix. In addition to the greater
generality of our theorem and framework, Theorem 1 addresses two key drawbacks of their
theorem even in their specific setting. First, the proof for their theorem requires the regularization
penalty for the second structure to be strongly bounded away from zero: their convergence rate
does not approach zero even with an infinite number of samples n. Theorem 1, in contrast, imposes
the weaker condition $\lambda_\alpha \ge 2\mathcal{R}_\alpha^*\big(\nabla \mathcal{L}(\theta^*; Z_1^n)\big)$, which as we show in the corollaries, allows
for the convergence rates to go to zero as a function of the samples. Second, they assumed much
stronger conditions for their theorem to hold; in Theorem 1, in contrast, we pose much milder
"local" RSC conditions (C3), and a structural incoherence condition (C4).
(R3) The statement in the theorem is deterministic for fixed choices of $(\lambda_\alpha)$. We also note that
the theorem holds for any set of subspace pairs $(\mathcal{M}_\alpha, \bar{\mathcal{M}}_\alpha^\perp)_{\alpha \in I}$ with respect to which the corresponding regularizers are decomposable. As noted earlier, $\mathcal{M}_\alpha$ should ideally be set to
the structured subspace in which the true parameter at least approximately lies, and which we
want to be as small as possible (note that the bound includes a term that depends on the size
of this subspace via the subspace compatibility constant). In particular, if we assume that the
subspaces are chosen so that $\Pi_{\bar{\mathcal{M}}_\alpha^\perp}(\theta^*_\alpha) = 0$, i.e. $\theta^*_\alpha \in \bar{\mathcal{M}}_\alpha$, then we obtain the simpler bound in
the following corollary.
Corollary 1. Suppose we solve the M-estimation problem in (1), with hybrid regularization
$\mathcal{R}(\theta; c)$, where the convex-combination weights $c$ are set as $c_\alpha = \lambda_\alpha / \sum_{\alpha \in I} \lambda_\alpha$, with $\lambda_\alpha \ge 2\mathcal{R}_\alpha^*\big(\nabla \mathcal{L}(\theta^*; Z_1^n)\big)$, and suppose conditions (C1) - (C4) are satisfied. Further, suppose that the
subspace pairs are chosen so that $\theta^*_\alpha \in \bar{\mathcal{M}}_\alpha$. Then, the parameter error bounds are given as:
$$\|\hat{\theta} - \theta^*\| \ \le\ \frac{3|I|}{2\bar{\kappa}_L} \max_{\alpha \in I} \lambda_\alpha \Psi(\bar{\mathcal{M}}_\alpha).$$
It is now instructive to compare the bounds of Theorem 1 and Corollary 1. Theorem 1 has two terms,
the first of which is the sole term in the bound in Corollary 1. This first term can be thought of as
the "estimation error" component of the error bound, when the parameter has exactly the structure
being modeled by the regularizers. The second term can be thought of as the "approximation error"
component of the error bound, which is the penalty for the parameter not exactly lying in the structured subspaces modeled by the regularizers. The key term in the "estimation error" component, in
Theorem 1 and Corollary 1, is
$$\max_{\alpha \in I} \lambda_\alpha \Psi(\bar{\mathcal{M}}_\alpha).$$
Note that each $\lambda_\alpha$ is larger than a particular norm of the sample score function (the gradient of the loss
at the true parameter): since the expected value of the score function is zero, the magnitude of the
sample score function captures the amount of "noise" in the data. This is in turn scaled by $\Psi(\bar{\mathcal{M}}_\alpha)$,
which captures the size of the structured subspace corresponding to the parameter component $\theta^*_\alpha$. This term
can thus be thought of as capturing the amount of noise in the data relative to the particular structure
at hand.
We now provide corollaries showcasing our unified framework for varied statistical models such as
linear regression, multiple regression and principal component analysis, over varied superposition
structures.
4
Convergence Rates for Linear Regression
In this section, we consider the linear regression model:
$$Y = X\theta^* + w, \qquad (4)$$
where $Y \in \mathbb{R}^n$ is the observation vector, and $\theta^* \in \mathbb{R}^p$ is the true parameter. $X \in \mathbb{R}^{n \times p}$ is the
"observation" matrix, while $w \in \mathbb{R}^n$ is the observation noise. For this class of statistical models, we
will consider the instantiation of (1) with the loss function $\mathcal{L}$ consisting of the squared loss:
$$\min_{(\theta_\alpha)_{\alpha \in I}} \ \Big\{ \frac{1}{n} \Big\| Y - X \sum_{\alpha \in I} \theta_\alpha \Big\|_2^2 + \sum_{\alpha \in I} \lambda_\alpha \mathcal{R}_\alpha(\theta_\alpha) \Big\}. \qquad (5)$$
For this regularized least squares estimator (5), conditions (C1)-(C2) in Theorem 1 trivially hold.
The restricted strong convexity condition (C3) reduces to the following. Noting that $\mathcal{L}(\theta^* + \Delta_\alpha) - \mathcal{L}(\theta^*) - \langle \nabla \mathcal{L}(\theta^*), \Delta_\alpha \rangle = \frac{1}{n}\|X\Delta_\alpha\|_2^2$, we obtain the following restricted eigenvalue condition:
(D3) $\frac{1}{n}\|X\Delta_\alpha\|_2^2 \ \ge\ \kappa_L \|\Delta_\alpha\|^2 - g_\alpha \mathcal{R}_\alpha^2(\Delta_\alpha)$ for all $\Delta_\alpha \in \Omega_\alpha$.
Finally, our structural incoherence condition reduces to the following. Noting that $\mathcal{L}(\theta^* + \sum_{\alpha \in I} \Delta_\alpha) + (|I| - 1)\mathcal{L}(\theta^*) - \sum_{\alpha \in I} \mathcal{L}(\theta^* + \Delta_\alpha) = \frac{2}{n}\sum_{\alpha < \beta} \langle X\Delta_\alpha, X\Delta_\beta \rangle$ in this specific
case,
(D4) $\frac{2}{n}\sum_{\alpha < \beta} \langle X\Delta_\alpha, X\Delta_\beta \rangle \ \le\ \frac{\kappa_L}{2} \sum_{\alpha \in I} \|\Delta_\alpha\|^2 + \sum_{\alpha \in I} h_\alpha \mathcal{R}_\alpha^2(\Delta_\alpha)$.
4.1
Structural Incoherence with Gaussian Design
We now show that the condition (D4) required for Theorem 1 holds with high probability when the
observation matrix is drawn from a so-called $\Sigma$-Gaussian ensemble: where each row $X_i$ is independently sampled from $N(0, \Sigma)$. Before doing so, we first state some assumptions on the population
covariance matrix $\Sigma$. Let $P_M$ denote the matrix corresponding to the projection operator for the
subspace $M$. We will then require the following assumption:
(C-Linear) Let $\bar{\sigma} := \max\Big\{ 1,\ 2\sqrt{2} + \frac{3}{2} \max_{\alpha \ne \beta \in I} \frac{\Psi_\alpha(\bar{\mathcal{M}}_\alpha)}{\Psi_\beta(\bar{\mathcal{M}}_\beta)} \Big\}$. For any $\alpha \ne \beta$,
$$\max\Big\{ \big\|P_{\bar{\mathcal{M}}_\alpha} \Sigma P_{\bar{\mathcal{M}}_\beta}\big\|,\ \big\|P_{\bar{\mathcal{M}}_\alpha} \Sigma P_{\bar{\mathcal{M}}_\beta^\perp}\big\|,\ \big\|P_{\bar{\mathcal{M}}_\alpha^\perp} \Sigma P_{\bar{\mathcal{M}}_\beta}\big\|,\ \big\|P_{\bar{\mathcal{M}}_\alpha^\perp} \Sigma P_{\bar{\mathcal{M}}_\beta^\perp}\big\| \Big\} \ \le\ \frac{\kappa_L}{8^2\,\bar{\sigma}^2\,|I|^2}. \qquad (6)$$
Proposition 2. Suppose each row $X_i$ of the observation matrix $X$ is independently sampled from
$N(0, \Sigma)$, and the condition (C-Linear) (6) holds. Further, suppose that $\Pi_{\bar{\mathcal{M}}_\alpha^\perp}(\theta^*_\alpha) = 0$, for all
$\alpha \in I$. Then, it holds that with probability at least $1 - \max\{n, p\}^{-4}$,
$$\frac{2}{n}\sum_{\alpha < \beta} \langle X\Delta_\alpha, X\Delta_\beta \rangle \ \le\ \frac{\kappa_L}{2} \sum_{\alpha} \|\Delta_\alpha\|_2^2,$$
when the number of samples scales as $n \ge c\, \frac{\bar{\sigma}^2 |I|^2}{\kappa_L^2} \Big( \max_\alpha \Psi(\bar{\mathcal{M}}_\alpha)^2 + \max\{\log p, \log n\} \Big)$,
for some constant $c$ that depends only on the distribution of $X$.
Condition (D3) is the usual restricted eigenvalue condition, which has been analyzed previously in
"clean-structured" model estimation, so that we can directly appeal to previous results [10, 12] to
show that it holds with high probability when the observation matrix is drawn from the $\Sigma$-Gaussian
ensemble.
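As an illustration of this restricted eigenvalue behavior (a Monte Carlo sketch of ours, with an identity covariance for simplicity), the quadratic form $\frac{1}{n}\|Xv\|_2^2$ stays bounded away from zero over random sparse directions once n is moderately large:

```python
import numpy as np

# Monte Carlo check: for a Gaussian design with n >> s log p, the quadratic
# form (1/n)||X v||_2^2 concentrates near 1 over unit-norm s-sparse v,
# so the restricted eigenvalue stays bounded away from zero.
rng = np.random.default_rng(2)
n, p, s = 500, 100, 5
X = rng.normal(size=(n, p))               # rows ~ N(0, I): a Sigma-Gaussian ensemble
vals = []
for _ in range(200):
    v = np.zeros(p)
    idx = rng.choice(p, size=s, replace=False)
    v[idx] = rng.normal(size=s)
    v /= np.linalg.norm(v)
    vals.append(np.sum((X @ v) ** 2) / n)
assert min(vals) > 0.5                    # far below 1 would contradict concentration
print("min restricted quadratic form over draws:", round(min(vals), 3))
```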
We are now ready to derive the consequences of the deterministic bound in Theorem 1 for the case
of the linear regression model above.
4.2
Linear Regression with Sparse and Group-sparse structures
We now consider the following superposition structure, comprised of both sparse and group-sparse
structures. Suppose that a set of groups $G = \{G_1, G_2, \ldots, G_q\}$ are disjoint subsets of the index set $\{1, \ldots, p\}$, each of size at most $|G_i| \le m$. Suppose that the linear regression parameter $\theta^*$ is
a superposition of a group-sparse component $\theta^*_g$ with respect to this set of groups $G$, as well as a
sparse component $\theta^*_s$ with respect to the remaining indices $\{1, \ldots, p\} \setminus \cup_{i=1}^q G_i$, so that $\theta^* = \theta^*_g + \theta^*_s$.
Then, we use the hybrid regularization function $\sum_{\alpha \in I} \lambda_\alpha \mathcal{R}_\alpha(\theta_\alpha) = \lambda_s \|\theta_s\|_1 + \lambda_g \|\theta_g\|_{1,a}$, where
$\|\theta\|_{1,a} := \sum_{t=1}^q \|\theta_{G_t}\|_a$ for $a \ge 2$.
Corollary 2. Consider the linear model (4) where $\theta^*$ is the sum of an exactly s-sparse $\theta^*_s$ and an exactly $s_g$
group-sparse $\theta^*_g$. Suppose that each row $X_i$ of the observation matrix $X$ is independently sampled
from $N(0, \Sigma)$. Further, suppose that (6) holds and $w$ is sub-Gaussian with parameter $\sigma$. Then, if we
solve (5) with
$$\lambda_s = 8\sigma\sqrt{\frac{\log p}{n}} \qquad \text{and} \qquad \lambda_g = 8\sigma\Big( \frac{m^{1-1/a}}{\sqrt{n}} + \sqrt{\frac{\log q}{n}} \Big),$$
then, with probability at least $1 - c_1 \exp(-c_2 n \lambda_s^2) - c_3/q^2$, we have the error bound:
$$\|\hat{\theta} - \theta^*\|_2 \ \le\ \frac{24\sigma}{\bar{\kappa}_L} \max\Big\{ \sqrt{\frac{s \log p}{n}},\ \sqrt{\frac{s_g\, m^{1-1/a}}{n}} + \sqrt{\frac{s_g \log q}{n}} \Big\}.$$
Let us briefly compare the result from Corollary 2 with those from single-structured regularized
estimators. Since the total sparsity of $\theta^*$ is bounded by $\|\theta^*\|_0 \le m s_g + s$, "clean" $\ell_1$-regularized
least squares, with high probability, gives the bound [11]: $\|\hat{\theta}_{\ell_1} - \theta^*\|_2 = O\Big(\sqrt{\frac{(m s_g + s)\log p}{n}}\Big)$. On
the other hand, the support of $\theta^*$ can also be interpreted as comprising $s_g + s$ disjoint groups in the
worst case, so that "clean" $\ell_1/\ell_2$ group regularization entails, with high probability, the bound [11]:
$\|\hat{\theta}_{\ell_1/\ell_2} - \theta^*\|_2 = O\Big(\sqrt{\frac{(s_g + s)m}{n}} + \sqrt{\frac{(s_g + s)\log q}{n}}\Big)$. We can easily verify that Corollary 2 achieves
better bounds, considering the fact that $p \le mq$.
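To make the estimator (5) concrete, the sketch below (our illustration; the paper does not prescribe an algorithm, and the group/coordinate sizes here are arbitrary) fits a sparse-plus-group-sparse model by proximal gradient descent. The smooth loss is shared between the two components, while the prox step splits into soft thresholding for the sparse block and group soft thresholding for the group block; the regularization levels follow the scalings of Corollary 2 with a = 2:

```python
import numpy as np

# Proximal gradient sketch for
#   min (1/n)||y - X(ts + tg)||^2 + ls*||ts||_1 + lg*sum_t ||tg_{G_t}||_2.
rng = np.random.default_rng(3)
n, p, m, q = 400, 40, 4, 5                 # q groups of size m on coords 0..19
groups = [list(range(t * m, (t + 1) * m)) for t in range(q)]
theta_g = np.zeros(p); theta_g[groups[0]] = 1.0          # one active group
theta_s = np.zeros(p); theta_s[25] = 2.0; theta_s[33] = -1.5  # sparse outside groups
X = rng.normal(size=(n, p))
y = X @ (theta_g + theta_s) + 0.1 * rng.normal(size=n)

sigma = 0.1
ls = 8 * sigma * np.sqrt(np.log(p) / n)                   # Corollary 2 scaling
lg = 8 * sigma * (np.sqrt(m / n) + np.sqrt(np.log(q) / n))
ts, tg = np.zeros(p), np.zeros(p)
H = 2 * X.T @ X / n
eta = 1.0 / (2 * np.linalg.eigvalsh(H)[-1])  # step size valid for the stacked variable
for _ in range(500):
    g = -2 * X.T @ (y - X @ (ts + tg)) / n   # shared gradient of the smooth loss
    ts = np.sign(ts - eta * g) * np.maximum(np.abs(ts - eta * g) - eta * ls, 0)
    z = tg - eta * g
    for G in groups:                          # group soft-thresholding
        nz = np.linalg.norm(z[G])
        z[G] = 0.0 if nz <= eta * lg else (1 - eta * lg / nz) * z[G]
    z[q * m:] = 0.0                           # group component lives on grouped coords
    tg = z
rel_err = np.linalg.norm(ts + tg - (theta_s + theta_g)) / np.linalg.norm(theta_s + theta_g)
assert rel_err < 0.5
print("relative error of combined estimate:", round(rel_err, 3))
```

Note that only the sum $\hat{\theta}_s + \hat{\theta}_g$ is compared to $\theta^*$; the individual components need not be identified.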
5
Convergence Rates for Multiple Regression
In this section, we consider the multiple linear regression model, with m linear regressions written
jointly as
$$Y = X\Theta^* + W, \qquad (7)$$
where $Y \in \mathbb{R}^{n \times m}$ is the observation matrix, with each column corresponding to a separate linear
regression task, and $\Theta^* \in \mathbb{R}^{p \times m}$ is the collated set of parameters. $X \in \mathbb{R}^{n \times p}$ is the "observation"
matrix, while $W \in \mathbb{R}^{n \times m}$ is the collated set of observation noise vectors. For this class of statistical
models, we will consider the instantiation of (1) with the loss function $\mathcal{L}$ consisting of the squared
loss:
$$\min_{(\Theta_\alpha)_{\alpha \in I}} \ \Big\{ \frac{1}{n} \Big|\Big|\Big| Y - X \sum_{\alpha \in I} \Theta_\alpha \Big|\Big|\Big|_F^2 + \sum_{\alpha \in I} \lambda_\alpha \mathcal{R}_\alpha(\Theta_\alpha) \Big\}. \qquad (8)$$
In contrast to the linear regression model in the previous section, the model (7) has a matrix-structured parameter; nonetheless, conditions (C3)-(C4) in Theorem 1 reduce to the following conditions that are very similar to those in the previous section, with the Frobenius norm replacing the
$\ell_2$ norm:
(D3) $\frac{1}{n}|||X\Delta_\alpha|||_F^2 \ \ge\ \kappa_L \|\Delta_\alpha\|^2 - g_\alpha \mathcal{R}_\alpha^2(\Delta_\alpha)$ for all $\Delta_\alpha \in \Omega_\alpha$.
(D4) $\frac{2}{n}\sum_{\alpha < \beta} \langle\langle X\Delta_\alpha, X\Delta_\beta \rangle\rangle \ \le\ \frac{\kappa_L}{2} \sum_{\alpha \in I} \|\Delta_\alpha\|^2 + \sum_{\alpha \in I} h_\alpha \mathcal{R}_\alpha^2(\Delta_\alpha)$,
where the notation $\langle\langle A, B \rangle\rangle$ denotes the trace inner product, $\mathrm{trace}(A^\top B) = \sum_i \sum_j A_{ij} B_{ij}$.
As in the previous linear regression example, we again impose the assumption (C-Linear) on the
population covariance matrix of a $\Sigma$-Gaussian ensemble, but in this case with the notational change
of $P_{\bar{\mathcal{M}}_\alpha}$ denoting the matrix corresponding to the projection operator onto the row spaces of matrices in
$\bar{\mathcal{M}}_\alpha$. Thus, with the low-rank matrix structure discussed in Section 2, we would have $P_{\bar{\mathcal{M}}_\alpha} = UU^\top$.
Under the (C-Linear) assumption, the following proposition then extends Proposition 2:
Proposition 3. Consider the problem (8) with the matrix parameter $\Theta$. Under the same assumptions
as in Proposition 2, we have with probability at least $1 - \max\{n, p\}^{-4}$,
$$\frac{2}{n}\sum_{\alpha < \beta} \langle\langle X\Delta_\alpha, X\Delta_\beta \rangle\rangle \ \le\ \frac{\kappa_L}{2} \sum_{\alpha} |||\Delta_\alpha|||_F^2.$$
Consider an instance of this multiple linear regression model with the superposition structure consisting of row-sparse, column-sparse and elementwise sparse matrices: $\Theta^* = \Theta^*_r + \Theta^*_c + \Theta^*_s$. In order
to obtain estimators for this model, we use the hybrid regularization function $\sum_{\alpha \in I} \lambda_\alpha \mathcal{R}_\alpha(\Theta_\alpha) = \lambda_r \|\Theta_r\|_{r,a} + \lambda_c \|\Theta_c\|_{c,a} + \lambda_s \|\Theta_s\|_1$, where $\|\cdot\|_{r,a}$ denotes the sum of the $\ell_a$ norms of the rows, for $a \ge 2$,
and similarly $\|\cdot\|_{c,a}$ is the sum of the $\ell_a$ norms of the columns, and $\|\cdot\|_1$ is the entrywise $\ell_1$ norm for matrices.
Corollary 3. Consider the multiple linear regression model (7) where $\Theta^*$ is the sum of $\Theta^*_r$ with $s_r$
nonzero rows, $\Theta^*_c$ with $s_c$ nonzero columns, and $\Theta^*_s$ with $s$ nonzero elements. Suppose that the design
matrix $X$ is a $\Sigma$-Gaussian ensemble with the properties of column normalization and $\sigma_{\max}(X) \le \sqrt{n}$.
Further, suppose that (6) holds and $W$ is elementwise sub-Gaussian with parameter $\sigma$. Then, if we
solve (8) with
$$\lambda_s = 8\sigma\sqrt{\frac{\log p + \log m}{n}}, \quad \lambda_r = 8\sigma\Big( \frac{m^{1-1/a}}{\sqrt{n}} + \sqrt{\frac{\log p}{n}} \Big), \quad \text{and} \quad \lambda_c = 8\sigma\Big( \frac{p^{1-1/a}}{\sqrt{n}} + \sqrt{\frac{\log m}{n}} \Big),$$
then with probability at least $1 - c_1 \exp(-c_2 n \lambda_s^2) - \frac{c_3}{p^2} - \frac{c_3}{m^2}$, the error of the estimate $\hat{\Theta}$ is bounded as:
$$|||\hat{\Theta} - \Theta^*|||_F \ \le\ \frac{36\sigma}{\bar{\kappa}_L} \max\Big\{ \sqrt{\frac{s(\log p + \log m)}{n}},\ \sqrt{\frac{s_r\, m^{1-1/a}}{n}} + \sqrt{\frac{s_r \log p}{n}},\ \sqrt{\frac{s_c\, p^{1-1/a}}{n}} + \sqrt{\frac{s_c \log m}{n}} \Big\}.$$
6
Convergence Rates for Principal Component Analysis
In this section, we consider the robust/noisy principal component analysis problem, where we are
given n i.i.d. random vectors $Z_i \in \mathbb{R}^p$ where $Z_i = U_i + v_i$. $U_i \sim N(0, \Theta^*)$ is the "uncorrupted" set
of observations, with a low-rank covariance matrix $\Theta^* = LL^\top$, for some loading matrix $L \in \mathbb{R}^{p \times r}$.
$v_i \in \mathbb{R}^p$ is a noise/error vector; in standard factor analysis, $v_i$ is a spherical Gaussian noise vector:
$v_i \sim N(0, \sigma^2 I_{p \times p})$ (or $v_i = 0$); and the goal is to recover the loading matrix $L$ from samples.
In PCA with sparse noise, $v_i \sim N(0, \Gamma^*)$, where $\Gamma^*$ is elementwise sparse. In this case, the covariance matrix of $Z_i$ has the form $\Sigma = \Theta^* + \Gamma^*$, where $\Theta^*$ is low-rank, and $\Gamma^*$ is sparse. We can thus
write the sample covariance model as: $Y := \frac{1}{n}\sum_{i=1}^n Z_i Z_i^\top = \Theta^* + \Gamma^* + W$, where $W \in \mathbb{R}^{p \times p}$
is a Wishart-distributed random matrix. For this class of statistical models, we will consider the
following instantiation of (1):
$$\min_{(\Theta, \Gamma)} \ |||Y - \Theta - \Gamma|||_F^2 + \lambda_\Theta |||\Theta|||_1 + \lambda_\Gamma \|\Gamma\|_1, \qquad (9)$$
where $|||\cdot|||_1$ denotes the nuclear norm while $\|\cdot\|_1$ denotes the element-wise $\ell_1$ norm (we will use $|||\cdot|||_2$
for the spectral norm).
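To make the estimator (9) concrete, the sketch below (our illustration; the paper itself does not prescribe an algorithm, and the penalty levels here are arbitrary) runs exact block coordinate descent: with one block fixed, the other update is a closed-form prox, namely singular value thresholding for the low-rank block and soft thresholding for the sparse block:

```python
import numpy as np

# Block coordinate descent for
#   min ||Y - T - G||_F^2 + lt*nuclear(T) + lg*||G||_1.
# With G fixed, argmin_T is SVT(Y - G, lt/2); with T fixed,
# argmin_G is entrywise soft-thresholding of Y - T at lg/2.
def svt(A, tau):
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return (U * np.maximum(s - tau, 0)) @ Vt

def soft(A, tau):
    return np.sign(A) * np.maximum(np.abs(A) - tau, 0)

rng = np.random.default_rng(4)
p, r = 30, 2
L = rng.normal(size=(p, r))
T_true = L @ L.T                                    # low-rank part
G_true = np.zeros((p, p))
idx = rng.choice(p * p, size=20, replace=False)
G_true.flat[idx] = rng.normal(scale=3.0, size=20)   # sparse part
Y = T_true + G_true + 0.01 * rng.normal(size=(p, p))

lt, lg = 1.0, 0.2
T, G = np.zeros((p, p)), np.zeros((p, p))
def obj(T, G):
    return (np.sum((Y - T - G) ** 2)
            + lt * np.linalg.svd(T, compute_uv=False).sum()
            + lg * np.abs(G).sum())
objs = [obj(T, G)]
for _ in range(50):
    T = svt(Y - G, lt / 2)
    G = soft(Y - T, lg / 2)
    objs.append(obj(T, G))
# Exact block minimization guarantees a monotonically decreasing objective.
assert all(objs[i + 1] <= objs[i] + 1e-9 for i in range(len(objs) - 1))
print("objective decreased from", round(objs[0], 2), "to", round(objs[-1], 2))
```

Since each block update exactly minimizes the convex objective over that block, monotone descent holds by construction; recovery quality additionally depends on the penalty levels and the incoherence of the two components.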
In contrast to the previous two examples, (9) includes a trivial design matrix, $X = I_{p \times p}$, which allows (D4) to hold under the simpler (C-Linear) condition:
$$\max\Big\{ \big\|P_{\bar{\mathcal{M}}_1} P_{\bar{\mathcal{M}}_2}\big\|,\ \big\|P_{\bar{\mathcal{M}}_1} P_{\bar{\mathcal{M}}_2^\perp}\big\|,\ \big\|P_{\bar{\mathcal{M}}_1^\perp} P_{\bar{\mathcal{M}}_2}\big\|,\ \big\|P_{\bar{\mathcal{M}}_1^\perp} P_{\bar{\mathcal{M}}_2^\perp}\big\| \Big\} \ \le\ \frac{1}{16\bar{\sigma}^2}, \qquad (10)$$
where $\bar{\sigma} := \max\Big\{ 1,\ 2\sqrt{2} + \frac{3}{2}\frac{\Psi_1(\bar{\mathcal{M}}_1)}{\Psi_2(\bar{\mathcal{M}}_2)} \Big\}$.
Corollary 4. Consider the principal component analysis model where $\Theta^*$ has rank at most $r$
and $\Gamma^*$ has $s$ nonzero entries. Suppose that (10) holds. Then, given the choice of
$$\lambda_\Theta = 16\,|||\Sigma|||_2 \sqrt{\frac{p}{n}}, \qquad \lambda_\Gamma = 32\,\rho(\Sigma)\sqrt{\frac{\log p}{n}},$$
where $\rho(\Sigma) = \max_j \Sigma_{jj}$, the optimal error of (9) is bounded by
$$|||\hat{\Theta} - \Theta^*|||_F \ \le\ \frac{48}{\bar{\kappa}_L} \max\Big\{ |||\Sigma|||_2 \sqrt{\frac{rp}{n}},\ 2\rho(\Sigma)\sqrt{\frac{s\log p}{n}} \Big\},$$
with probability at least $1 - c_1 \exp(-c_2 \log p)$.
Remarks. Agarwal et al. [1] also analyze this model, and propose to use the M-estimator in (9),
with the additional constraint $\|\Theta\|_\infty \le \frac{\alpha}{p}$. Under a stricter "global" RSC condition, they
compute the error bound $|||\hat{\Theta} - \Theta^*|||_F \le \max\big\{ |||\Sigma|||_2\sqrt{\frac{rp}{n}},\ \rho(\Sigma)\sqrt{\frac{s\log p}{n}} + \frac{\alpha}{p} \big\}$, where $\alpha$ is a
parameter between 1 and $p$. This bound is similar to that in Corollary 4, but with an additional
term $\frac{\alpha}{p}$, so that it does not go to zero as a function of n. It also faces a trade-off: a smaller
value of $\alpha$ to reduce the error bound would make the assumption on the maximum element of $\Theta^*$
stronger as well. Our corollaries do not suffer these lacunae; see also our remarks in (R2) following
Theorem 1. [14] extended the result of [1] to the special case where $\Theta^* = \Theta^*_r + \Theta^*_s$, using the
notation of the previous section; the remarks above also apply here. Note that our work and [1]
derive Frobenius error bounds under restricted strong convexity conditions; other recent works
such as [7] also derive such Frobenius error bounds but under stronger conditions (see [1] for
details).
Acknowledgments
We acknowledge the support of ARO via W911NF-12-1-0390 and NSF via IIS-1149803 and DMS-1264033.
References
[1] A. Agarwal, S. Negahban, and M. J. Wainwright. Noisy matrix decomposition via convex relaxation: Optimal rates in high dimensions. Annals of Statistics, 40(2):1171-1197, 2012.
[2] E. J. Candès, J. K. Romberg, and T. Tao. Stable signal recovery from incomplete and inaccurate measurements. Communications on Pure and Applied Mathematics, 59(8):1207-1223, 2006.
[3] E. J. Candès, X. Li, Y. Ma, and J. Wright. Robust principal component analysis? Journal of the ACM, 58(3), May 2011.
[4] V. Chandrasekaran, B. Recht, P. A. Parrilo, and A. S. Willsky. The convex geometry of linear inverse problems. In 48th Annual Allerton Conference on Communication, Control and Computing, 2010.
[5] V. Chandrasekaran, S. Sanghavi, P. A. Parrilo, and A. S. Willsky. Rank-sparsity incoherence for matrix decomposition. SIAM Journal on Optimization, 21(2), 2011.
[6] V. Chandrasekaran, P. A. Parrilo, and A. S. Willsky. Latent variable graphical model selection via convex optimization. Annals of Statistics (with discussion), 40(4), 2012.
[7] D. Hsu, S. M. Kakade, and T. Zhang. Robust matrix decomposition with sparse corruptions. IEEE Transactions on Information Theory, 57:7221-7234, 2011.
[8] A. Jalali, P. Ravikumar, S. Sanghavi, and C. Ruan. A dirty model for multi-task learning. In Neural Information Processing Systems (NIPS), 23, 2010.
[9] M. McCoy and J. A. Tropp. Two proposals for robust PCA using semidefinite programming. Electronic Journal of Statistics, 5:1123-1160, 2011.
[10] S. Negahban and M. J. Wainwright. Estimation of (near) low-rank matrices with noise and high-dimensional scaling. Annals of Statistics, 39(2):1069-1097, 2011.
[11] S. Negahban, P. Ravikumar, M. J. Wainwright, and B. Yu. A unified framework for high-dimensional analysis of M-estimators with decomposable regularizers. Statistical Science, 27(4):538-557, 2012.
[12] G. Raskutti, M. J. Wainwright, and B. Yu. Restricted eigenvalue properties for correlated Gaussian designs. Journal of Machine Learning Research (JMLR), 99:2241-2259, 2010.
[13] R. Vershynin. Introduction to the non-asymptotic analysis of random matrices. In Compressed Sensing: Theory and Applications. Cambridge University Press, 2012.
[14] H. Xu and C. Leng. Robust multi-task regression with grossly corrupted observations. In International Conference on Artificial Intelligence and Statistics (AISTATS), 2012.
[15] H. Xu, C. Caramanis, and S. Sanghavi. Robust PCA via outlier pursuit. IEEE Transactions on Information Theory, 58(5):3047-3064, 2012.
| 5092 |@word briefly:1 loading:2 norm:20 stronger:3 decomposition:6 covariance:7 score:3 denoting:1 past:1 ka:1 yet:1 written:2 additive:1 sys:1 caveat:2 provides:1 allerton:1 simpler:2 zhang:1 c2:5 become:1 shorthand:1 theoretically:1 inter:1 expected:1 indeed:2 cand:3 nor:1 p1:2 multi:3 decreasing:1 spherical:1 encouraging:1 considering:1 becomes:1 provided:2 estimating:2 notation:3 moreover:1 bounded:4 what:3 interpreted:1 substantially:1 emerging:1 minimizes:1 unified:9 guarantee:3 subclass:1 stricter:1 exactly:4 rm:2 k2:10 scaled:1 control:1 underlie:1 before:1 understood:1 local:1 consequence:1 analyzing:1 incoherence:7 approximately:1 specifying:1 ease:1 statistically:2 practical:1 acknowledgment:1 practice:1 block:1 ance:1 intersect:1 thought:3 matching:1 projection:2 cannot:1 onto:1 selection:1 operator:2 romberg:1 risk:1 equivalent:2 imposed:3 deterministic:2 go:2 independently:3 convex:8 decomposable:10 recovery:1 assigns:1 pure:1 factored:1 estimator:17 insight:1 m2:1 nuclear:3 collated:2 mq:1 population:3 notion:2 coordinate:1 annals:3 target:1 suppose:18 exact:2 programming:1 element:3 capture:6 worst:1 pradeepr:1 trade:1 convexity:4 ui:2 messy:1 ideally:1 easily:1 k0:1 lacuna:1 various:1 caramanis:1 regularizer:1 sc:3 larger:2 solve:4 compressed:2 statistic:4 gi:2 g1:1 jointly:1 latentvariable:1 noisy:3 ip:2 differentiable:1 eigenvalue:3 propose:1 aro:1 interaction:1 gq:1 product:1 mixing:1 frobenius:4 convergence:5 r1:1 derive:5 pose:1 z1n:6 sole:1 zit:1 p2:1 recovering:2 c:2 involves:1 strong:6 uu:1 restate:1 drawback:1 stringent:1 require:3 hx:3 generalization:1 proposition:7 extension:1 strictly:1 hold:12 lying:1 considered:1 wright:1 exp:3 electron:1 achieves:1 adopt:1 estimation:13 proc:1 superposition:16 utexas:2 weighted:3 gaussian:10 super:2 pn:1 mccoy:1 corollary:16 focus:2 notational:1 rank:20 contrast:4 sense:1 milder:1 inaccurate:1 typically:6 kc:2 interested:2 comprising:1 tao:1 compatibility:2 among:2 logn:1 constrained:4 special:2 
Summary Statistics for
Partitionings and Feature Allocations
Işık Barış Fidaner
Computer Engineering Department
Boğaziçi University, Istanbul
[email protected]
Ali Taylan Cemgil
Computer Engineering Department
Boğaziçi University, Istanbul
[email protected]
Abstract
Infinite mixture models are commonly used for clustering. One can sample from
the posterior of mixture assignments by Monte Carlo methods or find its maximum
a posteriori solution by optimization. However, in some problems the posterior
is diffuse and it is hard to interpret the sampled partitionings. In this paper, we
introduce novel statistics based on block sizes for representing sample sets of partitionings and feature allocations. We develop an element-based definition of entropy to quantify segmentation among their elements. Then we propose a simple
algorithm called entropy agglomeration (EA) to summarize and visualize this information. Experiments on various infinite mixture posteriors as well as a feature
allocation dataset demonstrate that the proposed statistics are useful in practice.
1 Introduction
Clustering aims to summarize observed data by grouping its elements according to their similarities.
Depending on the application, clusters may represent words belonging to topics, genes belonging to
metabolic processes or any other relation assumed by the deployed approach. Infinite mixture models provide a general solution by allowing a potentially unlimited number of mixture components.
These models are based on nonparametric priors such as Dirichlet process (DP) [1, 2], its superclass Poisson-Dirichlet process (PDP) [3, 4] and constructions such as Chinese restaurant process
(CRP) [5] and stick-breaking process [6] that enable formulations of efficient inference methods [7].
Studies on infinite mixture models inspired the development of several other models [8, 9] including Indian buffet process (IBP) for infinite feature models [10, 11] and fragmentation-coagulation
process for sequence data [12] all of which belong to Bayesian nonparametrics [13].
In making inference on infinite mixture models, a sample set of partitionings can be obtained from
the posterior.1 If the posterior is peaked around a single partitioning, then the maximum a posteriori
solution will be quite informative. However, in some cases the posterior is more diffuse and one
needs to extract statistical information about the random partitioning induced by the model. This
problem to ?summarize? the samples from the infinite mixture posterior was raised in bioinformatics
literature in 2002 by Medvedovic and Sivaganesan for clustering gene expression profiles [14]. But
the question proved difficult and they "circumvented" it by using a heuristic linkage algorithm based
on pairwise occurrence probabilities [15, 16]. In this paper, we approach this problem and propose
basic methodology for summarizing sample sets of partitionings as well as feature allocations.
Nemenman et al. showed in 2002 that the entropy [17] of a DP posterior was strongly determined
by its prior hyperparameters [18]. Archer et al. recently elaborated these results with respect to
PDP [19]. In other work, entropy was generalized to partitionings by interpreting partitionings
as probability distributions [20, 21]. Therefore, entropy emerges as an important statistic for our
problem, but new definitions will be needed for quantifying information in feature allocations.
1 In methods such as collapsed Gibbs sampling, slice sampling, retrospective sampling, truncation methods
In the following sections, we define the problem and introduce cumulative statistics for representing
partitionings and feature allocations. Then, we develop an interpretation for entropy function in
terms of per-element information in order to quantify segmentation among their elements. Finally,
we describe entropy agglomeration (EA) algorithm that generates dendrograms to summarize sample sets of partitionings and feature allocations. We demonstrate EA on infinite mixture posteriors
for synthetic and real datasets as well as on a real dataset directly interpreted as a feature allocation.
2 Basic definitions and the motivating problem
We begin with basic definitions. A partitioning of a set of elements [n] = {1, 2, . . . , n} is a set of
blocks Z = {B1, . . . , B|Z|} such that Bi ⊆ [n] and Bi ≠ ∅ for all i ∈ {1, . . . , |Z|}, Bi ∩ Bj = ∅
for all i ≠ j, and ∪i Bi = [n].2 We write Z ⊢ [n] to designate that Z is a partitioning of [n].3 A
sample set E = {Z(1), . . . , Z(T)} from a distribution π(Z) over partitionings is a multiset such that
Z(t) ∼ π(Z) for all t ∈ {1, . . . , T}. We are required to extract information from this sample set.
Our motivation is the following problem: a set of observed elements (x1, . . . , xn) are clustered
by an infinite mixture model with parameters θ(k) for each component k and mixture assignments
(z1, . . . , zn) drawn from a two-parameter CRP prior with concentration α and discount d [5]:

z ∼ CRP(z; α, d),    θ(k) ∼ p(θ),    xi | zi, θ ∼ F(xi | θ(zi))        (1)

In the conjugate case, all θ(k) can be integrated out to get p(zi | z−i, x) for sampling zi [22]:

p(zi | z−i, x) ∝ ∫ p(z, x, θ) dθ ∝ { (nk − d)/(n − 1 + α) ∫ F(xi | θ) p(θ | x−i, z−i) dθ    if k ≤ K+
                                     (α + d K+)/(n − 1 + α) ∫ F(xi | θ) p(θ) dθ             otherwise        (2)
There are K+ non-empty components and nk elements in each component k. In each iteration, xi
will either be put into an existing component k ≤ K+ or it will be assigned to a new component. By
sampling all zi repeatedly, a sample set of assignments z(t) is obtained from the posterior p(z | x) =
π(Z). These z(t) are then represented by partitionings Z(t) ⊢ [n]. The induced sample set contains
information regarding (1) the CRP prior over partitioning structure given by the hyperparameters (α, d)
and (2) the integrals over θ that capture the relation among the observed elements (x1, . . . , xn).
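As a concrete illustration of the two-parameter CRP prior above, here is a minimal Python sketch of forward sampling (the function name sample_crp and the sequential scheme are ours; the weights follow the prior part of Eq. 2):

```python
import random

def sample_crp(n, alpha, d, rng):
    """Draw mixture assignments z from a two-parameter CRP(alpha, d) prior.
    Element i joins existing block k with weight (n_k - d) and opens a new
    block with weight (alpha + d*K); the weights sum to i + alpha."""
    blocks = []          # current block sizes n_k
    z = []               # assignment of each element, labels in order of appearance
    for i in range(n):
        K = len(blocks)
        weights = [nk - d for nk in blocks] + [alpha + d * K]
        r = rng.random() * (i + alpha)   # sum(weights) == i + alpha
        k = K                            # default: new block (guards float rounding)
        for j, w in enumerate(weights):
            r -= w
            if r < 0:
                k = j
                break
    # join block k, or open a new one
        if k == K:
            blocks.append(1)
        else:
            blocks[k] += 1
        z.append(k)
    return z

rng = random.Random(0)
z = sample_crp(7, alpha=1.0, d=0.0, rng=rng)
print(z)  # a valid assignment vector with contiguous block labels 0..K-1
```

Setting d = 0 recovers the one-parameter CRP of the Dirichlet process.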
In addition, we aim to extract information from feature allocations, which constitute a superclass of
partitionings [11]. A feature allocation of [n] is a multiset of blocks F = {B1, . . . , B|F|} such that
Bi ⊆ [n] and Bi ≠ ∅ for all i. A sample set E = {F(1), . . . , F(T)} from a distribution
π(F) over feature allocations is a multiset such that F(t) ∼ π(F) for all t. The current exposition will
focus on partitionings, but we are also going to show how our statistics apply to feature allocations.
Assume that we have obtained a sample set E of partitionings. If it was obtained by sampling from
an infinite mixture posterior, then its blocks B ∈ Z(t) correspond to the mixture components. Given
a sample set E, we can approximate any statistic f(Z) over π(Z) by averaging it over the set E:

Z(1), . . . , Z(T) ∼ π(Z)        (1/T) Σ_{t=1}^{T} f(Z(t)) ≈ ⟨f(Z)⟩_{π(Z)}        (3)
Which f(Z) would be a useful statistic for Z? Three statistics commonly appear in the literature:
The first is the number of blocks |Z|, which has been analyzed theoretically for various nonparametric priors [2, 5]. It is simple, general and exchangeable with respect to the elements [n], but it is
not very informative about the distribution π(Z) and therefore is not very useful in practice.
A common statistic is pairwise occurrence, which is used to extract information from infinite mixture
posteriors in applications like bioinformatics [14]. For given pairs of elements {a, b}, it counts the
number of blocks that contain both, written Σi [{a, b} ⊆ Bi]. It is a very useful similarity
measure, but it cannot express information regarding relations among three or more elements.
Another statistic is the exact block size distribution (referred to as "multiplicities" in [11, 19]). It counts
the partitioning's blocks that contain exactly k elements, written Σi [|Bi| = k]. It is exchangeable
with respect to the elements [n], but its weighted average over a sample set is difficult to interpret.
2 We use the term "partitioning" to indicate a "set partition" as distinguished from an integer "partition".
3 The symbol "⊢" is usually used for integer partitions, but here we use it for partitionings (= set partitions).
Let us illustrate the problem by a practical example, to which we will return in the formulations:

Z(1) = {{1, 3, 6, 7}, {2}, {4, 5}}    Z(2) = {{1, 3, 6}, {2, 7}, {4, 5}}    Z(3) = {{1, 2, 3, 6, 7}, {4, 5}}
E3 = {Z(1), Z(2), Z(3)}    S1 = {1, 2, 3, 4}    S2 = {1, 3, 6, 7}    S3 = {1, 2, 3}
Suppose that E3 represents interactions among seven genes. We want to compare the subsets of
these genes S1, S2, S3. The projection of a partitioning Z ⊢ [n] onto S ⊆ [n] is defined as the set
of non-empty intersections between S and B ∈ Z. Projection onto S induces a partitioning of S.

PROJ(Z, S) = {B ∩ S}_{B∈Z} \ {∅}    ⟹    PROJ(Z, S) ⊢ S        (4)
Let us represent gene interactions in Z(1) and Z(2) by projecting them onto each of the given subsets:

PROJ(Z(1), S1) = {{1, 3}, {2}, {4}}    PROJ(Z(2), S1) = {{1, 3}, {2}, {4}}
PROJ(Z(1), S2) = {{1, 3, 6, 7}}        PROJ(Z(2), S2) = {{1, 3, 6}, {7}}
PROJ(Z(1), S3) = {{1, 3}, {2}}         PROJ(Z(2), S3) = {{1, 3}, {2}}
Comparing S1 to S2, we can say that S1 is "more segmented" than S2, and therefore the genes in S2
should be more closely related than those in S1. However, it is more subtle and difficult to compare
S2 to S3. A clear understanding would allow us to explore the subsets S ⊆ [n] in an informed
manner. In the following section, we develop a novel and general approach based on block sizes that
opens up a systematic method for analyzing sample sets over partitionings and feature allocations.
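The projection in Eq. (4) reduces to intersecting each block with S and discarding empty results. A minimal Python sketch, representing a partitioning as a list of sets (the helper name project is ours):

```python
def project(Z, S):
    """PROJ(Z, S): non-empty intersections of S with the blocks of Z (Eq. 4)."""
    S = set(S)
    return [B & S for B in Z if B & S]

Z1 = [{1, 3, 6, 7}, {2}, {4, 5}]          # Z(1) from the example
print(project(Z1, {1, 2, 3, 4}))          # PROJ(Z(1), S1) -> [{1, 3}, {2}, {4}]
print(project(Z1, {1, 3, 6, 7}))          # PROJ(Z(1), S2) -> [{1, 3, 6, 7}]
```

The result is itself a partitioning of S, as Eq. (4) asserts.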
3 Cumulative statistics to represent structure
We define the cumulative block size distribution, or "cumulative statistic" in short, as the function
φk(Z) = Σi [|Bi| ≥ k], which counts the partitioning's blocks of size at least k. We can rewrite the
previous statistics: number of blocks as φ1(Z), exact block size distribution as φk(Z) − φk+1(Z),
and pairwise occurrence as φ2(PROJ(Z, {a, b})). Moreover, cumulative statistics satisfy the following property: for partitionings of [n], φ(Z) always sums up to n, just like a probability mass
function that sums up to 1. When the blocks of Z are sorted according to their sizes and the indicators
[|Bi| ≥ k] are arranged on a matrix as in Figure 1a, they form a Young diagram, showing that φ(Z)
is always the conjugate partition of the integer partition of Z. As a result, φ(Z) as well as weighted
averages over several φ(Z) always sum up to n, just like taking averages over probability mass functions (Figure 2). Therefore, cumulative statistics of a random partitioning "conserve mass". In the
Figure 1: Young diagrams show the conjugacy between a partitioning and its cumulative statistic. (a) Cumulative block size distribution for a partitioning; (b) for its projection onto a subset.

Figure 2: Cumulative statistics of the three examples and their average: all sum up to 7.
case of feature allocations, since elements can be omitted or repeated, this property does not hold.

Z ⊢ [n]    ⟹    Σ_{k=1}^{n} φk(Z) = n    and    Σ_{k=1}^{n} ⟨φk(Z)⟩_{π(Z)} = n        (5)
When we project the partitioning Z onto a subset S ⊆ [n], the resulting vector φ(PROJ(Z, S))
will then sum up to |S| (Figure 1b). A "taller" Young diagram implies a "more segmented" subset.
We can form a partitioning Z by inserting elements 1, 2, 3, 4, . . . into its blocks (Figure 3a). In
such a scheme, each step brings a new element and requires a new decision that will depend on all
previous decisions. It would be better if we could determine the whole path by a few initial decisions.
Now suppose that we know Z from the start and we generate an incremental sequence of subsets
S1 = {1}, S2 = {1, 2}, S3 = {1, 2, 3}, S4 = {1, 2, 3, 4}, . . . according to a permutation of [n]:
σ = (1, 2, 3, 4, . . . ). We can then represent any path in Figure 3a by a sequence of PROJ(Z, Si)
and determine the whole path by two initial parameters: Z and σ. The resulting tree can be simplified
by representing the partitionings by their cumulative statistics instead of their blocks (Figure 3b).
Based on this concept, we define the cumulative occurrence distribution (COD) as the triangular matrix
of incremental cumulative statistic vectors, written ψi,k(Z, σ) = φk(PROJ(Z, Si)) where Z ⊢ [n],
σ is a permutation of [n] and Si = {σ1, . . . , σi} for i ∈ {1, . . . , n}. COD matrices for two extreme
paths (Figure 3c, 3e) and for the example partitioning Z(1) (Figure 3d) are shown. For partitionings,
the ith row of a COD matrix always sums up to i, even when averaged over a sample set as in Figure 4.
Z ⊢ [n]    ⟹    Σ_{k=1}^{i} ψi,k(Z, σ) = i    and    Σ_{k=1}^{i} ⟨ψi,k(Z, σ)⟩_{π(Z)} = i        (6)
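The COD matrix is just the cumulative statistic of each successive projection; a sketch assuming the set-based representation used above (the function name cod is ours):

```python
def cod(Z, sigma):
    """COD matrix: row i holds phi(PROJ(Z, S_i)) for S_i = first i elements of sigma."""
    psi, S = [], set()
    for i, elem in enumerate(sigma, start=1):
        S.add(elem)
        proj = [B & S for B in Z if B & S]                 # PROJ(Z, S_i)
        psi.append([sum(1 for B in proj if len(B) >= k) for k in range(1, i + 1)])
    return psi

Z1 = [{1, 3, 6, 7}, {2}, {4, 5}]
psi = cod(Z1, (1, 2, 3, 4, 5, 6, 7))
print(psi[-1])                      # -> [3, 2, 1, 1, 0, 0, 0], the full phi(Z1)
assert all(sum(row) == i for i, row in enumerate(psi, start=1))   # Eq. (6)
```

The final row equals φ(Z) itself, since Sn = [n].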
The expected COD matrix of a random partitioning expresses (1) cumulation of elements by the differences between its rows, and (2) cumulation of block sizes by the differences between its columns.
As an illustrative example, consider π(Z) = CRP(Z | α, d). Since CRP is exchangeable and projective, its expected cumulative statistic ⟨φ(Z)⟩_{π(Z)} for n elements depends only on its hyperparameters (α, d). As a result, its expected COD matrix Λ = ⟨ψ(Z, σ)⟩_{π(Z)} is independent of σ, and it
Figure 3: Three COD matrices correspond to the three red dotted paths on the trees above. (a) Form a partitioning by inserting elements; (b) form the statistic vector by inserting elements; (c) all elements into one block; (d) COD matrix ψ(Z(1), (1, . . . , 7)); (e) each element into a new block.

Figure 4: CODs and entropies over E3 for permutations (1, 2, 3, 4, 5, 6, 7) and (1, 3, 6, 7, 2, 4, 5).
satisfies an incremental formulation with the parameters (α, d) over the indices i ∈ N, k ∈ Z+:

Λ_{0,k} = 0,    Λ_{i+1,k} = Λ_{i,k} + { (α + d Λ_{i,k}) / (i + α)                   if k = 1
                                        (k − 1 − d)(Λ_{i,k−1} − Λ_{i,k}) / (i + α)   otherwise        (7)

By allowing k = 0 and setting Λ_{i,0} = −α/d and Λ_{0,k} = 0 for k > 0 as the two boundary conditions,
the same matrix can be formulated by a difference equation over the indices i ∈ N, k ∈ N:

(Λ_{i+1,k} − Λ_{i,k})(i + α) = (Λ_{i,k−1} − Λ_{i,k})(k − 1 − d)        (8)

By setting Λ = Λ(0) we get an infinite sequence of matrices Λ(m) that satisfy the same equation:

(Λ(m)_{i+1,k} − Λ(m)_{i,k})(i + α) = (Λ(m)_{i,k−1} − Λ(m)_{i,k})(k − 1 − d) = Λ(m+1)_{i,k}        (9)
Therefore, the expected COD matrix of a CRP-distributed random partitioning is at a constant "equilibrium" determined by α and d. This example shows that the COD matrix can reveal specific information about a distribution over partitionings; of course, in practice we encounter non-exchangeable
and almost arbitrary distributions over partitionings (e.g., the posterior distribution of an infinite
mixture), therefore in the following section we will develop a measure to quantify this information.
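The recurrence in Eq. (7) can be iterated directly; a small numerical sketch, truncating at k ≤ n (which is exact, since Λ_{i,k} = 0 for k > i):

```python
def expected_cod_crp(n, alpha, d):
    """Expected COD matrix of CRP(alpha, d), iterated via Eq. (7).
    Returns rows[i-1][k-1] = Lambda_{i,k} for i = 1..n, k = 1..n."""
    prev = [0.0] * n                       # Lambda_{0,k} = 0
    rows = []
    for i in range(n):                     # step from row i to row i+1
        row = []
        for k in range(1, n + 1):
            if k == 1:
                inc = (alpha + d * prev[0]) / (i + alpha)
            else:
                inc = (k - 1 - d) * (prev[k - 2] - prev[k - 1]) / (i + alpha)
            row.append(prev[k - 1] + inc)
        rows.append(row)
        prev = row
    return rows

Lam = expected_cod_crp(7, alpha=1.0, d=0.0)
# rows conserve mass as in Eq. (6): expected row i sums to i
assert all(abs(sum(row) - i) < 1e-9 for i, row in enumerate(Lam, start=1))
```

For d = 0 this reduces to the one-parameter DP case, where Λ_{i,1} (the expected number of blocks) grows as Σ_{j<i} α/(j + α).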
4 Entropy to quantify segmentation
Shannon's entropy [17] can be an appropriate quantity to measure "segmentation" with respect to
partitionings, which can be interpreted as probability distributions [20, 21]. Since this interpretation
does not cover feature allocations, we will make an alternative, element-based definition of entropy.
How does a block B inform us about its elements? Each element has a proportion 1/|B|; let us call
this quantity the per-element segment size. Information is zero for |B| = n, since 1/n is the minimum
possible segment size. If |B| < n, the block supplies positive information since the segment size is
larger than the minimum, and we know that its segment size could be smaller if the block were larger.
To quantify this information, we define the per-element information for a block B as the integral of
the segment size 1/s over the range [|B|, n] of block sizes that make this segment smaller (Figure 5).
pein(B) = ∫_{|B|}^{n} (1/s) ds = log(n/|B|)        (10)

Figure 5: Per-element information for B.
In pein(B), n is a "base" that determines the minimum possible per-element segment size. Since
segment size expresses the significance of elements, the function integrates segment sizes over the
block sizes that make the elements less significant. This definition is comparable to the well-known
p-value, which integrates probabilities over the values that make the observations more significant.
Figure 6: Weighted information plotted for each n.
5
1
1 log 2
2
1
0
b?B
1 log 2
2
1
0
0
0
a?B
1
0
0.5
0
b?B
0
1
0
2
4
6
8
number of elements n
10
12
Figure 7: H(Z) in incremental construction of Z
1 log 3
3
1
0
0
c?B
a?B
b?B
2 log 3
3
2
S = {a, b, c}
partition entropy H(Z)
0
1.5
a?B
b?B
S = {a, b}
a?B
2
0
Projection entropy:
H(P ROJ(Z, S))
Subset
P occurence:
i [S ? Bi ]
2.5
1 log 3
3
1
0
0
2 log 3
2 log 3
3
2
3
2
1 log 3
3
1
0
c?B
Figure 8: Comparing two subset statistics
We can then compute the per-element information supplied by a partitioning Z by taking a weighted
average over its blocks, since each block B ∈ Z supplies information for a different proportion
|B|/n of the elements being partitioned. For large n, the weighted per-element information reaches its
maximum near |B| ≈ n/2 (Figure 6). The total weighted information for Z gives Shannon's entropy
function [17], which can be written in terms of the cumulative statistics (assuming φn+1 = 0):
H(Z) = Σ_{i=1}^{|Z|} (|Bi|/n) pein(Bi) = Σ_{i=1}^{|Z|} (|Bi|/n) log(n/|Bi|) = Σ_{k=1}^{n} (φk(Z) − φk+1(Z)) (k/n) log(n/k)        (11)
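Eq. (11) can be verified numerically: the block-sum form and the cumulative-statistic form give the same value. A sketch:

```python
import math

def partition_entropy(Z, n):
    """H(Z) = sum over blocks of (|B|/n) * log(n/|B|), the left side of Eq. (11)."""
    return sum((len(B) / n) * math.log(n / len(B)) for B in Z)

Z1 = [{1, 3, 6, 7}, {2}, {4, 5}]
H = partition_entropy(Z1, 7)

# the same value from the cumulative statistics (right-hand side of Eq. 11)
phi = [sum(1 for B in Z1 if len(B) >= k) for k in range(1, 9)]   # phi_8 = 0
H2 = sum((phi[k - 1] - phi[k]) * (k / 7) * math.log(7 / k) for k in range(1, 8))
assert abs(H - H2) < 1e-12
assert 0.0 < H < math.log(7)   # between one block and n singleton blocks
```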
The entropy of a partitioning increases as its elements become more segmented among themselves. A
partitioning with a single block has zero entropy, and a partitioning with n blocks has the maximum
entropy log n. Nodes of the tree we examined in the previous section (Figure 3b) were vertically
arranged according to their entropies. On the extended tree (Figure 7), the nth column of nodes represents
the possible partitionings of n. This tree serves as a "grid" for both H(Z) and φ(Z), as they are
linearly related with the general coefficient ((k/n) log(n/k) − ((k−1)/n) log(n/(k−1))). A similar grid for feature
allocations can be generated by inserting nodes for cumulative statistics that do not conserve mass.
To quantify the segmentation of a subset S, we compute the projection entropy H(PROJ(Z, S)). To
understand this function, we compare it to subset occurrence in Figure 8. Subset occurrence acts as a
"score" that counts the "successful" blocks that contain all of S, whereas projection entropy acts as a
"penalty" that quantifies how much S is being divided and segmented by the given blocks B ∈ Z.
A partitioning Z and a permutation σ of its elements induce an entropy sequence (h1, . . . , hn) such
that hi(Z, σ) = H(PROJ(Z, Si)) where Si = {σ1, . . . , σi} for i ∈ {1, . . . , n}. To find subsets of
elements that are more closely related, one can seek permutations σ that keep the entropies low. The
generated subsets Si will be those that are less segmented by B ∈ Z. For the example problem, the
permutation 1, 3, 6, 7, . . . keeps the expected entropies lower, compared to 1, 2, 3, 4, . . . (Figure 4).
5 Entropy agglomeration and experimental results
We want to summarize a sample set using the proposed statistics. Permutations that yield lower
entropy sequences can be meaningful, but a feasible algorithm can only involve a small subset of the
n! permutations. We define the entropy agglomeration (EA) algorithm, which begins from 1-element
subsets, and merges in each iteration the pair of subsets that yields the minimum expected entropy:

Entropy Agglomeration Algorithm:
1. Initialize Ψ ← {{1}, {2}, . . . , {n}}.
2. Find the subset pair {Sa, Sb} ⊂ Ψ that minimizes the entropy ⟨H(PROJ(Z, Sa ∪ Sb))⟩_{π(Z)}.
3. Update Ψ ← (Ψ \ {Sa, Sb}) ∪ {Sa ∪ Sb}.
4. If |Ψ| > 1 then go to 2.
5. Generate the dendrogram of chosen pairs by plotting the minimum entropies for every split.
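The five steps above can be sketched directly in Python; a minimal, unoptimized version assuming the set-based representation used earlier (the names projection_entropy and entropy_agglomeration are ours, and step 5's dendrogram plotting is omitted):

```python
import math

def projection_entropy(Z, S):
    """H(PROJ(Z, S)): how much the blocks of Z segment the subset S."""
    m = len(S)
    sizes = [len(B & S) for B in Z if B & S]
    return sum((b / m) * math.log(m / b) for b in sizes)

def entropy_agglomeration(samples, n):
    """Greedy EA (steps 1-4): repeatedly merge the subset pair with minimum
    expected projection entropy over the sample set; return the merge history."""
    Psi = [frozenset([i]) for i in range(1, n + 1)]                  # step 1
    history = []
    while len(Psi) > 1:                                              # step 4
        best = None
        for a in range(len(Psi)):                                    # step 2
            for b in range(a + 1, len(Psi)):
                U = Psi[a] | Psi[b]
                h = sum(projection_entropy(Z, U) for Z in samples) / len(samples)
                if best is None or h < best[0]:
                    best = (h, a, b, U)
        h, a, b, U = best
        Psi = [S for i, S in enumerate(Psi) if i not in (a, b)] + [U]  # step 3
        history.append((set(U), h))
    return history

E3 = [[{1, 3, 6, 7}, {2}, {4, 5}],
      [{1, 3, 6}, {2, 7}, {4, 5}],
      [{1, 2, 3, 6, 7}, {4, 5}]]
hist = entropy_agglomeration(E3, 7)
# the first merge found has zero expected entropy: 1 and 3 always share a block
assert hist[0] == ({1, 3}, 0.0)
```

Step 5 can then be drawn from the returned history, plotting the minimum entropy at each split; on E3, subsets such as {4, 5} and {1, 3, 6} merge at zero entropy, matching Figure 9a.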
The resulting dendrogram for the example partitionings is shown in Figure 9a. The subsets {4, 5}
and {1, 3, 6} are shown in individual nodes, because their entropies are zero. Besides using this
dendrogram as a general summary, one can also generate more specific dendrograms by choosing specific elements or specific parts of the data. For a detailed element-wise analysis, entropy
sequences of particular permutations σ can be assessed. Entropy agglomeration is inspired by "agglomerative clustering", a standard approach in bioinformatics [23]. To summarize partitionings
of gene expressions, [14] applied agglomerative clustering by pairwise occurrences. Although very
useful and informative, such methods remain "heuristic" because they require a "linkage criterion" for
merging subsets. EA avoids this drawback, since projection entropy is already defined over subsets.
To test the proposed algorithm, we apply it to partitionings sampled from infinite mixture posteriors.
In the first three experiments, data is modeled by an infinite mixture of Gaussians, where α =
0.05, d = 0, p(θ) = N(θ | 0, 5) and F(x | θ) = N(x | θ, 0.15) (see Equation 1). Samples from the
posterior are used to plot the histogram over the number of blocks, the pairwise occurrences, and the
EA dendrogram. Pairwise occurrences are ordered according to the EA dendrogram. In the fourth
experiment, EA is directly applied on the data. We describe each experiment and make observations:
1) Synthetic data (Figure 9b): 30 points on R2 are arranged in three clusters. Plots are based on
450 partitionings from the posterior. Clearly separating the three clusters, EA also reflects their
qualitative differences. The dispersedness of the first cluster is represented by distinguishing "inner"
elements 1, 10 from "outer" elements 6, 7. This is also seen as shades of gray in the pairwise occurrences.
2) Iris flower data (Figure 9c): This well-known dataset contains 150 points on R4 from three
flower species [24]. Plots are based on 150 partitionings obtained from the posterior. For convenience, small subtrees are shown as single leaves and elements are labeled by their species. All
50 A points appear in a single leaf, as they are clearly separated from B and C. The dendrogram
automatically scales to cover the points that are more uncertain with respect to the distribution.
3) Galactose data (Figure 9d): This is a dataset of gene expressions by 820 genes in 20 experimental
conditions [25]. The first 204 genes are chosen, and the first two letters of gene names are used for labels.
Plots are based on 250 partitionings from the posterior. 70 RP (ribosomal protein) genes and 12
HX (hexose transport) genes appear in individual leaves. In the large subtree on the top, an "outer"
grouping of 19 genes (circles in the data plot) is distinguished from the "inner" long tail of 68 genes.
4) IGO data (Figure 9e): This is a dataset of intergovernmental organizations (IGO) [26, v2.1] that contains IGO memberships of 214 countries through the years 1815–2000. In this experiment, we take
a different approach and apply EA directly on the dataset interpreted as a sample set of single-block
feature allocations, where the blocks are IGO-year tuples and the elements are the countries. We take
the subset of 138 countries that appear in at least 1000 of the 12856 blocks. With some exceptions, the countries display a general ordering of continents. From the "outermost" continent to the
"innermost", they are: Europe, America-Australia-NZ, Asia, Africa and the Middle East.
6 Conclusion
In this paper, we developed a novel approach for summarizing sample sets of partitionings and feature allocations. After presenting the problem, we introduced cumulative statistics and cumulative
occurrence distribution matrices for each of its permutations, to represent a sample set in a systematic
manner. We defined per-element information to compute entropy sequences for these permutations.
We developed entropy agglomeration (EA) algorithm that chooses and visualises a small subset of
these entropy sequences. Finally, we experimented with various datasets to demonstrate the method.
Entropy agglomeration is a simple algorithm that does not require much knowledge to implement,
but it is conceptually based on the cumulative statistics we have presented. Since we primarily aimed
to formulate a useful algorithm, we only made the essential definitions, and several points remain
to be elucidated. For instance, cumulative statistics can be investigated with respect to various
nonparametric priors. Our definition of per-element information can be developed with respect to
information theory and hypothesis testing. Last but not least, algorithms like entropy agglomeration
can be designed for summarization tasks concerning various types of combinatorial sample sets.
Acknowledgements
We thank Ayça Cankorur, Erkan Karabekmez, Duygu Dikicioğlu and Betül Kırdar from Boğaziçi
University Chemical Engineering for introducing us to this problem by very helpful discussions.
This work was funded by TÜBİTAK (110E292) and BAP (6882-12A01D5).
Figure 9: Entropy agglomeration and other results from the experiments (see the text). Panels: (a) example partitionings; (b) synthetic data; (c) Iris flower data; (d) Galactose data; (e) IGO data. Each panel shows the number-of-blocks histogram, the pairwise occurrences, and the EA dendrogram.
References
[1] Ferguson, T. S. (1973) A Bayesian analysis of some nonparametric problems. Annals of Statistics, 1(2):209–230.
[2] Teh, Y. W. (2010) Dirichlet processes. In Encyclopedia of Machine Learning. Springer.
[3] Kingman, J. F. C. (1992) Poisson Processes. Oxford University Press.
[4] Pitman, J. & Yor, M. (1997) The two-parameter Poisson–Dirichlet distribution derived from a stable subordinator. Annals of Probability, 25:855–900.
[5] Pitman, J. (2006) Combinatorial Stochastic Processes. Lecture Notes in Mathematics. Springer-Verlag.
[6] Sethuraman, J. (1994) A constructive definition of Dirichlet priors. Statistica Sinica, 4:639–650.
[7] Neal, R. M. (2000) Markov chain sampling methods for Dirichlet process mixture models. Journal of Computational and Graphical Statistics, 9:249–265.
[8] Meeds, E., Ghahramani, Z., Neal, R. & Roweis, S. (2007) Modelling dyadic data with binary latent factors. In Advances in Neural Information Processing Systems 19.
[9] Teh, Y. W., Jordan, M. I., Beal, M. J. & Blei, D. M. (2006) Hierarchical Dirichlet processes. Journal of the American Statistical Association, 101(476):1566–1581.
[10] Griffiths, T. L. & Ghahramani, Z. (2011) The Indian buffet process: An introduction and review. Journal of Machine Learning Research, 12:1185–1224.
[11] Broderick, T., Pitman, J. & Jordan, M. I. (2013) Feature allocations, probability functions, and paintboxes. arXiv preprint arXiv:1301.6647.
[12] Teh, Y. W., Blundell, C. & Elliott, L. T. (2011) Modelling genetic variations with fragmentation-coagulation processes. In Advances in Neural Information Processing Systems 23.
[13] Orbanz, P. & Teh, Y. W. (2010) Bayesian nonparametric models. In Encyclopedia of Machine Learning. Springer.
[14] Medvedovic, M. & Sivaganesan, S. (2002) Bayesian infinite mixture model based clustering of gene expression profiles. Bioinformatics, 18:1194–1206.
[15] Medvedovic, M., Yeung, K. & Bumgarner, R. (2004) Bayesian mixture model based clustering of replicated microarray data. Bioinformatics, 20:1222–1232.
[16] Liu, X., Sivaganesan, S., Yeung, K. Y., Guo, J., Bumgarner, R. E. & Medvedovic, M. (2006) Context-specific infinite mixtures for clustering gene expression profiles across diverse microarray datasets. Bioinformatics, 22:1737–1744.
[17] Shannon, C. E. (1948) A mathematical theory of communication. Bell System Technical Journal, 27(3):379–423.
[18] Nemenman, I., Shafee, F. & Bialek, W. (2002) Entropy and inference, revisited. In Advances in Neural Information Processing Systems 14.
[19] Archer, E., Park, I. M. & Pillow, J. (2013) Bayesian entropy estimation for countable discrete distributions. arXiv preprint arXiv:1302.0328.
[20] Simovici, D. (2007) On generalized entropy and entropic metrics. Journal of Multiple-Valued Logic and Soft Computing, 13(4–6):295.
[21] Ellerman, D. (2009) Counting distinctions: on the conceptual foundations of Shannon's information theory. Synthese, 168(1):119–149.
[22] Neal, R. M. (1992) Bayesian mixture modeling. In Maximum Entropy and Bayesian Methods: Proceedings of the 11th International Workshop on Maximum Entropy and Bayesian Methods of Statistical Analysis, Seattle, 1991, eds. Smith, Erickson & Neudorfer. Dordrecht: Kluwer Academic Publishers, 197–211.
[23] Eisen, M. B., Spellman, P. T., Brown, P. O. & Botstein, D. (1998) Cluster analysis and display of genome-wide expression patterns. Proceedings of the National Academy of Sciences, 95(25):14863–14868.
[24] Fisher, R. A. (1936) The use of multiple measurements in taxonomic problems. Annals of Eugenics, 7(2):179–188.
[25] Ideker, T., Thorsson, V., Ranish, J. A., Christmas, R., Buhler, J., Eng, J. K., Bumgarner, R., Goodlett, D. R., Aebersold, R. & Hood, L. (2001) Integrated genomic and proteomic analyses of a systematically perturbed metabolic network. Science, 292(5518):929–934.
[26] Pevehouse, J. C., Nordstrom, T. & Warnke, K. (2004) The COW-2 International Organizations Dataset Version 2.0. Conflict Management and Peace Science, 21(2):101–119. http://www.correlatesofwar.org/COW2%20Data/IGOs/IGOv2-1.htm
Dynamic Clustering via Asymptotics of the
Dependent Dirichlet Process Mixture
Trevor Campbell
MIT
Cambridge, MA 02139
Miao Liu
Duke University
Durham, NC 27708
[email protected]
[email protected]
Brian Kulis
Ohio State University
Columbus, OH 43210
Jonathan P. How
MIT
Cambridge, MA 02139
Lawrence Carin
Duke University
Durham, NC 27708
[email protected]
[email protected]
[email protected]
Abstract
This paper presents a novel algorithm, based upon the dependent Dirichlet process mixture model (DDPMM), for clustering batch-sequential data containing
an unknown number of evolving clusters. The algorithm is derived via a lowvariance asymptotic analysis of the Gibbs sampling algorithm for the DDPMM,
and provides a hard clustering with convergence guarantees similar to those of the
k-means algorithm. Empirical results from a synthetic test with moving Gaussian
clusters and a test with real ADS-B aircraft trajectory data demonstrate that the algorithm requires orders of magnitude less computational time than contemporary
probabilistic and hard clustering algorithms, while providing higher accuracy on
the examined datasets.
1 Introduction
The Dirichlet process mixture model (DPMM) is a powerful tool for clustering data that enables
the inference of an unbounded number of mixture components, and has been widely studied in the
machine learning and statistics communities [1–4]. Despite its flexibility, it assumes the observations are exchangeable, and therefore that the data points have no inherent ordering that influences
their labeling. This assumption is invalid for modeling temporally/spatially evolving phenomena, in
which the order of the data points plays a principal role in creating meaningful clusters. The dependent Dirichlet process (DDP), originally formulated by MacEachern [5], provides a prior over such
evolving mixture models, and is a promising tool for incrementally monitoring the dynamic evolution of the cluster structure within a dataset. More recently, a construction of the DDP built upon
completely random measures [6] led to the development of the dependent Dirichlet process mixture
model (DDPMM) and a corresponding approximate posterior inference Gibbs sampling algorithm.
This model generalizes the DPMM by including birth, death and transition processes for the clusters
in the model.
The DDPMM is a Bayesian nonparametric (BNP) model, part of an ever-growing class of probabilistic models for which inference captures uncertainty in both the number of parameters and
their values. While these models are powerful in their capability to capture complex structures in
data without requiring explicit model selection, they suffer some practical shortcomings. Inference
techniques for BNPs typically fall into two classes: sampling methods (e.g., Gibbs sampling [2]
or particle learning [4]) and optimization methods (e.g., variational inference [3] or stochastic variational inference [7]). Current methods based on sampling do not scale well with the size of the
dataset [8]. Most optimization methods require analytic derivatives and the selection of an upper
bound on the number of clusters a priori, where the computational complexity increases with that
upper bound [3, 7]. State-of-the-art techniques in both classes are not ideal for use in contexts where
performing inference quickly and reliably on large volumes of streaming data is crucial for timely
decision-making, such as autonomous robotic systems [9–11]. On the other hand, many classical
clustering methods [12–14] scale well with the size of the dataset and are easy to implement, and
advances have recently been made to capture the flexibility of Bayesian nonparametrics in such
approaches [15]. However, as of yet there is no classical algorithm that captures dynamic cluster
structure with the same representational power as the DDP mixture model.
This paper discusses the Dynamic Means algorithm, a novel hard clustering algorithm for spatiotemporal data derived from the low-variance asymptotic limit of the Gibbs sampling algorithm for
the dependent Dirichlet process Gaussian mixture model. This algorithm captures the scalability
and ease of implementation of classical clustering methods, along with the representational power
of the DDP prior, and is guaranteed to converge to a local minimum of a k-means-like cost function.
The algorithm is significantly more computationally tractable than Gibbs sampling, particle learning,
and variational inference for the DDP mixture model in practice, while providing equivalent or better
clustering accuracy on the examples presented. The performance and characteristics of the algorithm
are demonstrated in a test on synthetic data, with a comparison to those of Gibbs sampling, particle
learning and variational inference. Finally, the applicability of the algorithm to real data is presented
through an example of clustering a spatio-temporal dataset of aircraft trajectories recorded across
the United States.
2 Background
The Dirichlet process (DP) is a prior over mixture models, where the number of mixture components is not known a priori [16]. In general, we denote $D \sim \mathrm{DP}(\mu)$, where $\alpha_\mu \in \mathbb{R}_+$ and $\mu : \Omega \to \mathbb{R}_+$, $\int_\Omega \mu \, d\theta = \alpha_\mu$ are the concentration parameter and base measure of the DP, respectively. If $D \sim \mathrm{DP}$, then $D = \{(\theta_k, \pi_k)\}_{k=0}^{\infty} \subset \Omega \times \mathbb{R}_+$, where $\theta_k \in \Omega$ and $\pi_k \in \mathbb{R}_+$ [17]. The reader is directed to [1] for a more thorough coverage of Dirichlet processes.
The dependent Dirichlet process (DDP) [5], an extension to the DP, is a prior over evolving mixture models. Given a Poisson process construction [6], the DDP essentially forms a Markov chain of DPs $(D_1, D_2, \dots)$, where the transitions are governed by a set of three stochastic operations: points $\theta_k$ may be added, removed, and may move during each step of the Markov chain. Thus, they become parameterized by time, denoted by $\theta_{kt}$. In slightly more detail, if $D_t$ is the DP at time step $t$, then the following procedure defines the generative model of $D_t$ conditioned on $D_{t-1} \sim \mathrm{DP}(\mu_{t-1})$:
1. Subsampling: Define a function $q : \Omega \to [0, 1]$. Then for each point $(\theta, \pi) \in D_{t-1}$, sample a Bernoulli random variable $b_\theta \sim \mathrm{Be}(q(\theta))$. Set $D_t'$ to be the collection of points $(\theta, \pi)$ such that $b_\theta = 1$, and renormalize the weights. Then $D_t' \sim \mathrm{DP}(q\mu_{t-1})$, where $(q\mu)(A) = \int_A q(\theta)\,\mu(d\theta)$.
2. Transition: Define a distribution $T : \Omega \times \Omega \to \mathbb{R}_+$. For each point $(\theta, \pi) \in D_t'$, sample $\theta' \sim T(\theta' \mid \theta)$, and set $D_t''$ to be the collection of points $(\theta', \pi)$. Then $D_t'' \sim \mathrm{DP}(Tq\mu_{t-1})$, where $(T\mu)(A) = \int_A \int_\Omega T(\theta' \mid \theta)\,\mu(d\theta)\, d\theta'$.
3. Superposition: Sample $F \sim \mathrm{DP}(\nu)$, and sample $(c_D, c_F) \sim \mathrm{Dir}(Tq\mu_{t-1}(\Omega), \nu(\Omega))$. Then set $D_t$ to be the union of $(\theta, c_D\pi)$ for all $(\theta, \pi) \in D_t''$ and $(\theta, c_F\pi)$ for all $(\theta, \pi) \in F$. Thus, $D_t$ is a random convex combination of $D_t''$ and $F$, where $D_t \sim \mathrm{DP}(Tq\mu_{t-1} + \nu)$.
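The three operations above can be sketched on a finite, truncated atom set. This is only an illustrative simulation, not the measure-theoretic construction of [6]: the Gaussian transition kernel, the unit-Gaussian innovation draws, and the finite truncation level `n_new` are all simplifying assumptions of this sketch.

```python
import numpy as np

def ddp_step(atoms, weights, q, trans_sd, alpha, n_new, rng):
    """One Markov step of a truncated dependent Dirichlet process:
    subsampling, transition, and superposition (in that order)."""
    # 1. Subsampling: keep each atom independently w.p. q, renormalize.
    keep = rng.random(len(atoms)) < q
    atoms, weights = atoms[keep], weights[keep]
    if weights.sum() > 0:
        weights = weights / weights.sum()
    # 2. Transition: perturb survivors under a Gaussian kernel T.
    atoms = atoms + rng.normal(0.0, trans_sd, size=atoms.shape)
    # 3. Superposition: draw fresh innovation atoms F, then mix old and
    #    new mass with a Dirichlet split (a finite stand-in for DP(nu)).
    new_atoms = rng.normal(0.0, 1.0, size=(n_new, atoms.shape[1]))
    new_weights = rng.dirichlet(np.full(n_new, alpha / n_new))
    c_old, c_new = rng.dirichlet([max(q * len(atoms), 1e-3), alpha])
    atoms = np.vstack([atoms, new_atoms])
    weights = np.concatenate([c_old * weights, c_new * new_weights])
    return atoms, weights / weights.sum()
```

Iterating `ddp_step` produces a sequence of discrete measures in which atoms appear, drift, and vanish, mirroring the birth, motion, and death of mixture components.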
If the DDP is used as a prior over a mixture model, these three operations allow new mixture components to arise over time, and old mixture components to exhibit dynamics and perhaps disappear
over time. As this is covered thoroughly in [6], the mathematics of the underlying Poisson point
process construction are not discussed in more depth in this work. However, an important result of
using such a construction is the development of an explicit posterior for Dt given observations of the
points ?kt at timestep t. For each point k that was observed in D? for some ? : 1 ? ? ? t, define:
nkt ? N as the number of observations of point k in timestep t; ckt ? N as the number of past
2
Pt?1
observations of point k prior to timestep t, i.e. ckt = ? =1 nk? ; qkt ? (0, 1) as the subsampling
weight on point k at timestep t; and ?tk as the number of time steps that have elapsed since point k
was last observed. Further, let ?t be the measure for unobserved points at time step t. Then,
!
X
X
Dt |Dt?1 ? DP ?t +
qkt ckt T (? |?k(t??tk ) ) +
(ckt + nkt )??kt
(1)
k:nkt =0
k:nkt >0
where ckt = 0 for any point k that was first observed during timestep t. This posterior leads directly
to the development of a Gibbs sampling algorithm for the DDP, whose low-variance asymptotics are
discussed further below.
3 Asymptotic Analysis of the DDP Mixture
The dependent Dirichlet process Gaussian mixture model (DDP-GMM) serves as the foundation
upon which the present work is built. The generative model of a DDP-GMM at time step t is
$$\{\theta_{kt}, \pi_{kt}\}_{k=1}^{\infty} \sim \mathrm{DP}(\nu_t)$$
$$\{z_{it}\}_{i=1}^{N_t} \sim \mathrm{Categorical}(\{\pi_{kt}\}_{k=1}^{\infty}) \qquad (2)$$
$$\{y_{it}\}_{i=1}^{N_t} \sim \mathcal{N}(\theta_{z_{it}t}, \sigma I)$$
where $\theta_{kt}$ is the mean of cluster $k$, $\pi_{kt}$ is the categorical weight for class $k$, $y_{it}$ is a $d$-dimensional observation vector, $z_{it}$ is a cluster label for observation $i$, and $\nu_t$ is the base measure from equation (1). Throughout the rest of this paper, the subscript $kt$ refers to quantities related to cluster $k$ at time step $t$, and subscript $it$ refers to quantities related to observation $i$ at time step $t$.
The Gibbs sampling algorithm for the DDP-GMM iterates between sampling labels $z_{it}$ for datapoints $y_{it}$ given the set of parameters $\{\theta_{kt}\}$, and sampling parameters $\theta_{kt}$ given each group of data $\{y_{it} : z_{it} = k\}$. Assuming the transition model $T$ is Gaussian, and the subsampling function $q$ is constant, the functions and distributions used in the Gibbs sampling algorithm are: the prior over cluster parameters, $\theta \sim \mathcal{N}(\phi, \rho I)$; the likelihood of an observation given its cluster parameter, $y_{it} \sim \mathcal{N}(\theta_{kt}, \sigma I)$; the distribution over the transitioned cluster parameter given its last known location after $\Delta t_k$ time steps, $\theta_{kt} \sim \mathcal{N}(\theta_{k(t-\Delta t_k)}, \xi \Delta t_k I)$; and the subsampling function $q(\theta) = q \in (0, 1)$. Given these functions and distributions, the low-variance asymptotic limits (i.e. $\sigma \to 0$) of these two steps are discussed in the following sections.
3.1 Setting Labels Given Parameters
In the label sampling step, a datapoint $y_{it}$ can either create a new cluster, join a current cluster, or revive an old, transitioned cluster. Using the distributions defined previously, the label assignment probabilities are
$$p(z_{it} = k \mid \dots) \propto \begin{cases} \alpha_t \, (2\pi(\sigma+\rho))^{-d/2} \exp\left(-\frac{\|y_{it}-\phi\|^2}{2(\sigma+\rho)}\right) & k = K+1 \\ (c_{kt}+n_{kt})\,(2\pi\sigma)^{-d/2} \exp\left(-\frac{\|y_{it}-\theta_{kt}\|^2}{2\sigma}\right) & n_{kt} > 0 \\ q_{kt}\, c_{kt}\, (2\pi(\sigma+\xi\Delta t_k))^{-d/2} \exp\left(-\frac{\|y_{it}-\theta_{k(t-\Delta t_k)}\|^2}{2(\sigma+\xi\Delta t_k)}\right) & n_{kt} = 0 \end{cases} \qquad (3)$$
where $q_{kt} = q^{\Delta t_k}$ due to the fact that $q(\theta)$ is constant over $\Omega$, and $\alpha_t = \alpha_\nu \frac{1-q^t}{1-q}$ where $\alpha_\nu$ is the concentration parameter for the innovation process, $F_t$. The low-variance asymptotic limit of this label assignment step yields meaningful assignments as long as $\alpha_\nu$, $\xi$, and $q$ vary appropriately with $\sigma$; thus, setting $\alpha_\nu$, $\xi$, and $q$ as follows (where $\lambda$, $\tau$ and $Q$ are positive constants):
$$\alpha_\nu = (1+\rho/\sigma)^{d/2} \exp\left(-\tfrac{\lambda}{2\sigma}\right), \qquad \xi = \tau\sigma, \qquad q = \exp\left(-\tfrac{Q}{2\sigma}\right) \qquad (4)$$
yields the following assignments in the limit as $\sigma \to 0$:
$$z_{it} = \arg\min_k \{J_k\}, \qquad J_k = \begin{cases} \|y_{it} - \theta_{kt}\|^2 & \text{if } \theta_k \text{ instantiated} \\ Q\Delta t_k + \dfrac{\|y_{it} - \theta_{k(t-\Delta t_k)}\|^2}{\tau\Delta t_k + 1} & \text{if } \theta_k \text{ old, uninstantiated} \\ \lambda & \text{if } \theta_k \text{ new} \end{cases} \qquad (5)$$
In this assignment step, $Q\Delta t_k$ acts as a cost penalty for reviving old clusters that increases with the time since the cluster was last seen, $\tau\Delta t_k$ acts as a cost reduction to account for the possible motion of clusters since they were last instantiated, and $\lambda$ acts as a cost penalty for introducing a new cluster.
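The three branches of equation (5) translate directly into code. Below is a minimal sketch; the dictionary layout of active and old clusters is our own bookkeeping, not notation from the paper.

```python
import numpy as np

def assign_label(y, active, old, lam, Q, tau):
    """Pick a label for one observation via the costs J_k of equation (5).
    active: id -> current mean theta_kt (instantiated clusters).
    old:    id -> (last mean theta_{k(t-dt)}, dt steps unobserved).
    Returns ("new" or cluster id, winning cost)."""
    costs = {"new": lam}                      # cost of a brand-new cluster
    for k, theta in active.items():
        costs[k] = np.sum((y - theta) ** 2)   # plain squared distance
    for k, (theta_old, dt) in old.items():
        # revival: Q*dt penalty, distance discounted by cluster motion
        costs[k] = Q * dt + np.sum((y - theta_old) ** 2) / (tau * dt + 1.0)
    best = min(costs, key=costs.get)
    return best, costs[best]
```

With `lam` large, points prefer joining existing or revived clusters; as `Q * dt` grows, an unobserved cluster eventually becomes more expensive to revive than starting fresh.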
3.2 Setting Parameters Given Labels
In the parameter sampling step, the parameters are sampled using the distribution
$$p(\theta_{kt} \mid \{y_{it} : z_{it} = k\}) \propto p(\{y_{it} : z_{it} = k\} \mid \theta_{kt})\, p(\theta_{kt}) \qquad (6)$$
There are two cases to consider when setting a parameter $\theta_{kt}$. Either $\Delta t_k = 0$ and the cluster is new in the current time step, or $\Delta t_k > 0$ and the cluster was previously created, disappeared for some amount of time, and then was revived in the current time step.

New Cluster. Suppose cluster $k$ is being newly created. In this case, $\theta_{kt} \sim \mathcal{N}(\phi, \rho I)$. Using the fact that a normal prior is conjugate to a normal likelihood, the closed-form posterior for $\theta_{kt}$ is
$$\theta_{kt} \mid \{y_{it} : z_{it} = k\} \sim \mathcal{N}(\theta_{\mathrm{post}}, \sigma_{\mathrm{post}}), \qquad \theta_{\mathrm{post}} = \sigma_{\mathrm{post}}\left(\frac{\phi}{\rho} + \frac{\sum_{i=1}^{n_{kt}} y_{it}}{\sigma}\right), \qquad \sigma_{\mathrm{post}} = \left(\frac{1}{\rho} + \frac{n_{kt}}{\sigma}\right)^{-1} \qquad (7)$$
Then letting $\sigma \to 0$,
$$\theta_{kt} = \frac{\sum_{i=1}^{n_{kt}} y_{it}}{n_{kt}} \stackrel{\text{def}}{=} m_{kt} \qquad (8)$$
where $m_{kt}$ is the mean of the observations in the current timestep.

Revived Cluster. Suppose there are $\Delta t_k$ time steps where cluster $k$ was not observed, but there are now $n_{kt}$ data points with mean $m_{kt}$ assigned to it in this time step. In this case,
$$p(\theta_{kt}) = \int_\Omega T(\theta_{kt} \mid \theta)\, p(\theta)\, d\theta, \qquad \theta \sim \mathcal{N}(\theta_0, \sigma_0). \qquad (9)$$
Again using conjugacy of normal likelihoods and priors,
$$\theta_{kt} \mid \{y_{it} : z_{it} = k\} \sim \mathcal{N}(\theta_{\mathrm{post}}, \sigma_{\mathrm{post}}), \qquad \theta_{\mathrm{post}} = \sigma_{\mathrm{post}}\left(\frac{\theta_0}{\xi\Delta t_k + \sigma_0} + \frac{\sum_{i=1}^{n_{kt}} y_{it}}{\sigma}\right), \qquad \sigma_{\mathrm{post}} = \left(\frac{1}{\xi\Delta t_k + \sigma_0} + \frac{n_{kt}}{\sigma}\right)^{-1} \qquad (10)$$
Similarly to the label assignment step, let $\xi = \tau\sigma$. Then as long as $\sigma_0 = \sigma/w$, $w > 0$ (which holds if equation (10) is used to recursively keep track of the parameter posterior), taking the asymptotic limit of this as $\sigma \to 0$ yields:
$$\theta_{kt} = \frac{\theta_0\,(w^{-1} + \Delta t_k \tau)^{-1} + n_{kt}\, m_{kt}}{(w^{-1} + \Delta t_k \tau)^{-1} + n_{kt}} \qquad (11)$$
that is to say, the revived $\theta_{kt}$ is a weighted average of estimates using current timestep data and previous timestep data. $\tau$ controls how much the current data is favored: as $\tau$ increases, the weight on current data increases, which is explained by the fact that our uncertainty in where the old $\theta_0$ transitioned to increases with $\tau$. It is also noted that if $\tau = 0$, this reduces to a simple weighted average using the amount of data collected as weights.

Combined Update. Combining the updates for new cluster parameters and old transitioned cluster parameters yields a recursive update scheme:
$$\gamma_{kt} = \left((w_{k(t-\Delta t_k)})^{-1} + \Delta t_k \tau\right)^{-1}, \qquad \theta_{kt} = \frac{\gamma_{kt}\,\theta_{k(t-\Delta t_k)} + n_{kt}\, m_{kt}}{\gamma_{kt} + n_{kt}}, \qquad w_{kt} = \gamma_{kt} + n_{kt} \qquad (12)$$
with $\theta_{k0} = m_{k0}$ and $w_{k0} = n_{k0}$, where time step 0 here corresponds to when the cluster is first created. An interesting interpretation of this update is that it behaves like a standard Kalman filter, in which $w_{kt}^{-1}$ serves as the current estimate variance, $\tau$ serves as the process noise variance, and $n_{kt}$ serves as the inverse of the measurement variance.
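Equation (12) is a one-liner per cluster; the sketch below makes the Kalman-filter reading explicit (variable names are ours).

```python
def revive_update(theta_prev, w_prev, dt, tau, m, n):
    """Combined parameter update of equation (12) for a revived cluster.
    theta_prev, w_prev: mean and weight when the cluster was last seen.
    dt: steps unobserved; tau: motion parameter; m, n: batch mean, count.
    For a brand-new cluster, use theta = m and w = n directly."""
    gamma = 1.0 / (1.0 / w_prev + dt * tau)  # prior weight, deflated by motion
    theta = (gamma * theta_prev + n * m) / (gamma + n)
    return theta, gamma + n
```

With `tau = 0` the update collapses to a weighted average by observation counts, exactly as noted after equation (11); with large `tau` or `dt`, `gamma` shrinks and the new batch mean dominates.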
Algorithm 1 Dynamic Means
  Input: {Y_t}_{t=1}^{t_f}, Q, λ, τ
  C_1 ← ∅
  for t = 1 → t_f do
    (K_t, Z_t, L_t) ← CLUSTER(Y_t, C_t, Q, λ, τ)
    C_{t+1} ← UPDATEC(Z_t, K_t, C_t)
  end for
  return {K_t, Z_t, L_t}_{t=1}^{t_f}

Algorithm 2 CLUSTER
  Input: Y_t, C_t, Q, λ, τ
  K_t ← ∅, Z_t ← ∅, L_0 ← ∞
  for n = 1 → ∞ do
    (Z_t, K_t) ← ASSIGNLABELS(Y_t, Z_t, K_t, C_t)
    (K_t, L_n) ← ASSIGNPARAMS(Y_t, Z_t, C_t)
    if L_n = L_{n-1} then
      return K_t, Z_t, L_n
    end if
  end for

4 The Dynamic Means Algorithm
In this section, some further notation is required for brevity:
$$Y_t = \{y_{it}\}_{i=1}^{N_t}, \qquad Z_t = \{z_{it}\}_{i=1}^{N_t}, \qquad K_t = \{(\theta_{kt}, w_{kt}) : n_{kt} > 0\}, \qquad C_t = \{(\Delta t_k, \theta_{k(t-\Delta t_k)}, w_{k(t-\Delta t_k)})\} \qquad (13)$$
where $Y_t$ and $Z_t$ are the sets of observations and labels at time step $t$, $K_t$ is the set of currently active clusters (some are new with $\Delta t_k = 0$, and some are revived with $\Delta t_k > 0$), and $C_t$ is the set of old cluster information.
4.1 Algorithm Description
As shown in the previous section, the low-variance asymptotic limit of the DDP Gibbs sampling algorithm is a deterministic observation label update (5) followed by a deterministic, weighted least-squares parameter update (12). Inspired by the original k-means algorithm, applying these two updates iteratively yields an algorithm which clusters a set of observations at a single time step given cluster means and weights from past time steps (Algorithm 2). Applying Algorithm 2 to a sequence of batches of data yields a clustering procedure that is able to track a set of dynamically evolving clusters (Algorithm 1), and allows new clusters to emerge and old clusters to be forgotten. While this is the primary application of Algorithm 1, the sequence of batches need not be a temporal sequence. For example, Algorithm 1 may be used as an any-time clustering algorithm for large datasets, where the sequence of batches is generated by selecting random subsets of the full dataset.

The ASSIGNPARAMS function is exactly the update from equation (12) applied to each $k \in K_t$. Similarly, the ASSIGNLABELS function applies the update from equation (5) to each observation; however, in the case that a new cluster is created or an old one is revived by an observation, ASSIGNLABELS also creates a parameter for that new cluster based on the parameter update equation (12) with that single observation. Note that the performance of the algorithm depends on the order in which ASSIGNLABELS assigns labels. Multiple random restarts of the algorithm with different assignment orders may be used to mitigate this dependence. The UPDATEC function is run after clustering observations from each time step, and constructs $C_{t+1}$ by setting $\Delta t_k = 1$ for any new or revived cluster, and by incrementing $\Delta t_k$ for any old cluster that was not revived:
$$C_{t+1} = \{(\Delta t_k + 1, \theta_{k(t-\Delta t_k)}, w_{k(t-\Delta t_k)}) : k \in C_t, k \notin K_t\} \cup \{(1, \theta_{kt}, w_{kt}) : k \in K_t\} \qquad (14)$$
An important question is whether this algorithm is guaranteed to converge while clustering data in each time step. Indeed, it is; Theorem 1 shows that a particular cost function $L_t$ monotonically decreases under the label and parameter updates (5) and (12) at each time step. Since $L_t \geq 0$, and it is monotonically decreased by Algorithm 2, the algorithm converges. Note that Dynamic Means is only guaranteed to converge to a local optimum, similarly to the k-means [12] and DP-means [15] algorithms.
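Putting the label update (5), the parameter update (12), and the cost (15) together gives a compact sketch of Algorithm 2. This is a simplified rendering with one fixed assignment order, no random restarts, and dictionary bookkeeping of our own design, not the authors' implementation.

```python
import numpy as np

def dynamic_means_cluster(Y, C, lam, Q, tau, max_iter=100):
    """Sketch of Algorithm 2 (CLUSTER) on one batch Y of shape (n, d).
    C maps an old cluster id -> (dt, theta_old, w_old).
    Returns (labels, K) with K: id -> (theta, w)."""
    n = len(Y)
    labels = np.full(n, -1)
    next_id = max(C, default=-1) + 1
    K, prev_cost = {}, np.inf
    for _ in range(max_iter):
        # --- AssignLabels: sequential pass using equation (5) ---
        for i, y in enumerate(Y):
            opts = {("new", None): lam}
            for k, (theta, _w) in K.items():
                opts[("act", k)] = np.sum((y - theta) ** 2)
            for k, (dt, th_o, _wo) in C.items():
                if k not in K:
                    opts[("old", k)] = Q * dt + np.sum((y - th_o) ** 2) / (tau * dt + 1)
            kind, k = min(opts, key=opts.get)
            if kind == "new":
                k = next_id
                next_id += 1
                K[k] = (y.astype(float).copy(), 1.0)
            elif kind == "old":
                dt, th_o, w_o = C[k]
                g = 1.0 / (1.0 / w_o + dt * tau)
                K[k] = ((g * th_o + y) / (g + 1), g + 1)  # eq. (12) with n = 1
            labels[i] = k
        # --- AssignParams + cost: equations (12) and (15) ---
        cost = 0.0
        for k in list(K):
            idx = np.flatnonzero(labels == k)
            if len(idx) == 0:
                del K[k]          # cluster lost all its points
                continue
            m, cnt = Y[idx].mean(axis=0), len(idx)
            if k in C:            # revived cluster
                dt, th_o, w_o = C[k]
                g = 1.0 / (1.0 / w_o + dt * tau)
                theta = (g * th_o + cnt * m) / (g + cnt)
                cost += Q * dt + g * np.sum((theta - th_o) ** 2)
            else:                 # new cluster
                g, theta = 0.0, m
                cost += lam
            K[k] = (theta, g + cnt)
            cost += np.sum((Y[idx] - theta) ** 2)
        if cost >= prev_cost:     # cost stopped decreasing: converged
            break
        prev_cost = cost
    return labels, K
```

Run across a sequence of batches, carrying `K` forward into `C` (incrementing each unobserved cluster's `dt` as in equation (14)), this reproduces the outer loop of Algorithm 1.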
Theorem 1. Each iteration in Algorithm 2 monotonically decreases the cost function $L_t$, where
$$L_t = \sum_{k \in K_t} \Bigg( \underbrace{\lambda[\Delta t_k = 0]}_{\text{new cost}} + \underbrace{Q\Delta t_k}_{\text{revival cost}} + \underbrace{\gamma_{kt}\,\|\theta_{kt} - \theta_{k(t-\Delta t_k)}\|_2^2 + \sum_{\substack{y_{it} \in Y_t \\ z_{it} = k}} \|y_{it} - \theta_{kt}\|_2^2}_{\text{weighted-prior sum-squares cost}} \Bigg) \qquad (15)$$
The cost function is comprised of a number of components for each currently active cluster $k \in K_t$: a penalty for new clusters based on $\lambda$, a penalty for old clusters based on $Q$ and $\Delta t_k$, and finally a prior-weighted sum of squared distance cost for all the observations in cluster $k$. It is noted that for new clusters, $\theta_{kt} = \theta_{k(t-\Delta t_k)}$ since $\Delta t_k = 0$, so the least-squares cost is unweighted. The ASSIGNPARAMS function calculates this cost function in each iteration of Algorithm 2, and the algorithm terminates once the cost function does not decrease during an iteration.
4.2 Reparameterizing the Algorithm
In order to use the Dynamic Means algorithm, there are three free parameters to select: $\lambda$, $Q$, and $\tau$. While $\lambda$ represents how far an observation can be from a cluster before it is placed in a new cluster, and thus can be tuned intuitively, $Q$ and $\tau$ are not so straightforward. The parameter $Q$ represents a conceptual added distance from any data point to a cluster for every time step that the cluster is not observed. The parameter $\tau$ represents a conceptual reduction of distance from any data point to a cluster for every time step that the cluster is not observed. How these two quantities affect the algorithm, and how they interact with the setting of $\lambda$, is hard to judge.

Instead of picking $Q$ and $\tau$ directly, the algorithm may be reparameterized by picking $N_Q, k_\tau \in \mathbb{R}_+$, $N_Q > 1$, $k_\tau \geq 1$, and given a choice of $\lambda$, setting
$$Q = \lambda / N_Q, \qquad \tau = \frac{N_Q(k_\tau - 1) + 1}{N_Q - 1}. \qquad (16)$$
If $Q$ and $\tau$ are set in this manner, $N_Q$ represents the number (possibly fractional) of time steps a cluster can be unobserved before the label update (5) will never revive that cluster, and $k_\tau \lambda$ represents the maximum squared distance away from a cluster center such that after a single time step, the label update (5) will revive that cluster. As $N_Q$ and $k_\tau$ are specified in terms of concrete algorithmic behavior, they are intuitively easier to set than $Q$ and $\tau$.
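Equation (16) is trivial to implement, and its defining property, that a cluster unobserved for one step is revived at squared distance up to $k_\tau \lambda$ at exactly cost $\lambda$, can be checked numerically:

```python
def reparameterize(lam, N_Q, k_tau):
    """Map the intuitive knobs (N_Q, k_tau) of equation (16) to (Q, tau).
    N_Q > 1: steps before an unobserved cluster can never be revived.
    k_tau >= 1: max squared distance (in units of lam) for one-step revival."""
    assert N_Q > 1 and k_tau >= 1
    Q = lam / N_Q
    tau = (N_Q * (k_tau - 1) + 1) / (N_Q - 1)
    return Q, tau
```

Plugging the resulting $Q$ and $\tau$ into the revival branch of equation (5) at $\Delta t_k = 1$ and distance $\|y - \theta\|^2 = k_\tau\lambda$ gives $Q + k_\tau\lambda/(\tau + 1) = \lambda$, the break-even point against creating a new cluster.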
5 Related Work
Prior k-means clustering algorithms that determine the number of clusters present in the data have
primarily involved a method for iteratively modifying k using various statistical criteria [13, 14, 18].
In contrast, this work derives this capability from a Bayesian nonparametric model, similarly to
the DP-Means algorithm [15]. In this sense, the relationship between the Dynamic Means algorithm and the dependent Dirichlet process [6] is exactly that between the DP-Means algorithm and
Dirichlet process [16], where the Dynamic Means algorithm may be seen as an extension to the
DP-Means that handles sequential data with time-varying cluster parameters. MONIC [19] and
MC3 [20] have the capability to monitor time-varying clusters; however, these methods require datapoints to be identifiable across timesteps, and determine cluster similarity across timesteps via the
commonalities between label assignments. The Dynamic Means algorithm does not require such
information, and tracks clusters essentially based on similarity of the parameters across timesteps.
Evolutionary clustering [21, 22], similar to Dynamic Means, minimizes an objective consisting of
a cost for clustering the present data set and a cost related to the comparison between the current
clustering and past clusterings. The present work can be seen as a theoretically-founded extension
of this class of algorithm that provides methods for automatic and adaptive prior weight selection,
forming correspondences between old and current clusters, and for deciding when to introduce new
clusters. Finally, some sequential Monte-Carlo methods (e.g. particle learning [23] or multi-target
tracking [24, 25]) can be adapted for use in the present context, but suffer the drawbacks typical of
particle filtering methods.
6 Applications

6.1 Synthetic Gaussian Motion Data
In this experiment, moving Gaussian clusters on $[0, 1] \times [0, 1]$ were generated synthetically over a
period of 100 time steps. In each step, there was some number of clusters, each having 15 data points.
The data points were sampled from a symmetric Gaussian distribution with a standard deviation of
0.05. Between time steps, the cluster centers moved randomly, with displacements sampled from
the same distribution. At each time step, each cluster had a 0.05 probability of being destroyed.
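The synthetic data above can be reproduced in a few lines. The birth mechanism and the initial number of clusters are not fully specified in the text, so `p_birth` and the initial count of 3 are assumptions of this sketch.

```python
import numpy as np

def generate_moving_clusters(T=100, n_per=15, sd=0.05, p_death=0.05,
                             p_birth=0.10, seed=0):
    """Moving Gaussian clusters on the unit square, as in Section 6.1:
    centers take N(0, sd^2) random-walk steps, each cluster dies with
    probability p_death per step, and each cluster emits n_per points."""
    rng = np.random.default_rng(seed)
    centers = [rng.random(2) for _ in range(3)]   # assumed initial count
    batches = []
    for _ in range(T):
        # random-walk motion, clipped to the unit square
        centers = [np.clip(c + rng.normal(0, sd, 2), 0, 1) for c in centers]
        # death and (assumed) birth of clusters; keep at least one alive
        centers = [c for c in centers if rng.random() >= p_death]
        if rng.random() < p_birth or not centers:
            centers.append(rng.random(2))
        batches.append(np.vstack([c + rng.normal(0, sd, (n_per, 2))
                                  for c in centers]))
    return batches
```

Each element of the returned list is one time step's batch, ready to feed to the batch-sequential clustering loop of Algorithm 1.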
Figure 1: (1a - 1c): Accuracy contours and CPU time histogram for the Dynamic Means algorithm. (1d - 1e): Comparison with Gibbs
sampling, variational inference, and particle learning. Shaded region indicates the 1σ interval; in (1e), only the upper half is shown. (1f): Comparison
of accuracy when enforcing (Gibbs, DynMeans) and not enforcing (Gibbs NC, DynMeans NC) correct cluster tracking.
This data was clustered with Dynamic Means (with 3 random assignment ordering restarts), DDPGMM Gibbs sampling [6], variational inference [3], and particle learning [4] on a computer with
an Intel i7 processor and 16GB of memory. First, the number of clusters was fixed to 5, and the
parameter space of each algorithm was searched for the best possible cluster label accuracy (taking
into account correct cluster tracking across time steps). The results of this parameter sweep for
the Dynamic Means algorithm with 50 trials at each parameter setting are shown in Figures 1a–1c.
Figures 1a and 1b show how the average clustering accuracy varies with the parameters after fixing
either k_τ or T_Q to their values at the maximum accuracy parameter setting over the full space. The
Dynamic Means algorithm had a similar robustness with respect to variations in its parameters as
the comparison algorithms. The histogram in Figure 1c demonstrates that the clustering speed is
robust to the setting of parameters. The speed of Dynamic Means, coupled with the smoothness of
its performance with respect to its parameters, makes it well suited for automatic tuning [26].
Using the best parameter setting for each algorithm, the data as described above were clustered in
50 trials with a varying number of clusters present in the data. For the Dynamic Means algorithm,
parameter values λ = 0.04, T_Q = 6.8, and k_τ = 1.01 were used, and the algorithm was again given
3 attempts with random labeling assignment orders, where the lowest cost solution of the 3 was
picked to proceed to the next time step. For the other algorithms, the parameter values α = 1 and
q = 0.05 were used, with a Gaussian transition distribution variance of 0.05. The number of samples
for the Gibbs sampling algorithm was 5000 with one recorded for every 5 samples, the number of
particles for the particle learning algorithm was 100, and the variational inference algorithm was run
to a tolerance of 10^{-20} with the maximum number of iterations set to 5000.
In Figures 1d and 1e, the labeling accuracy and clustering time (respectively) for the algorithms are
shown. The sampling algorithms were handicapped to generate Figure 1d; the best posterior sample
in terms of labeling accuracy was selected at each time step, which required knowledge of the true
labeling. Further, the accuracy computation included enforcing consistency across timesteps, to
allow tracking individual cluster trajectories. If this is not enforced (i.e. accuracy considers each
time step independently), the other algorithms provide accuracies more comparable to those of the
Dynamic Means algorithm. This effect is demonstrated in Figure 1f, which shows the time/accuracy
tradeoff for Gibbs sampling (varying the number of samples) and Dynamic Means (varying the
number of restarts). These examples illustrate that Dynamic Means outperforms standard inference
algorithms in both label accuracy and computation time for cluster tracking problems.
Figure 2: Results of the GP aircraft trajectory clustering. Left: A map (labeled with major US city airports) showing the overall aircraft flows
for 12 trajectories, with colors and 1σ confidence ellipses corresponding to takeoff region (multiple clusters per takeoff region), colored dots
indicating mean takeoff position for each cluster, and lines indicating the mean trajectory for each cluster. Right: A track of plane counts for
the 12 clusters during the week, with color intensity proportional to the number of takeoffs at each time.
6.2 Aircraft Trajectory Clustering
In this experiment, the Dynamic Means algorithm was used to find the typical spatial and temporal patterns in the motions of commercial aircraft. Automatic dependent surveillance-broadcast
(ADS-B) data, including plane identification, timestamp, latitude, longitude, heading and speed,
was collected from all transmitting planes across the United States during the week from 2013-3-22
1:30:0 to 2013-3-28 12:0:0 UTC. Then, individual ADS-B messages were connected together based
on their plane identification and timestamp to form trajectories, and erroneous trajectories were filtered based on reasonable spatial/temporal bounds, yielding 17,895 unique trajectories. Then, for
each trajectory, a Gaussian process was trained using the latitude and longitude of each ADS-B
point along the trajectory as the inputs and the North and East components of plane velocity at those
points as the outputs. Next, the mean latitudinal and longitudinal velocities from the Gaussian process were queried for each point on a regular lattice across the USA (10 latitudes and 20 longitudes),
and used to create a 400-dimensional feature vector for each trajectory. Of the resulting 17,895
feature vectors, 600 were hand-labeled (each label including a confidence weight in [0, 1]). The
feature vectors were clustered using the DP-Means algorithm on the entire dataset in a single batch,
and using Dynamic Means / DDPGMM Gibbs sampling (with 50 samples) with half-hour takeoff
window batches.
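The feature construction above can be sketched with a bare-bones GP regressor. The RBF kernel, length scale, noise level, and helper names are illustrative assumptions, not the paper's exact setup; the key point is that querying the posterior-mean north/east velocities on a 10 × 20 lattice yields a 400-dimensional vector per trajectory.

```python
import numpy as np

def gp_posterior_mean(X, y, Xq, length_scale=1.0, noise=1e-2):
    """GP regression posterior mean, RBF kernel k(x,x') = exp(-||x-x'||^2 / (2 l^2))."""
    def k(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-0.5 * d2 / length_scale ** 2)
    K = k(X, X) + noise * np.eye(len(X))
    return k(Xq, X) @ np.linalg.solve(K, y)

def trajectory_feature(latlon, vel_ne, lat_grid, lon_grid):
    """Query GP mean north/east velocity on a lattice; flatten to one vector."""
    Xq = np.array([[la, lo] for la in lat_grid for lo in lon_grid])
    mean_n = gp_posterior_mean(latlon, vel_ne[:, 0], Xq)
    mean_e = gp_posterior_mean(latlon, vel_ne[:, 1], Xq)
    return np.concatenate([mean_n, mean_e])   # length = 2 * |lattice|

rng = np.random.default_rng(1)
latlon = rng.uniform(0, 1, size=(30, 2))      # stand-in trajectory points
vel_ne = rng.normal(size=(30, 2))             # stand-in velocity observations
feat = trajectory_feature(latlon, vel_ne,
                          np.linspace(0, 1, 10), np.linspace(0, 1, 20))
```

With 10 latitudes and 20 longitudes the lattice has 200 points, so `feat` has 2 × 200 = 400 entries, matching the dimensionality quoted in the text.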
Table 1: Mean computational time & accuracy on hand-labeled aircraft trajectory data

Alg.     % Acc.   Time (s)
DynM     55.9     2.7 × 10^2
DPM      55.6     3.1 × 10^3
Gibbs    36.9     1.4 × 10^4

The results of this exercise are provided in Figure 2 and Table 1. Figure 2 shows the spatial and
temporal properties of the 12 most popular clusters discovered by Dynamic Means, demonstrating
that the algorithm successfully identified major flows of commercial aircraft across the US. Table 1
corroborates these qualitative results with a quantitative comparison of the computation time and
accuracy for the three algorithms tested over 20 trials. The
confidence-weighted accuracy was computed by taking the ratio between the sum of the weights
for correctly labeled points and the sum of all weights. The DDPGMM Gibbs sampling algorithm
was handicapped as described in the synthetic experiment section. Of the three algorithms, Dynamic
Means provided the highest labeling accuracy, while requiring orders of magnitude less computation
time than both DP-Means and DDPGMM Gibbs sampling.
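Concretely, the confidence-weighted accuracy described above is:

```python
def weighted_accuracy(correct, weights):
    """Ratio of the summed weights of correctly labeled points to the
    sum of all weights (weights are per-label confidences in [0, 1])."""
    return sum(w for c, w in zip(correct, weights) if c) / sum(weights)
```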
7 Conclusion
This work developed a clustering algorithm for batch-sequential data containing temporally evolving
clusters, derived from a low-variance asymptotic analysis of the Gibbs sampling algorithm for the
dependent Dirichlet process mixture model. Synthetic and real data experiments demonstrated that
the algorithm requires orders of magnitude less computational time than contemporary probabilistic
and hard clustering algorithms, while providing higher accuracy on the examined datasets. The
speed of inference coupled with the convergence guarantees provided yield an algorithm which
is suitable for use in time-critical applications, such as online model-based autonomous planning
systems.
Acknowledgments
This work was supported by NSF award IIS-1217433 and ONR MURI grant N000141110688.
References
[1] Yee Whye Teh. Dirichlet processes. In Encyclopedia of Machine Learning. Springer, New York, 2010.
[2] Radford M. Neal. Markov chain sampling methods for Dirichlet process mixture models. Journal of Computational and Graphical Statistics, 9(2):249–265, 2000.
[3] David M. Blei and Michael I. Jordan. Variational inference for Dirichlet process mixtures. Bayesian Analysis, 1(1):121–144, 2006.
[4] Carlos M. Carvalho, Hedibert F. Lopes, Nicholas G. Polson, and Matt A. Taddy. Particle learning for general mixtures. Bayesian Analysis, 5(4):709–740, 2010.
[5] Steven N. MacEachern. Dependent nonparametric processes. In Proceedings of the Bayesian Statistical Science Section. American Statistical Association, 1999.
[6] Dahua Lin, Eric Grimson, and John Fisher. Construction of dependent Dirichlet processes based on Poisson processes. In Neural Information Processing Systems, 2010.
[7] Matt Hoffman, David Blei, Chong Wang, and John Paisley. Stochastic variational inference. arXiv ePrint 1206.7051, 2012.
[8] Finale Doshi-Velez and Zoubin Ghahramani. Accelerated sampling for the Indian buffet process. In Proceedings of the International Conference on Machine Learning, 2009.
[9] Felix Endres, Christian Plagemann, Cyrill Stachniss, and Wolfram Burgard. Unsupervised discovery of object classes from range data using latent Dirichlet allocation. In Robotics Science and Systems, 2005.
[10] Matthias Luber, Kai Arras, Christian Plagemann, and Wolfram Burgard. Classifying dynamic objects: An unsupervised learning approach. In Robotics Science and Systems, 2004.
[11] Zhikun Wang, Marc Deisenroth, Heni Ben Amor, David Vogt, Bernard Schölkopf, and Jan Peters. Probabilistic modeling of human movements for intention inference. In Robotics Science and Systems, 2008.
[12] Stuart P. Lloyd. Least squares quantization in PCM. IEEE Transactions on Information Theory, 28(2):129–137, 1982.
[13] Dan Pelleg and Andrew Moore. X-means: Extending k-means with efficient estimation of the number of clusters. In Proceedings of the 17th International Conference on Machine Learning, 2000.
[14] Robert Tibshirani, Guenther Walther, and Trevor Hastie. Estimating the number of clusters in a data set via the gap statistic. Journal of the Royal Statistical Society B, 63(2):411–423, 2001.
[15] Brian Kulis and Michael I. Jordan. Revisiting k-means: New algorithms via Bayesian nonparametrics. In Proceedings of the 29th International Conference on Machine Learning (ICML), Edinburgh, Scotland, 2012.
[16] Thomas S. Ferguson. A Bayesian analysis of some nonparametric problems. The Annals of Statistics, 1(2):209–230, 1973.
[17] Jayaram Sethuraman. A constructive definition of Dirichlet priors. Statistica Sinica, 4:639–650, 1994.
[18] Tsunenori Ishioka. Extended k-means with an efficient estimation of the number of clusters. In Proceedings of the 2nd International Conference on Intelligent Data Engineering and Automated Learning, pages 17–22, 2000.
[19] Myra Spiliopoulou, Irene Ntoutsi, Yannis Theodoridis, and Rene Schult. MONIC – modeling and monitoring cluster transitions. In Proceedings of the 12th International Conference on Knowledge Discovery and Data Mining, pages 706–711, 2006.
[20] Panos Kalnis, Nikos Mamoulis, and Spiridon Bakiras. On discovering moving clusters in spatio-temporal data. In Proceedings of the 9th International Symposium on Spatial and Temporal Databases, pages 364–381. Springer, 2005.
[21] Deepayan Chakrabarti, Ravi Kumar, and Andrew Tomkins. Evolutionary clustering. In Proceedings of the SIGKDD International Conference on Knowledge Discovery and Data Mining, 2006.
[22] Kevin Xu, Mark Kliger, and Alfred Hero III. Adaptive evolutionary clustering. Data Mining and Knowledge Discovery, pages 1–33, 2012.
[23] Carlos M. Carvalho, Michael S. Johannes, Hedibert F. Lopes, and Nicholas G. Polson. Particle learning and smoothing. Statistical Science, 25(1):88–106, 2010.
[24] Carine Hue, Jean-Pierre Le Cadre, and Patrick Pérez. Tracking multiple objects with particle filtering. IEEE Transactions on Aerospace and Electronic Systems, 38(3):791–812, 2002.
[25] Jaco Vermaak, Arnaud Doucet, and Patrick Pérez. Maintaining multi-modality through mixture tracking. In Proceedings of the 9th IEEE International Conference on Computer Vision, 2003.
[26] Jasper Snoek, Hugo Larochelle, and Ryan Adams. Practical Bayesian optimization of machine learning algorithms. In Neural Information Processing Systems, 2012.
k-Prototype Learning for 3D Rigid Structures*
Hu Ding
Department of Computer Science and Engineering
State University of New York at Buffalo
Buffalo, NY14260
[email protected]
Ronald Berezney
Department of Biological Sciences
State University of New York at Buffalo
Buffalo, NY14260
[email protected]
Jinhui Xu
Department of Computer Science and Engineering
State University of New York at Buffalo
Buffalo, NY14260
[email protected]
Abstract
In this paper, we study the following new variant of prototype learning, called
k-prototype learning problem for 3D rigid structures: Given a set of 3D rigid
structures, find a set of k rigid structures so that each of them is a prototype for
a cluster of the given rigid structures and the total cost (or dissimilarity) is minimized. Prototype learning is a core problem in machine learning and has a wide
range of applications in many areas. Existing results on this problem have mainly
focused on the graph domain. In this paper, we present the first algorithm for learning multiple prototypes from 3D rigid structures. Our result is based on a number
of new insights into rigid structure alignment, clustering, and prototype reconstruction, and is practically efficient with a quality guarantee. We validate our approach
using two types of data sets: random data and biological data of chromosome territories. Experiments suggest that our approach can effectively learn prototypes in
both types of data.
1 Introduction
Learning prototype from a set of given or observed objects is a core problem in machine learning,
and has numerous applications in computer vision, pattern recognition, data mining, bioinformatics,
etc. A commonly used approach for this problem is to formulate it as an optimization problem and
determine an object (called pattern or prototype) so as to maximize the total similarity (or minimize
the total difference) with the input objects. Such computed prototypes are often used to classify or
index large-size structural data so that queries can be efficiently answered by only considering those
prototypes. Other important applications of prototype include reconstructing object from partially
observed snapshots and identifying common (or hidden) pattern from a set of data items.
In this paper, we study a new prototype learning problem called k-prototype learning for 3D rigid
structures, where a 3D rigid structure is a set of points in R3 whose pairwise distances remain
invariant under rigid transformation. Since our problem needs to determine k prototypes, it thus can
be viewed as two tightly coupled problems, clustering rigid structures and prototype reconstruction
for each cluster.
Our problem is motivated by an important application in biology for determining the spatial organization pattern of chromosome territories from a population of cells. Recent research in biology [3] has suggested that the configuration of chromosome territories could significantly influence cell molecular processes, and is closely related to cancer-promoting chromosome translocations. Thus, finding the spatial organization pattern of chromosome territories is a key step to understanding cell molecular processes [6,7,10,25]. Since the set of observed chromosome territories in each cell can be represented as a 3D rigid structure, the problem can thus be formulated as a k-prototype learning problem for a set of 3D rigid structures.

*This research was supported in part by NSF under grant IIS-1115220.
Related work: Prototype learning has a long and rich history. Most of the research has focused on
finding prototype in the graph domain. Jiang et al. [18] introduced the median graph concept, which
can be viewed as the prototype of a set of input graphs, and presented a genetic approach to solve
it. Later, Ferrer et al. [14] proposed another efficient method for median graph. Their idea is to first
embed the graphs into some metric space, and obtain the median using a recursive procedure. In the
geometric domain, quite a number of results have concentrated on finding prototypes from a set of
2D shapes [11,20,21,22]. A commonly used strategy in these methods is to first represent each shape
as a graph abstraction and then compute the median of the graph abstractions.
Our prototype learning problem is clearly related to the challenging 3D rigid structure clustering
and alignment problem [1,2,4,5,13,17]. Due to its complex nature, most of the existing approaches
are heuristic algorithms and thus cannot guarantee the quality of solution. There are also some
theoretical results [13] on this problem, but none of them is practical due to their high complexities.
Our contributions and main ideas:¹ Our main objective on this problem is to obtain a practical
solution which has guarantee on the quality of its solution. For this purpose, we first give a formal
definition of the problem and then consider two cases of the problem, 1-prototype learning and
k-prototype learning.
For 1-prototype learning, we first present a practical algorithm for the alignment problem. Our result
is based on a multi-level net technique which finds the proper Euler angles for the rigid transformation. With this alignment algorithm, we can then reduce the prototype learning problem to a new
problem called chromatic clustering (see Figure 1(b) and 1(c)), and present two approximate solutions for it. Finally, a local improvement algorithm is introduced to iteratively improve the quality
of the obtained prototype.
For k-prototype learning, a key challenge is how to avoid the high complexity associated with clustering 3D rigid structures. Our idea is to map each rigid structure to a point in some metric space
and build a correlation graph to capture their pairwise similarity. We show that the correlation graph
is metric; this means that we can reduce the rigid structure clustering problem to a metric k-median
clustering problem on the correlation graph. Once obtaining the clustering, we can then use the
1-prototype learning algorithm on each cluster to generate the desired prototype. We also provide
techniques to deal with several practical issues, such as the unequal sizes of rigid structures and the
weaker metric property caused by imperfect alignment computation for the correlation graph.
We validate our algorithms by using two types of datasets. The first is randomly generated datasets
and the second is a real biological dataset of chromosome territories. Experiments suggest that our
approach can effectively reduce the cost in prototype learning.
2 Preliminaries
In this section, we introduce several definitions which will be used throughout this paper.
Definition 1 (m-Rigid Structure). Let P = {p_1, . . . , p_m} be a set of m points in 3D space. P is
an m-rigid structure if the distance between any pair of vertices pi and pj in P remains the same
under any rigid transformation, including translation, rotation, reflection and their combinations,
on P . For any rigid transformation T , the image of P under T is denoted as T (P ).
Definition 2 (Bipartite Matching). Let S1 and S2 be two point-sets in 3D space with |S1 | = |S2 |,
and G = (U, V, E) be their induced complete bipartite graph, where each vertex in U (or V )
corresponds to a unique point in S1 (or S2 ), and each edge in E is associated with a weight equal to
the Euclidean distance of the corresponding two points. The bipartite matching of S1 and S2 , is the
one-to-one match from S1 to S2 with the minimum total matching weight (denoted as Cost(S1 , S2 ))
in G (see Figure 1(a)).
¹Due to space limits, we put some details and proofs in the full version of this paper.
Note that the bipartite matching can be computed using some existing algorithms, such as the Hungarian algorithm [24].
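For small point sets, Cost(S1, S2) can be computed directly by brute force over permutations; the sketch below does exactly that (for larger m, the Hungarian algorithm gives the same answer in polynomial time, e.g. via scipy.optimize.linear_sum_assignment).

```python
import itertools
import numpy as np

def matching_cost(S1, S2):
    """Cost(S1, S2) from Definition 2: minimum-weight perfect matching
    between two equal-size point sets, with Euclidean edge weights.
    Brute force over all m! permutations -- only suitable for small m."""
    S1, S2 = np.asarray(S1, float), np.asarray(S2, float)
    assert S1.shape == S2.shape
    m = len(S1)
    best = float("inf")
    for perm in itertools.permutations(range(m)):
        cost = sum(np.linalg.norm(S1[i] - S2[perm[i]]) for i in range(m))
        best = min(best, cost)
    return best
```

Because the minimum is taken over all one-to-one matchings, the cost is invariant to reordering the points of either set.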
Fig. 1: (a) An example of bipartite matching (red edges); (b) 4 point-sets, each in a different
color; (c) chromatic clustering of the point-sets in (b). The three clusters form a chromatic partition.
Definition 3 (Alignment). Let P and Q be two m-rigid structures in 3D space with points
{p1 , ? ? ? , pm } and {q1 , ? ? ? , qm } respectively. Their alignment is to find a rigid transformation T
for P so as to minimize the cost of the bipartite matching between T (P ) and Q. The minimum
(alignment) cost, minT Cost(T (P ), Q), is denoted by A(P, Q).
Definition 4 (k-Prototype Learning). Let P_1, . . . , P_n be n different m-rigid structures in 3D, and
k be a positive integer. k-prototype learning is to determine k m-rigid structures, Q_1, . . . , Q_k, so
as to minimize the following objective function
\sum_{i=1}^{n} \min_{1 \le j \le k} A(P_i, Q_j).    (1)
From Definition 4, we know that the k-prototype learning problem can be viewed as first clustering
the rigid structures into k clusters and then building a prototype for each cluster so as to minimize the
total alignment cost.
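Given precomputed alignment costs, objective (1) reduces to a sum of row minima; a minimal numpy sketch (the cost matrix here is made up for illustration):

```python
import numpy as np

def k_prototype_objective(align_cost):
    """Objective (1): align_cost[i, j] holds A(P_i, Q_j); each structure
    P_i is charged to its nearest prototype, and the charges are summed."""
    return np.asarray(align_cost, dtype=float).min(axis=1).sum()

obj = k_prototype_objective([[1.0, 2.0], [3.0, 0.5]])
```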
3 1-Prototype Learning
In this section, we consider the 1-prototype learning problem. We first overview the main steps of
our algorithm and then present the details in each subsection. Our algorithm is an iterative procedure.
In each iteration, it constructs a new prototype using the one from previous iteration, and reduces
the objective value. A final prototype is obtained once the objective value becomes stable.
Algorithm: 1-prototype learning
1. Randomly select a rigid structure from the input {P_1, . . . , P_n} as the initial prototype Q.
2. Repeatedly perform the following steps until the objective value becomes stable.
(a) For each Pi , find the rigid transformation (approximately) realizing A(Pi , Q).
(b) Based on the new configuration (i.e., after the corresponding rigid transformation) of
each Pi , construct an updated prototype Q which minimizes the objective value.
Since both 2(a) and 2(b) reduce the cost, the objective value always decreases. In the next
two subsections, we discuss our ideas for Step 2(a) (alignment) and Step 2(b) (prototype reconstruction), respectively. Note that the above algorithm is different from generalized Procrustes analysis
(GPA) [15], since the points from each Pi are not required to be pre-labeled in our algorithm, while
for GPA every input point should have an individual index. This is also the main difficulty for this
prototype learning problem.
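To make the loop concrete, here is a stripped-down sketch. It is not the paper's algorithm: Step 2(a)'s rigid alignment is replaced by plain point matching (no rotation or translation search), and Step 2(b) uses the fact that, under squared-distance matching costs, averaging the matched points minimizes the total cost. The helper names and the small-m brute-force matcher are illustrative assumptions.

```python
import itertools
import numpy as np

def best_permutation(P, Q):
    # minimum squared-distance matching of P's points to Q's (small m only)
    m = len(P)
    return min(itertools.permutations(range(m)),
               key=lambda perm: sum(((P[i] - Q[perm[i]]) ** 2).sum()
                                    for i in range(m)))

def learn_prototype(structures, iters=20):
    """Simplified 1-prototype loop over a list of numpy point arrays:
    match each structure to the current prototype, then set each prototype
    point to the mean of the input points matched to it."""
    Q = structures[0].copy()
    for _ in range(iters):
        matched = []
        for P in structures:
            perm = best_permutation(P, Q)
            M = np.empty_like(Q)
            for i, j in enumerate(perm):
                M[j] = P[i]        # point of P matched to Q's j-th point
            matched.append(M)
        Q = np.mean(matched, axis=0)
    return Q
```

For two structures that differ by a small offset, the learned prototype converges to their pointwise average, mirroring the cost-decreasing behavior noted above.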
3.1 Alignment
To determine the alignment of two rigid structures, one way is to use our recent theoretical algorithm
for point-set matching [13]. For any pair of point-sets P and Q in R^d with m points each,
our algorithm outputs, in O((1/ε^{3d}) m^{2d+2} log^{2d}(m)) time, a rigid transformation T for P so that the
bipartite matching cost between T(P) and Q is a (1 + ε)-approximation of the optimal alignment
cost between P and Q, where ε > 0 is a small constant. Applying this algorithm to our 3D rigid
structures, the running time becomes O((1/ε^9) m^8 log^6(m)). The algorithm is based on the following key
idea. First, we show that there exist 3 ?critical? points, called base, in each of P and Q, which
control the matching cost. Although the base cannot be explicitly identified, it is possible to obtain
it implicitly by considering all 3-tuples of the points in P and Q. The algorithm then builds an ε-net
around each base point to determine an approximate rigid transformation. Clearly, this theoretical
algorithm is efficient only when m is small. For large m, we use the following relaxation.
First, we change the edge weight in Definition 2 from Euclidean distance to squared Euclidean
distance. The following lemma shows some nice property of such a change.
Lemma 1. Let P = {p1, · · · , pm} and Q = {q1, · · · , qm} be two m-rigid structures in 3D space, and T be the rigid transformation realizing the minimum bipartite matching cost (where the edge weight is replaced by the squared Euclidean distance of the corresponding points in Definition 2).
Then, the mean points of T (P ) and Q coincide with each other.
Lemma 1 tells us that to align two rigid structures, we can first translate them to share one common
mean point, and then consider only the rotation in 3D space. (Note that we can ignore reflection in
the rigid transformation, as it can be captured by computing the alignment twice, one for the original
rigid structure, and the other for its mirror image.) Using Euler angles and 3D rotation matrix, we
can easily have the following fact.
Fact 1. Given any rotation matrix A in 3D, there are 3 angles α, β, γ ∈ (−π, π] and three matrices A1, A2 and A3 such that A = A1 · A2 · A3, where

     | 1    0       0    |       |  cos β   0   sin β |       | cos γ   −sin γ   0 |
A1 = | 0   cos α  −sin α |, A2 = |    0     1     0   |, A3 = | sin γ    cos γ   0 |.
     | 0   sin α   cos α |       | −sin β   0   cos β |       |   0        0     1 |
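As a quick sanity check on Fact 1, the three elementary matrices can be multiplied out in code; the product should always be a proper rotation (orthogonal, with determinant +1). This is an illustrative snippet, not part of the paper's algorithm.

```python
import numpy as np

def euler_rotation(a, b, g):
    """A = A1(a) @ A2(b) @ A3(g), the decomposition in Fact 1."""
    A1 = np.array([[1, 0, 0],
                   [0, np.cos(a), -np.sin(a)],
                   [0, np.sin(a),  np.cos(a)]])
    A2 = np.array([[ np.cos(b), 0, np.sin(b)],
                   [0, 1, 0],
                   [-np.sin(b), 0, np.cos(b)]])
    A3 = np.array([[np.cos(g), -np.sin(g), 0],
                   [np.sin(g),  np.cos(g), 0],
                   [0, 0, 1]])
    return A1 @ A2 @ A3
```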
From the above Fact 1, we know that the main issue for aligning two rigid structures P and Q is
to find three proper angles α, β, γ to minimize the cost. Clearly, this is a non-convex optimization
problem. Thus, we cannot use existing convex optimization methods to obtain an efficient solution.
One way to solve this problem is to build a dense enough ε-net (or grid) in the domain [−π, π]³ of α, β, γ, and evaluate each grid point to find the best possible solution. Clearly, this will be rather
inefficient when the number of grid points is huge. To obtain a practically efficient solution, our
strategy is to generalize the idea of building a dense net to recursively building a sparse net, which is
called multi-level net. At each level, we partition the current searching domain into a set of smaller
regions, which can be viewed as a sparse net, and evaluate some representative point in each of the
smaller region to determine its likelihood of containing the optimal point. The recursion will only
continue at the most likely N smaller regions (for some well selected parameter N ≥ 1 in practice).
In this way, we can save a great deal of time for searching the optimal point in those unlikely regions.
Below is the main steps of our approach.
1. Let S be the current searching space, which is initialized as [−π, π]³, and t, N be two input
parameters. Recursively perform the following steps until the best objective value in two
consecutive recursive steps roughly remains the same.
(a) Uniformly partition S into t disjoint sub-regions S = S1 ∪ · · · ∪ St.
(b) Randomly select a representative point si ∈ Si, and compute the alignment cost under the rotational matrix corresponding to si via the Hungarian algorithm.
(c) Choose the top N points with the minimum objective values from {s1, · · · , st}. Let {st1, · · · , stN} be the chosen points.
(d) Update S = ∪_{i=1}^{N} S_{t_i}.
2. Output the rotation which yields the minimum objective value.
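The multi-level net can be sketched as follows. This is our own simplified reading of the procedure: boxes are split into octants rather than a generic t-way partition, box centres are evaluated instead of random representatives (which makes the test below deterministic), and SciPy's linear_sum_assignment plays the role of the Hungarian algorithm. Following Lemma 1, both structures are first translated to a common mean point.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def euler_rotation(a, b, g):
    ca, sa, cb, sb, cg, sg = np.cos(a), np.sin(a), np.cos(b), np.sin(b), np.cos(g), np.sin(g)
    A1 = np.array([[1, 0, 0], [0, ca, -sa], [0, sa, ca]])
    A2 = np.array([[cb, 0, sb], [0, 1, 0], [-sb, 0, cb]])
    A3 = np.array([[cg, -sg, 0], [sg, cg, 0], [0, 0, 1]])
    return A1 @ A2 @ A3

def matching_cost(P, Q):
    D = ((P[:, None, :] - Q[None, :, :]) ** 2).sum(-1)   # squared-distance edge weights
    r, c = linear_sum_assignment(D)                      # Hungarian algorithm
    return D[r, c].sum()

def halves(lo, hi):
    mid = 0.5 * (lo + hi)
    return [(lo, mid), (mid, hi)]

def align_rotation(P, Q, levels=5, keep=8):
    P = np.asarray(P, float); Q = np.asarray(Q, float)
    P = P - P.mean(0); Q = Q - Q.mean(0)                 # Lemma 1: share the mean point
    boxes = [((-np.pi, np.pi), (-np.pi, np.pi), (-np.pi, np.pi))]
    best_cost, best_R = np.inf, np.eye(3)
    for _ in range(levels):
        scored = []
        for (Ia, Ib, Ig) in boxes:
            for A in halves(*Ia):
                for B in halves(*Ib):
                    for G in halves(*Ig):                # the 8 octants of the box
                        a = 0.5 * (A[0] + A[1])
                        b = 0.5 * (B[0] + B[1])
                        g = 0.5 * (G[0] + G[1])
                        R = euler_rotation(a, b, g)
                        scored.append((matching_cost(P @ R.T, Q), (A, B, G), R))
        scored.sort(key=lambda t: t[0])                  # recurse into the most likely regions
        if scored[0][0] < best_cost:
            best_cost, best_R = scored[0][0], scored[0][2]
        boxes = [box for _, box, _ in scored[:keep]]
    return best_cost, best_R
```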
Why not use other alignment algorithms? There are several existing alignment algorithms for 3D
rigid structures, and each suffers from its own limitations. For example, the Iterative Closest Point
algorithm [4] is one of the most popular algorithms for alignment. However, it does not generate the
one-to-one match between the rigid structures. Instead, every point in one rigid structure is matched
to its nearest neighbor in the other rigid structure. This means that some point could match multiple
points in the other rigid structure. Obviously, this type of matching cannot meet our requirement,
especially in the biological application where chromosome territory is expected to match only one
chromosome. A similar problem also occurs in some other alignment algorithms [1,5,17]. Arun et
al. [2] presented an algebraic approach to find the best alignment between two 3D point-sets. Although their solution is a one-to-one match, it requires that the correspondence between the two
point-sets is known in advance, which is certainly not the case in our model. Branch-and-bound
(BB) approach [16] needs to grow a searching tree in the parameter space, and for each node it requires estimating the upper and lower bounds of the objective value in the corresponding sub-region.
But for our alignment problem, it is challenging to obtain such accurate estimations.
3.2
Prototype reconstruction
In this section, we discuss how to build a prototype from a set of 3D rigid structures. We first fix
the position of each Pi , and then construct a new prototype Q to minimize the objective function in
Definition 4. Our main idea is to introduce a new type of clustering problem called Chromatic Clustering, which was first introduced by Ding and Xu [12], and reduce our prototype reconstruction
problem to it. We start with two definitions.
Definition 5 (Chromatic Partition). Let G = {G1, · · · , Gn} be a set of n point-sets with each Gi consisting of m points in the space. A chromatic partition of G is a partition of the n · m points into m sets, U1, · · · , Um, such that each Uj contains exactly one point from each Gi.
Definition 6 (Chromatic Clustering). Let G = {G1, · · · , Gn} be a set of n point-sets with each Gi consisting of m points in the space. The chromatic clustering of G is to find m median points {q1, · · · , qm} in the space and a chromatic partition U1, · · · , Um of G such that Σ_{j=1}^{m} Σ_{p∈Uj} ||p − qj|| is minimized, where || · || denotes the Euclidean distance.
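A useful consequence of Definition 6: once the m median points are fixed, the chromatic constraint decouples across point-sets, so the best chromatic partition is obtained by solving an independent m × m bipartite matching for each Gi. A small evaluator along these lines (the function name is ours; the matching step uses SciPy's Hungarian-algorithm routine):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def chromatic_cost(G, medians):
    """Chromatic-clustering objective of Definition 6 for fixed median points.

    For fixed medians, each point-set Gi is matched one-to-one to the m medians
    independently of the other point-sets.
    """
    medians = np.asarray(medians, float)
    total = 0.0
    for Gi in G:
        Gi = np.asarray(Gi, float)
        D = np.linalg.norm(Gi[:, None, :] - medians[None, :, :], axis=-1)
        r, c = linear_sum_assignment(D)   # Hungarian algorithm
        total += D[r, c].sum()
    return total
```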
From Definition 6, we know that chromatic clustering is quite similar to k-median clustering in
Euclidean space; the only difference is that it has the chromatic requirement, i.e., the obtained k
clusters should be a chromatic partition (see Figure 1(b) and 1(c )).
Reduction to chromatic clustering. Since the position of each Pi is fixed (note that with a slight abuse of notation, we still use Pi to denote its image T(Pi) under the rigid transformation T obtained in Section 3.1), we can view each Pi as a point-set Gi, and the new prototype Q as the m median points {q1, · · · , qm} in Definition 6. Further, if a point p ∈ Pi is matched to qj, then it is part of Uj. Since we compute the one-to-one match, Uj contains exactly one point from each Pi, which implies that {U1, · · · , Um} is a chromatic partition on G. Let pij be the point in Pi ∩ Uj. Then the objective function in Definition 4 becomes

Σ_{i=1}^{n} Σ_{j=1}^{m} ||pij − qj|| = Σ_{j=1}^{m} Σ_{i=1}^{n} ||pij − qj|| = Σ_{j=1}^{m} Σ_{p∈Uj} ||p − qj||,    (2)
which is exactly the objective function in Definition 6. Thus, we have the following theorem.
Theorem 1. Step 2(b) in the algorithm of 1-prototype learning is equivalent to solving a chromatic
clustering problem.
Next, we give two constant approximation algorithms for the chromatic clustering problem; one is
randomized, and the other is deterministic.
Theorem 2. Let G = {G1, · · · , Gn} be an instance of chromatic clustering with each Gi consisting of m points in the space.
1. If Gl is randomly selected from G as the m median points, then with probability at least 1/2, Gl yields a 3-approximation for chromatic clustering on G.
2. If enumerating all point-sets in G as the m median points, there exists one Gi0 which yields a 2-approximation for chromatic clustering on G.
Proof. We consider the randomized algorithm first. Let {q1, · · · , qm} be the m median points in the optimal solution, and U1, · · · , Um be the corresponding chromatic partition. Let p_j^i = Gi ∩ Uj. Since the objective value is the sum of the total cost from all point-sets {G1, · · · , Gn}, by the Markov inequality, the contribution from Gl should be no more than 2 times the average cost with probability at least 1/2, i.e.,

Σ_{j=1}^{m} ||p_j^l − qj|| ≤ (2/n) Σ_{i=1}^{n} Σ_{j=1}^{m} ||p_j^i − qj||.    (3)
From (3) and the triangle inequality, if replacing each qj by p_j^l, the objective value becomes

Σ_{i=1}^{n} Σ_{j=1}^{m} ||p_j^i − p_j^l|| ≤ Σ_{i=1}^{n} Σ_{j=1}^{m} (||p_j^i − qj|| + ||qj − p_j^l||)    (4)

= Σ_{i=1}^{n} Σ_{j=1}^{m} ||p_j^i − qj|| + n · Σ_{j=1}^{m} ||qj − p_j^l|| ≤ 3 Σ_{i=1}^{n} Σ_{j=1}^{m} ||p_j^i − qj||,    (5)
where (4) follows from triangle inequality, and (5) follows from (3). Thus, the first part of the
theorem is true. The analysis for the deterministic algorithm is similar. The only difference is that
there must exist one point-set Gi0 whose contribution to the total cost is no more than the average
cost. Thus the constant in the right-hand side of (3) becomes 1 rather than 2, and consequently the
final approximation ratio in (5) turns to 2. Note that the desired Gi0 can be found by enumerating
all point-sets, and selecting the one having the smallest objective value.
□
Remark 1. Comparing the two approximation algorithms, we can see a tradeoff between the approximation ratio and the running time. The randomized algorithm has a larger approximation ratio, but a
linear dependence on n in its running time. The deterministic algorithm has a smaller approximation
ratio, but a quadratic dependence on n.
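The deterministic option in Theorem 2 is straightforward to implement: evaluate every input point-set as the candidate median set (each evaluation is one bipartite matching per point-set) and keep the best. A sketch, with our own function names; the matching step uses SciPy:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def chromatic_cost(G, medians):
    """Objective of Definition 6 for fixed medians (one matching per point-set)."""
    medians = np.asarray(medians, float)
    total = 0.0
    for Gi in G:
        D = np.linalg.norm(np.asarray(Gi, float)[:, None, :] - medians[None, :, :], axis=-1)
        r, c = linear_sum_assignment(D)
        total += D[r, c].sum()
    return total

def best_point_set_medians(G):
    """Theorem 2(2): enumerating all point-sets as medians gives a 2-approximation."""
    costs = [chromatic_cost(G, Gi) for Gi in G]
    i0 = int(np.argmin(costs))
    return i0, costs[i0]
```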
Local improvement. After finding a constant approximation, it is necessary to conduct some local improvement. An easy-to-implement method is as follows. Let Q̃ = {q̃1, · · · , q̃m} be the initial constant approximation solution. Compute the bipartite matching between Q̃ and each Gi. This yields a chromatic partition {Ũ1, · · · , Ũm} on G, where each Ũj consists of all the points matched to q̃j. By Definition 6, we know that q̃j should be the geometric median point of Ũj in order to make the objective value as low as possible. Thus, we can use the well-known Weiszfeld's algorithm [23] to compute the geometric median point for each Ũj, and update q̃j to be the corresponding geometric median point. We can iteratively perform the following two steps, (1) computing the chromatic partition and (2) generating the geometric median points, until the objective value becomes stable.
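Weiszfeld's update replaces the current estimate with a distance-weighted average of the points. A compact version, with a guard for landing exactly on a data point (where the weights blow up); the function name and iteration limits are ours:

```python
import numpy as np

def geometric_median(points, iters=200, eps=1e-12):
    """Weiszfeld's algorithm for the geometric median of a point set."""
    P = np.asarray(points, float)
    x = P.mean(0)                          # start from the centroid
    for _ in range(iters):
        d = np.linalg.norm(P - x, axis=1)
        if np.any(d < eps):                # x coincides with a data point
            return x
        w = 1.0 / d                        # inverse-distance weights
        x_new = (P * w[:, None]).sum(0) / w.sum()
        if np.linalg.norm(x_new - x) < eps:
            return x_new
        x = x_new
    return x
```

For an odd number of collinear points the geometric median is the middle point, which makes a simple correctness check.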
4
k-Prototype learning
In this section, we generalize the ideas for 1-prototype learning to k-prototype learning for some
k > 1. As mentioned in Section 1, our idea is to build a correlation graph. We first introduce the
following lemma.
Lemma 2. The alignment cost in Definition 3 satisfies the triangle inequality.
Correlation graph. We denote the correlation graph on the given m-rigid structures {P1, · · · , Pn} as Γ, which contains n vertices {v1, · · · , vn}. Each vi represents the rigid structure Pi, and the edge connecting vi and vj has a weight equal to A(Pi, Pj). From Lemma 2, we know that Γ is a metric graph. Thus, we have the following key theorem.

Theorem 3. Any α-approximation solution for metric k-median clustering on Γ yields a 2α-approximation solution for the k-prototype learning problem on {P1, · · · , Pn}, where α ≥ 1.
Proof. Let {Q1, · · · , Qk} be the k rigid structures yielded in an optimal solution of the k-prototype learning, and {C1, · · · , Ck} be the corresponding k optimal clusters. For each 1 ≤ j ≤ k, the cost of Cj is Σ_{Pi∈Cj} A(Pi, Qj). There exists one rigid structure P_{i_j} ∈ Cj such that

A(P_{i_j}, Qj) ≤ (1/|Cj|) Σ_{Pi∈Cj} A(Pi, Qj).    (6)
If we replace Qj by P_{i_j}, the cost of Cj becomes

Σ_{Pi∈Cj} A(Pi, P_{i_j}) ≤ Σ_{Pi∈Cj} (A(Pi, Qj) + A(Qj, P_{i_j})) ≤ 2 Σ_{Pi∈Cj} A(Pi, Qj),    (7)
where the first inequality follows from the triangle inequality (by Lemma 2), and the second inequality follows from (6). Then, (7) directly implies that
Σ_{j=1}^{k} Σ_{Pi∈Cj} A(Pi, P_{i_j}) ≤ 2 Σ_{j=1}^{k} Σ_{Pi∈Cj} A(Pi, Qj),    (8)
(8) is similar to the deterministic solution in Theorem 2; the only difference is that the point-sets
here need to be aligned through rigid transformation, while in Theorem 2, the point-sets are fixed.
Now, consider the correlation graph Γ. If we select {v_{i_1}, · · · , v_{i_k}} as the k medians, the objective value of the k-median clustering is the same as the left-hand side of (8). Let {v_{i′_1}, · · · , v_{i′_k}} be the k median vertices of the α-approximation solution on Γ. Then, we have

Σ_{i=1}^{n} min_{1≤j≤k} A(Pi, P_{i′_j}) ≤ α Σ_{i=1}^{n} min_{1≤j≤k} A(Pi, P_{i_j}) ≤ 2α Σ_{j=1}^{k} Σ_{Pi∈Cj} A(Pi, Qj),    (9)
where the second inequality follows from (8). Thus the theorem is true. □

Based on Theorem 3, we have the following algorithm for k-prototype learning.
Algorithm: k-prototype learning
1. Build the correlation graph Γ, and run the algorithm proposed in [9] to obtain a 6 2/3-approximation for the metric k-median clustering on Γ, and consequently a 13 1/3-approximation for k-prototype learning.
2. For each obtained cluster, run the 1-prototype learning algorithm presented in Section 3.
Remark 2. Note that there are several algorithms for metric k-median clustering with better approximation ratios (than 6 2/3), such as the ones in [19]. But they are all theoretical algorithms and are difficult to apply in practice. We choose the linear programming rounding based algorithm by Charikar et al. [9] partially due to its simplicity to implement for practical purposes.
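For a concrete end-to-end sketch, the metric k-median step can be replaced by a simple k-medoids alternation on the pairwise alignment-cost matrix. This is our practical stand-in; it does not carry the approximation guarantee of the LP-rounding algorithm of Charikar et al. [9] used in the statement above. In the pipeline, D would be filled with the pairwise alignment costs A(Pi, Pj); here it is just any symmetric distance matrix.

```python
import numpy as np

def k_medoids(D, k, iters=50):
    """Metric k-median heuristic on a precomputed symmetric distance matrix D."""
    n = D.shape[0]
    medoids = [0]                                   # farthest-first initialization
    while len(medoids) < k:
        dist_to_near = D[:, medoids].min(axis=1)
        medoids.append(int(dist_to_near.argmax()))
    for _ in range(iters):
        assign = D[:, medoids].argmin(axis=1)       # assign each item to nearest medoid
        new_medoids = []
        for j in range(k):
            members = np.where(assign == j)[0]
            if len(members) == 0:
                new_medoids.append(medoids[j])
                continue
            # medoid = member minimizing total distance within its cluster
            sub = D[np.ix_(members, members)].sum(axis=1)
            new_medoids.append(int(members[sub.argmin()]))
        if new_medoids == medoids:
            break
        medoids = new_medoids
    assign = D[:, medoids].argmin(axis=1)
    return medoids, assign
```

After clustering, the 1-prototype algorithm of Section 3 would be run inside each obtained cluster, as in Step 2.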
The exact correlation graph is not available. From the methods presented in Section 3.1, we
know that only approximate alignments can be obtained. This means that the exact correlation graph
Γ is not available. As a consequence, the approximate correlation graph may not be metric (due to possible violation of the triangle inequality). This seems to cause the above algorithm to yield a solution with no quality guarantee. Fortunately, as pointed out in [9], the LP-rounding method still yields a provably good approximation solution, as long as a weaker version of the triangle inequality is satisfied (i.e., for any three vertices va, vb and vc in Γ, their edge weights satisfy the inequality w(va vb) ≤ λ(w(va vc) + w(vb vc)) for some constant λ > 1, where w(va vb) is the weight of the edge connecting va and vb).
Theorem 4. For a given set of rigid structures, if a (1 + ε)-approximation of the alignment between any pair of rigid structures can be computed, then the algorithm for metric k-median clustering in [9] yields a 2((23/3)(1 + ε) − 1)(1 + ε)-approximation for the k-prototype learning problem.
What if the rigid structures have unequal sizes? In some scenario, the rigid structures may not
have the same number of points, and consequently the one-to-one match between rigid structures in
Definition 2 is not available. To resolve this issue, we can use the weight normalization strategy and
adopt the Earth Mover's Distance (EMD) [8]. Generally speaking, for any rigid structure Pi containing m′ points for some m′ ≠ m, we assign each point a weight equal to m/m′, and compute the
alignment cost based on EMD, rather than the bipartite matching cost. With this modification, both
the 1- and k-prototype learning algorithms still work.
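For the unequal-size variant, the transportation problem behind EMD can be written directly as a linear program. A sketch using scipy.optimize.linprog (the helper name is ours); per the weight normalization above, a structure with m′ points would get per-point weight m/m′ so both sides carry equal total mass:

```python
import numpy as np
from scipy.optimize import linprog

def emd(P, w, Q, v):
    """Earth Mover's Distance between weighted point sets of equal total weight,
    solved as a transportation linear program."""
    P, Q = np.asarray(P, float), np.asarray(Q, float)
    w, v = np.asarray(w, float), np.asarray(v, float)
    a, b = len(w), len(v)
    D = np.linalg.norm(P[:, None, :] - Q[None, :, :], axis=-1).ravel()
    # flow conservation: row i ships exactly w_i, column j receives exactly v_j
    A_eq = np.zeros((a + b, a * b))
    for i in range(a):
        A_eq[i, i * b:(i + 1) * b] = 1.0
    for j in range(b):
        A_eq[a + j, j::b] = 1.0
    b_eq = np.concatenate([w, v])
    res = linprog(D, A_eq=A_eq, b_eq=b_eq, bounds=(0, None), method="highs")
    return res.fun
```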
5
Experiments
To evaluate the performance of our proposed approach, we implement our algorithms on a Linux
workstation (with 2.4GHz CPU and 4GB memory). We consider two types of data, the sets of
randomly generated 3D rigid structures and a real biological data set which is used to determine the
organization pattern (among a population of cells) of chromosome territories inside the cell nucleus.
Random data. For random data, we test a number of data sets with different size. For each data
set, we first randomly generate k different rigid structures, {Q1, · · · , Qk}. Then around each point of Qj, j = 1, · · · , k, we generate a set of points following a Gaussian distribution with variance σ. We randomly select one point from each of the m Gaussian distributions (around the m points of Qj) to form an m-rigid structure, and transform it by a random rigid transformation. Thus, we build a cluster (denoted by Cj) of m-rigid structures around each Qj, and Qj can be viewed as its prototype (i.e., the ground truth). ∪_{j=1}^{k} Cj forms an instance of the k-prototype learning problem.
We run the algorithm of k-prototype learning in Section 4, and denote the resulting k rigid structures by {Q′1, · · · , Q′k}. To evaluate the performance, we compute the following two values. Firstly, we compute the bipartite matching cost, t1, between {Q1, · · · , Qk} and {Q′1, · · · , Q′k}, i.e., build the bipartite graph between {Q1, · · · , Qk} and {Q′1, · · · , Q′k}, and for each pair Qi and Q′j, connect an edge with a weight equal to the alignment cost A(Qi, Q′j). Secondly, we compute the average alignment cost (denoted by cj) between the rigid structures in Cj and Qj for 1 ≤ j ≤ k, and compute the sum t2 = Σ_{j=1}^{k} cj. Finally, we use the ratio t1/t2 to show the performance. The ratio indicates
how much cost (i.e., t1 ) has been reduced by our prototype learning algorithm, comparing to the
cost (i.e., t2 ) of the input rigid structures. We choose k = 1, 2, 3, 4, 5; for each k, vary m from 10
to 20, and the size of each Cj from 100 to 300. Also, for each Cj , we vary the Gaussian variance
from 10% to 30% of the average spread norm of Qj, where if we assume Qj contains m points {q1, · · · , qm} and o = (1/m) Σ_{l=1}^{m} ql, then the average spread norm is defined as (1/m) Σ_{l=1}^{m} ||ql − o||.
For each k, we generate 10 datasets, and plot the average experimental results in Figure 2(a). The
experiment suggests that our generated prototypes are much closer (at least 40% for each k) to the
ground truth than the input rigid structures.
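The data generator described above is easy to reproduce. A sketch, with our own parameter choices (e.g. the translation scale) where the text leaves them unspecified:

```python
import numpy as np

def random_rotation(rng):
    """Random 3D rotation via QR decomposition of a Gaussian matrix."""
    A, _ = np.linalg.qr(rng.normal(size=(3, 3)))
    if np.linalg.det(A) < 0:
        A[:, 0] *= -1.0                    # force det = +1 (no reflection)
    return A

def make_cluster(proto, n, sigma, rng):
    """n noisy, randomly transformed copies of an m-rigid-structure prototype."""
    proto = np.asarray(proto, float)
    out = []
    for _ in range(n):
        S = proto + rng.normal(scale=sigma, size=proto.shape)  # Gaussian around each point
        R, t = random_rotation(rng), rng.normal(scale=5.0, size=3)
        out.append(S @ R.T + t)            # random rigid transformation
    return np.stack(out)
```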
[Figure 2 omitted: (a) plot of the ratio t1/t2 versus k for the random data; (b) a 2D slice of a 3D microscopic image; (c) plot of the average alignment cost versus k for the biological data]

Fig. 2: (a) Experimental results for random data; (b) A 2D slice of the 3D microscopic image of 8 pairs of chromosome territories; (c) Average alignment cost for biological data set.
Biological data. For real data, we use a biological data set consisting of 91 microscopic nucleus
images of WI-38 lung fibroblasts cells. Each image includes 8 pairs of chromosome territories (see
Fig. 2(b)). The objective of this experiment is to determine whether there exists any spatial pattern
among the population of cells governing the organization of the chromosomes inside the 3D cell
nucleus so as to provide new evidence to resolve a longstanding conjecture in cell biology which says
that each chromosome territory has a preferable position inside the cell nucleus. For this purpose,
we calculate the gravity center of each chromosome territory and use it as the representative of
the chromosome. In this way, each cell is converted into a rigid structure of 16 points. Since there
is no ground truth for the biological data, we directly use the average alignment cost between our
generated solutions and the input rigid structures to evaluate the performance. We run our algorithms
for k = 1, 2, 3, 4, and plot the cost in Fig. 2(c). Our preliminary experiments indicate that there is a
significant reduction on the average cost from k = 1 to k = 2, and the cost does not change too much
for k = 2, 3, 4. We also analyze how chromosomes change their clusters when increasing k from 2 to 4. We denote the clusters for k = 2 as {C_1^2, C_2^2}, and the clusters for k = 4 as {C_1^4, C_2^4, C_3^4, C_4^4}. For each 1 ≤ j ≤ 4, we use |C_j^4 ∩ C_1^2| / |C_1^2| and |C_j^4 ∩ C_2^2| / |C_2^2| to represent the preservation of C_j^4 from C_1^2 and C_2^2, respectively. The following Table 1 shows the preservation (denoted by Pre) with respect to C_1^2 and C_2^2. It shows that C_4^4 preserved C_2^2 well; meanwhile, the union of {C_1^4, C_2^4, C_3^4} preserved C_1^2 well. This seems to suggest that all the cells are aggregated around two clusters.
Table 1: The preservations

Pre     C_1^4     C_2^4     C_3^4     C_4^4
C_1^2   26.53%    18.37%    46.94%    8.16%
C_2^2   0%        0%        5.56%     94.44%

6

Conclusion
In this paper, we study a new prototype learning problem, called k-prototype learning, for 3D rigid
structures, and present a practical optimization model for it. As the base case, we consider the 1-prototype learning problem, and reduce it to the chromatic clustering problem. Then we extend
1-prototype learning algorithm to k-prototype learning to achieve a quality guaranteed approximate
solution. Finally, we implement our algorithms on both random and biological data sets. Experiments suggest that our algorithms can effectively learn prototypes from both types of data.
References
[1] H. Alt and L. Guibas, Discrete geometric shapes: matching, interpolation, and approximation, in: J.-R. Sack,
J. Urrutia (Eds.), Handbook of Computational Geometry, Elsevier, Amsterdam, 1999, pp. 121-153.
[2] K. S. Arun, T. S. Huang and S. D. Blostein: Least-Squares Fitting of Two 3-D Point Sets. IEEE Trans.
Pattern Anal. Mach. Intell. (PAMI) 9(5):698-700, 1987.
[3] R. Berezney. Regulating the mammalian genome: the role of nuclear architecture. Advances in Enzyme
Regulation, 42:39-52, 2002.
[4] P.J. Besl and N.D. McKay, A method for registration of 3-d shapes, IEEE Trans. Pattern Anal. Mach. Intell.
14 (2) 239-256, 1992.
[5] S. Belongie, J. Malik and J. Puzicha. Shape Matching and Object Recognition Using Shape Contexts. IEEE
Trans. Pattern Anal. Mach. Intell. 24(4): 509-522 (2002)
[6] J. Croft, J. Bridger, S. Boyle, P. Perry, P. Teague and W. Bickmore. Differences in the localization and
morphology of chromosomes in the human nucleus. J. Cell. Biol., 145(6):1119-1131, 1999.
[7] T. Cremer, M. Cremer, S. Dietzel, S. Müller, I. Solovei, and S. Fakan. Chromosome territories - a functional nuclear landscape. Curr. Opin. Cell. Biol., 18(3):307-316, 2006.
[8] S. D. Cohen and L. J. Guibas. The Earth Mover's Distance under Transformation Sets. In ICCV, 1076-1083,
1999.
[9] M. Charikar, S. Guha, E. Tardos and D. B. Shmoys. A constant-factor approximation algorithm for the
k-median problem (extended abstract). In Proceedings of the thirtieth annual ACM symposium on Theory of
computing, STOC '99, pages 1-10, New York, NY, USA, 1999.
[10] H. Ding, B. Stojkovic, R. Berezney and J. Xu. Gauging Association Patterns of Chromosome Territories
via Chromatic Median. In CVPR 2013: 1296-1303
[11] M. F. Demirci, A. Shokoufandeh and S. J. Dickinson. Skeletal Shape Abstraction from Examples. IEEE
Trans. Pattern Anal. Mach. Intell. 31(5): 944-952, 2009.
[12] H. Ding and J. Xu. Solving the Chromatic Cone Clustering Problem via Minimum Spanning Sphere. In
ICALP (1) 2011: 773-784
[13] H. Ding and J. Xu. FPTAS for Minimizing Earth Mover's Distance under Rigid Transformations. In ESA
2013: 397-408
[14] M. Ferrer, D. Karatzas, E. Valveny, I. Bardaj and H. Bunke. A generic framework for median graph
computation based on a recursive embedding approach. Computer Vision and Image Understanding 115(7):
919-928, 2011.
[15] J. C. Gower. Generalized procrustes analysis. Psychometrika 40: 3331, 1975.
[16] R. I. Hartley and F. Kahl. Global Optimization through Rotation Space Search. International Journal of
Computer Vision 82(1): 64-79 (2009)
[17] T. Jiang, F. Jurie and C. Schmid. Learning shape prior models for object matching. In CVPR 2009: 848-855
[18] X. Jiang, A. Munger and H. Bunke. On Median Graphs: Properties, Algorithms, and Applications, IEEE
TPAMI, vol. 23, no. 10, pp. 1144-1151, Oct. 2001.
[19] S. Li and O. Svensson: Approximating k-median via pseudo-approximation. In STOC 2013: 901-910.
[20] D. Macrini, K. Siddiqi and S. J. Dickinson. From skeletons to bone graphs: Medial abstraction for object
recognition. In CVPR, 2008.
[21] T. B. Sebastian, P. N. Klein and B. B. Kimia. Recognition of Shapes by Editing Shock Graphs. In ICCV,
755-762, 2001.
[22] N. H. Trinh and B. B. Kimia. Learning Prototypical Shapes for Object Categories. In SMiCV, 2010.
[23] E. Weiszfeld. On the point for which the sum of the distances to n given points is minimum. Tohoku. Math.
Journal., 43:355-386, 1937.
[24] D. B. West, Introduction to Graph Theory , Prentice Hall, Chapter 3, ISBN 0-13-014400-2, 1999.
[25] M.J. Zeitz, L. Mukherjee, J. Xu, S. Bhattacharya and R. Berezney. A Probabilistic Model for the Arrangement of a Subset of Chromosome Territories in WI38 Human Fibroblasts. Journal of Cellular Physiology, 221,
120-129, 2009.
Distributed k-Means and k-Median Clustering on
General Topologies
Maria Florina Balcan, Steven Ehrlich, Yingyu Liang
School of Computer Science
Georgia Institute of Technology
Atlanta, GA 30332
{ninamf,sehrlich}@cc.gatech.edu,[email protected]
Abstract
This paper provides new algorithms for distributed clustering for two popular
center-based objectives, k-median and k-means. These algorithms have provable
guarantees and improve communication complexity over existing approaches.
Following a classic approach in clustering by [13], we reduce the problem of
finding a clustering with low cost to the problem of finding a coreset of small
size. We provide a distributed method for constructing a global coreset which
improves over the previous methods by reducing the communication complexity,
and which works over general communication topologies. Experimental results
on large scale data sets show that this approach outperforms other coreset-based
distributed clustering algorithms.
1
Introduction
Most classic clustering algorithms are designed for the centralized setting, but in recent years data
has become distributed over different locations, such as distributed databases [21, 5], images and
videos over networks [20], surveillance [11] and sensor networks [4, 12]. In many of these applications the data is inherently distributed because, as in sensor networks, it is collected at different
sites. As a consequence it has become crucial to develop clustering algorithms which are effective
in the distributed setting.
Several algorithms for distributed clustering have been proposed and empirically tested. Some of
these algorithms [10, 22, 7] are direct adaptations of centralized algorithms which rely on statistics
that are easy to compute in a distributed manner. Other algorithms [14, 17] generate summaries of
local data and transmit them to a central coordinator which then performs the clustering algorithm.
No theoretical guarantees are provided for the clustering quality in these algorithms, and they do
not try to minimize the communication cost. Additionally, most of these algorithms assume that
the distributed nodes can communicate with all other sites or that there is a central coordinator that
communicates with all other sites.
In this paper, we study the problem of distributed clustering where the data is distributed across
nodes whose communication is restricted to the edges of an arbitrary graph. We provide algorithms
with small communication cost and provable guarantees on the clustering quality. Our technique
for reducing communication in general graphs is based on the construction of a small set of points
which act as a proxy for the entire data set.
An ε-coreset is a weighted set of points whose cost on any set of centers is approximately the cost of the original data on those same centers, up to accuracy ε. Thus an approximate solution for the
coreset is also an approximate solution for the original data. Coresets have previously been studied
in the centralized setting ([13, 8]) but have also recently been used for distributed clustering as in
[23] and as implied by [9].

Figure 1: (a) Zhang et al. [23]: each node computes a coreset on the weighted pointset for its own data and its subtrees' coresets. (b) Our construction: local constant approximation solutions are computed, and the costs of these solutions are used to coordinate the construction of a local portion on each node. [Diagrams omitted.]

In this work, we propose a distributed algorithm for k-means and k-median, by which each node constructs a local portion of a global coreset. Communicating the
approximate cost of a global solution to each node is enough for the local construction, leading to
low communication cost overall. The nodes then share the local portions of the coreset, which can
be done efficiently in general graphs using a message passing approach.
More precisely, in Section 3, we propose a distributed coreset construction algorithm based on local
approximate solutions. Each node computes an approximate solution for its local data, and then
constructs the local portion of a coreset using only its local data and the total cost of each node's approximation. For constant ε, this builds a coreset of size Õ(kd + nk) for k-median and k-means when the data lies in d dimensions and is distributed over n sites. If there is a central coordinator among the n sites, then clustering can be performed on the coordinator by collecting the local portions of the coreset with a communication cost equal to the coreset size Õ(kd + nk). For distributed clustering over general connected topologies, we propose an algorithm based on the distributed coreset
construction and a message-passing approach, whose communication cost improves over previous
coreset-based algorithms. We provide a detailed comparison below.
Experimental results on large scale data sets show that our algorithm performs well in practice. For
a fixed amount of communication, our algorithm outperforms other coreset construction algorithms.
Comparison to Other Coreset Algorithms: Since coresets summarize local information they are
a natural tool to use when trying to reduce communication complexity. If each node constructs an ε-coreset on its local data, then the union of these coresets is clearly an ε-coreset for the entire data set.
Unfortunately the size of the coreset in this approach increases greatly with the number of nodes.
Another approach is the one presented in [23]. Its main idea is to approximate the union of local
coresets with another coreset. They assume nodes communicate over a rooted tree, with each node
passing its coreset to its parent. Because the approximation factor of the constructed coreset depends
on the quality of its component coresets, the accuracy a coreset needs (and thus the overall communication complexity) scales with the height of this tree. Although it is possible to find a spanning
tree in any communication network, when the graph has large diameter every tree has large height. In particular, many natural networks such as grid networks have a large diameter (Ω(√n) for grids)
which greatly increases the size of the local coresets. We show that it is possible to construct a global
coreset with low communication overhead. This is done by distributing the coreset construction procedure rather than combining local coresets. The communication needed to construct this coreset is
negligible: just a single value from each data set, representing the approximate cost of their local optimal clustering. Since the sampled global ε-coreset is the same size as any local ε-coreset, this
leads to an improvement of the communication cost over the other approaches. See Figure 1 for an
illustration. The constructed coreset is smaller by a factor of n in general graphs, and is independent
of the communication topology. This method excels in sparse networks with large diameters, where
the previous approach in [23] requires coresets that are quadratic in the size of the diameter for
k-median and quartic for k-means; see Section 4 for details. [9] also merge coresets using coreset
construction, but they do so in a model of parallel computation and ignore communication costs.
Balcan et al. [3] and Daume et al. [6] consider communication complexity questions arising when
doing classification in distributed settings. In concurrent and independent work, Kannan and Vem2
pala [15] study several optimization problems in distributed settings, including k-means clustering
under an interesting separability assumption.
2 Preliminaries
Let d(p, q) denote the Euclidean distance between any two points p, q ∈ R^d. The goal of k-means clustering is to find a set of k centers x = {x_1, x_2, ..., x_k} which minimizes the k-means cost of a data set P ⊆ R^d. Here the k-means cost is defined as cost(P, x) = Σ_{p∈P} d(p, x)^2, where d(p, x) = min_{x̂∈x} d(p, x̂). If P is a weighted data set with a weighting function w, then the k-means cost is defined as Σ_{p∈P} w(p) d(p, x)^2. Similarly, the k-median cost is defined as Σ_{p∈P} d(p, x). Both
k-means and k-median cost functions are known to be NP-hard to minimize (see for example [2]).
For both objectives, there exist several readily available polynomial-time algorithms that achieve
constant approximation solutions (see for example [16, 18]).
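As a concrete reference for these two objectives, they can be computed directly; the following is a minimal sketch in Python/NumPy (function and variable names are ours, not from the paper):

```python
import numpy as np

def kmeans_cost(P, x, w=None):
    """Sum of (optionally weighted) squared distances to the nearest center."""
    # d2[i, j] = squared Euclidean distance from point i to center j
    d2 = ((P[:, None, :] - x[None, :, :]) ** 2).sum(axis=2)
    nearest = d2.min(axis=1)
    return float(nearest.sum() if w is None else (w * nearest).sum())

def kmedian_cost(P, x, w=None):
    """Sum of (optionally weighted) distances to the nearest center."""
    d = np.sqrt(((P[:, None, :] - x[None, :, :]) ** 2).sum(axis=2))
    nearest = d.min(axis=1)
    return float(nearest.sum() if w is None else (w * nearest).sum())

P = np.array([[0.0, 0.0], [0.0, 2.0], [10.0, 0.0]])
x = np.array([[0.0, 1.0], [10.0, 0.0]])
print(kmeans_cost(P, x))   # 1 + 1 + 0 = 2.0
print(kmedian_cost(P, x))  # 1 + 1 + 0 = 2.0
```

Weights enter exactly as in the definitions above: a point with weight w(p) contributes w(p) times its (squared) distance.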
In distributed clustering, we consider a set of n nodes V = {v_i : 1 ≤ i ≤ n} which communicate on an undirected connected graph G = (V, E) with m = |E| edges. More precisely, an edge (v_i, v_j) ∈ E indicates that v_i and v_j can communicate with each other. Here we measure the communication cost in the number of points transmitted, and assume for simplicity that there is no latency in the communication. On each node v_i there is a local data set P_i, and the global data set is P = ∪_{i=1}^n P_i. The goal is to find a set of k centers x which optimize cost(P, x) while keeping
the computation efficient and the communication cost as low as possible. Our focus is to reduce the
communication cost while preserving theoretical guarantees for approximating clustering cost.
Coresets: For the distributed clustering task, a natural approach to avoid broadcasting raw data is
to generate a local summary of the relevant information. If each site computes a summary for their
own data set and then communicates this to a central coordinator, a solution can be computed from
a much smaller amount of data, drastically reducing the communication.
In the centralized setting, the idea of summarization with respect to the clustering task is captured
by the concept of coresets [13, 8]. A coreset is a set of weighted points whose cost approximates the
cost of the original data for any set of k centers. The formal definition of coresets is:
Definition 1 (coreset). An ε-coreset for a set of points P with respect to a center-based cost function is a set of points S and a set of weights w : S → R such that for any set of centers x, we have (1 − ε) cost(P, x) ≤ Σ_{p∈S} w(p) cost(p, x) ≤ (1 + ε) cost(P, x).
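Definition 1 quantifies over all center sets x, which cannot be checked exhaustively; a Monte Carlo spot-check over random centers is still a useful sanity test. A hedged sketch (our own code, not from the paper; `check_coreset` only certifies the inequality on the sampled centers, not for all x):

```python
import numpy as np

def kmeans_cost(P, x, w=None):
    d2 = ((P[:, None, :] - x[None, :, :]) ** 2).sum(axis=2).min(axis=1)
    return float(d2.sum() if w is None else (w * d2).sum())

def check_coreset(P, S, w, eps, k=2, trials=200, seed=0):
    """Spot-check (1-eps)cost(P,x) <= sum_q w_q cost(q,x) <= (1+eps)cost(P,x)
    on `trials` randomly drawn center sets x inside the bounding box of P."""
    rng = np.random.default_rng(seed)
    lo, hi = P.min(axis=0), P.max(axis=0)
    for _ in range(trials):
        x = rng.uniform(lo, hi, size=(k, P.shape[1]))
        full, summ = kmeans_cost(P, x), kmeans_cost(S, x, w)
        if not (1 - eps) * full <= summ <= (1 + eps) * full:
            return False
    return True

P = np.random.default_rng(1).normal(size=(500, 2))
# The trivial summary S = P with unit weights satisfies the definition exactly,
# while a tiny unit-weight subset grossly underestimates the cost.
ok = check_coreset(P, P, np.ones(len(P)), eps=0.05)
bad = check_coreset(P, P[:10], np.ones(10), eps=0.05)
print(ok, bad)
```

The point of the weights w is visible in the failing case: a subset alone is not a coreset; it must be reweighted so its weighted cost tracks the full cost.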
In the centralized setting, many coreset construction algorithms have been proposed for k-median,
k-means and some other cost functions. For example, for points in R^d, algorithms in [8] construct coresets of size Õ(kd/ε^4) for k-means and coresets of size Õ(kd/ε^2) for k-median. In the distributed setting, it is natural to ask whether there exists an algorithm that constructs a small coreset
for the entire point set but still has low communication cost. Note that the union of coresets for multiple data sets is a coreset for the union of the data sets. The immediate construction of combining
the local coresets from each node would produce a global coreset whose size was larger by a factor
of n, greatly increasing the communication complexity. We present a distributed algorithm which
constructs a global coreset the same size as the centralized construction and only needs a single
value¹ communicated to each node. This serves as the basis for our distributed clustering algorithm.
3 Distributed Coreset Construction
Here we design a distributed coreset construction algorithm for k-means and k-median. The underlying technique can be extended to other additive clustering objectives such as k-line median.
To gain some intuition on the distributed coreset construction algorithm, we briefly review the construction algorithm in [8] in the centralized setting. The coreset is constructed by computing a
constant approximation solution for the entire data set, and then sampling points proportional to
their contributions to the cost of this solution. Intuitively, the points close to the nearest centers can
be approximately represented by the centers while points far away cannot be well represented. Thus,
points should be sampled with probability proportional to their contributions to the cost. Directly
adapting the algorithm to the distributed setting would require computing a constant approximation
¹ The value that is communicated is the sum of the costs of approximations to the local optimal clustering. This is guaranteed to be no more than a constant factor times larger than the optimal cost.
Algorithm 1 Communication-aware distributed coreset construction
Input: Local datasets {P_i, 1 ≤ i ≤ n}, parameter t (number of points to be sampled).
Round 1: on each node v_i ∈ V
• Compute a constant approximation B_i for P_i. Communicate cost(P_i, B_i) to all other nodes.
Round 2: on each node v_i ∈ V
• Set t_i = t · cost(P_i, B_i) / Σ_{j=1}^n cost(P_j, B_j), and m_p = cost(p, B_i) for all p ∈ P_i.
• Pick a non-uniform random sample S_i of t_i points from P_i, where for every q ∈ S_i and p ∈ P_i, we have q = p with probability m_p / Σ_{z∈P_i} m_z. Let w_q = (Σ_i Σ_{z∈P_i} m_z) / (t · m_q) for each q ∈ S_i.
• For each b ∈ B_i, let P_b = {p ∈ P_i : d(p, b) = d(p, B_i)} and w_b = |P_b| − Σ_{q∈P_b∩S} w_q.
Output: Distributed coreset: points S_i ∪ B_i with weights {w_q : q ∈ S_i ∪ B_i}, 1 ≤ i ≤ n.
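To make the two rounds concrete, here is a small single-process simulation of Algorithm 1 (Python/NumPy; we substitute a few Lloyd iterations for the constant-approximation subroutine, which is an assumption of this sketch rather than the paper's prescribed choice):

```python
import numpy as np

def local_approx(P, k, rng, iters=5):
    """Cheap stand-in for the constant-approximation B_i: Lloyd's from a random init."""
    B = P[rng.choice(len(P), size=min(k, len(P)), replace=False)].copy()
    for _ in range(iters):
        lab = ((P[:, None, :] - B[None, :, :]) ** 2).sum(axis=2).argmin(axis=1)
        for j in range(len(B)):
            if (lab == j).any():
                B[j] = P[lab == j].mean(axis=0)
    return B

def distributed_coreset(local_sets, k, t, seed=0):
    rng = np.random.default_rng(seed)
    Bs = [local_approx(P, k, rng) for P in local_sets]
    # m_p = cost(p, B_i); Round 1 only shares each node's total local cost.
    ms = [((P[:, None, :] - B[None, :, :]) ** 2).sum(axis=2).min(axis=1)
          for P, B in zip(local_sets, Bs)]
    total = sum(m.sum() for m in ms)  # the single value every node learns
    pts, wts = [], []
    for P, B, m in zip(local_sets, Bs, ms):
        ti = max(1, int(round(t * m.sum() / total)))
        idx = rng.choice(len(P), size=ti, p=m / m.sum())
        w_q = total / (t * m[idx])  # importance weights for sampled points
        lab = ((P[:, None, :] - B[None, :, :]) ** 2).sum(axis=2).argmin(axis=1)
        # w_b = |P_b| minus the sampled weight already assigned to cluster b
        w_b = np.array([(lab == j).sum() - w_q[lab[idx] == j].sum()
                        for j in range(len(B))])
        pts += [P[idx], B]
        wts += [w_q, w_b]
    return np.vstack(pts), np.concatenate(wts)

rng = np.random.default_rng(0)
local_sets = [rng.normal(loc=c, size=(200, 2)) for c in (0.0, 5.0, 10.0)]
S, w = distributed_coreset(local_sets, k=3, t=100)
print(S.shape, float(w.sum()))  # total weight equals |P| = 600 by construction
```

Note that the center weights w_b exactly cancel the sampled weight per cluster, so each node's portion carries total weight |P_i|, mirroring the algorithm's weighting.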
solution for the entire data set. We show that a global coreset can be constructed in a distributed
fashion by estimating the weight of the entire data set with the sum of local approximations. With
this approach, it suffices for nodes to communicate the total costs of their local solutions.
Theorem 1. For distributed k-means and k-median clustering on a graph, there exists an algorithm such that with probability at least 1 − δ, the union of its output on all nodes is an ε-coreset for P = ∪_{i=1}^n P_i. The size of the coreset is O((1/ε^4)(kd + log(1/δ)) + nk log(nk/δ)) for k-means, and O((1/ε^2)(kd + log(1/δ)) + nk) for k-median. The total communication cost is O(mn).

As described below, the distributed coreset construction can be achieved by using Algorithm 1 with an appropriate t, namely O((1/ε^4)(kd + log(1/δ)) + nk log(nk/δ)) for k-means and O((1/ε^2)(kd + log(1/δ))) for k-median. Due to space limitations, we describe a proof sketch highlighting the intuition and provide the details in the supplementary material.
Proof Sketch of Theorem 1: The analysis relies on the definition of the pseudo-dimension of a
function space and a sampling lemma.
Definition 2 ([19, 8]). Let F be a finite set of functions from a set P to R_{≥0}. For f ∈ F, let B(f, r) = {p : f(p) ≤ r}. The dimension dim(F, P) of the function space is the smallest integer d such that for any G ⊆ P, |{G ∩ B(f, r) : f ∈ F, r ≥ 0}| ≤ |G|^d.
Suppose we draw a sample S according to {m_p : p ∈ P}, namely for each q ∈ S and p ∈ P, q = p with probability m_p / Σ_{z∈P} m_z. Set the weights of the points as w_p = (Σ_{z∈P} m_z) / (m_p |S|) for p ∈ P. Then for any f ∈ F, the expectation of the weighted cost of S equals the cost of the original data P, since

E[Σ_{q∈S} w_q f(q)] = Σ_{q∈S} E[w_q f(q)] = Σ_{q∈S} Σ_{p∈P} Pr[q = p] w_p f(p) = Σ_{p∈P} f(p).
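This unbiasedness is easy to confirm by simulation; the sketch below (our own toy f and m, not from the paper) averages the weighted cost over many draws and compares it to Σ_{p∈P} f(p):

```python
import numpy as np

rng = np.random.default_rng(0)
f = rng.uniform(1.0, 2.0, size=50)   # f(p) for each of 50 points
m = rng.uniform(0.5, 3.0, size=50)   # sampling weights m_p
probs = m / m.sum()

def weighted_cost(sample_size=20):
    # Draw S ~ probs; weight each q by (sum_z m_z) / (m_q * |S|).
    idx = rng.choice(len(f), size=sample_size, p=probs)
    w = m.sum() / (m[idx] * sample_size)
    return float((w * f[idx]).sum())

est = np.mean([weighted_cost() for _ in range(20000)])
print(est, f.sum())  # the two agree up to Monte Carlo error
```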
If the sample size is large enough, then we also have concentration for any f ∈ F. The lemma is implicit in [8] and we include the proof in the supplementary material.

Lemma 1. Fix a set F of functions f : P → R_{≥0}. Let S be a sample drawn i.i.d. from P according to {m_p ∈ R_{≥0} : p ∈ P}: for each q ∈ S and p ∈ P, q = p with probability m_p / Σ_{z∈P} m_z. Let w_p = (Σ_{z∈P} m_z) / (m_p |S|) for p ∈ P. For a sufficiently large constant c, if |S| ≥ (c/ε^2)(dim(F, P) + log(1/δ)), then with probability at least 1 − δ,

∀f ∈ F: |Σ_{p∈P} f(p) − Σ_{q∈S} w_q f(q)| ≤ ε (max_{p∈P} f(p)/m_p) (Σ_{p∈P} m_p).

To get a small bound on the difference between Σ_{p∈P} f(p) and Σ_{q∈S} w_q f(q), we need to choose m_p such that Σ_{p∈P} m_p is small and max_{p∈P} f(p)/m_p is bounded. More precisely, if we choose m_p = max_{f∈F} f(p), then the difference is bounded by ε Σ_{p∈P} m_p.
We first consider the centralized setting and review how [8] applied the lemma to construct a coreset for k-median as in Definition 1. A natural approach is to apply the lemma directly to the cost f_x(p) := cost(p, x). The problem is that a suitable upper bound m_p is not available for cost(p, x). However, we can still apply the lemma to a different set of functions defined as follows. Let b_p denote the closest center to p in the approximation solution. Aiming to approximate the error Σ_p [cost(p, x) − cost(b_p, x)] rather than Σ_p cost(p, x) directly, we define f_x(p) := cost(p, x) − cost(b_p, x) + cost(p, b_p), where cost(p, b_p) is added so that f_x(p) ≥ 0. Since 0 ≤ f_x(p) ≤ 2 cost(p, b_p), we can apply the lemma with m_p = 2 cost(p, b_p). This bounds the difference |Σ_{p∈P} f_x(p) − Σ_{q∈S} w_q f_x(q)| by 2ε Σ_{p∈P} cost(p, b_p), so we have an O(ε)-approximation.
Note that Σ_{p∈P} f_x(p) − Σ_{q∈S} w_q f_x(q) does not equal Σ_{p∈P} cost(p, x) − Σ_{q∈S} w_q cost(q, x). However, it equals the difference between Σ_{p∈P} cost(p, x) and a weighted cost of the sampled points and the centers in the approximation solution. To get a coreset as in Definition 1, we need to add the centers of the approximation solution with specific weights to the coreset. Then when the sample is sufficiently large, the union of the sampled points and the centers is an ε-coreset.
Our key contribution in this paper is to show that in the distributed setting, it suffices to choose b_p from the local approximation solution for the local data set containing p, rather than from an approximation solution for the global data set. Furthermore, the sampling and the weighting of the coreset points can be done in a local manner. In the following, we provide a formal verification of the discussion above. We have the following lemma for k-median with F = {f_x : f_x(p) = d(p, x) − d(b_p, x) + d(p, b_p), x ∈ (R^d)^k}.

Lemma 2. For k-median, the output of Algorithm 1 is an ε-coreset with probability at least 1 − δ, if t ≥ (c/ε^2)(dim(F, P) + log(1/δ)) for a sufficiently large constant c.
Proof Sketch of Lemma 2: We want to show that for any set of centers x, the true cost for using these centers is well approximated by the cost on the weighted coreset. Note that our coreset has two types of points: sampled points q ∈ S = ∪_{i=1}^n S_i with weight w_q := (Σ_{z∈P} m_z)/(m_q |S|), and local solution centers b ∈ B = ∪_{i=1}^n B_i with weight w_b := |P_b| − Σ_{q∈S∩P_b} w_q. We use b_p to represent the nearest center to p in the local approximation solution, and P_b to represent the set of points which have b as their closest center in the local approximation solution.

As mentioned above, we construct f_x(p) to be the difference between the cost of p and the cost of b_p so that Lemma 1 can be applied. Note that the centers are weighted such that

Σ_{b∈B} w_b d(b, x) = Σ_{b∈B} |P_b| d(b, x) − Σ_{b∈B} Σ_{q∈S∩P_b} w_q d(b, x) = Σ_{p∈P} d(b_p, x) − Σ_{q∈S} w_q d(b_q, x).

Taken together with the fact that Σ_{p∈P} m_p = Σ_{q∈S} w_q m_q, we can show that

Σ_{p∈P} d(p, x) − Σ_{q∈S∪B} w_q d(q, x) = Σ_{p∈P} f_x(p) − Σ_{q∈S} w_q f_x(q).

Note that 0 ≤ f_x(p) ≤ 2 d(p, b_p) by the triangle inequality, and S is sufficiently large and chosen according to weights m_p = d(p, b_p), so the conditions of Lemma 1 are met. Thus we can conclude that

|Σ_{p∈P} d(p, x) − Σ_{q∈S∪B} w_q d(q, x)| ≤ O(ε) Σ_{p∈P} d(p, x),

as desired.
In [8] it is shown that dim(F, P) = O(kd). Therefore, by Lemma 2, when |S| ≥ O((1/ε^2)(kd + log(1/δ))), the weighted cost of S ∪ B approximates the k-median cost of P for any set of centers, so (S ∪ B, w) becomes an ε-coreset for P. The total communication cost is bounded by O(mn), since even in the most general case where every node only knows its neighbors, we can broadcast the local costs with O(mn) communication (see Algorithm 3).
Proof Sketch for k-means: Similar methods prove that for k-means, when t = O((1/ε^4)(kd + log(1/δ)) + nk log(nk/δ)), the algorithm constructs an ε-coreset with probability at least 1 − δ. The key difference is that the triangle inequality does not apply directly to the k-means cost, so the error |cost(p, x) − cost(b_p, x)|, and thus f_x(p), is not bounded. The main change to the analysis is that we divide the points into two categories: good points, whose costs approximately satisfy the triangle inequality (up to a factor of 1/ε), and bad points. The good points for a fixed set of centers x are defined as G(x) = {p ∈ P : |cost(p, x) − cost(b_p, x)| ≤ Δ_p}, where the upper bound is Δ_p = cost(p, b_p)/ε, and the analysis follows as in Lemma 2. For bad points we can show that the difference in cost must still be small, namely O(ε min{cost(p, x), cost(b_p, x)}).
More formally, let f_x(p) = cost(p, x) − cost(b_p, x) + Δ_p, and let g_x(p) equal f_x(p) if p ∈ G(x) and 0 otherwise. Then Σ_{p∈P} cost(p, x) − Σ_{q∈S∪B} w_q cost(q, x) decomposes into the sum (A) + (B) + (C), where

(A) = Σ_{p∈P} g_x(p) − Σ_{q∈S} w_q g_x(q),  (B) = Σ_{p∈P\G(x)} f_x(p),  (C) = −Σ_{q∈S\G(x)} w_q f_x(q).
Algorithm 2 Distributed clustering on a graph
Input: {P_i, 1 ≤ i ≤ n}: local datasets; {N_i, 1 ≤ i ≤ n}: the neighbors of v_i; A_α: an α-approximation algorithm for weighted clustering instances.
Round 1: on each node v_i
• Construct its local portion D_i of an (ε/2)-coreset by Algorithm 1, using Message-Passing for communicating the local costs.
Round 2: on each node v_i
• Call Message-Passing(D_i, N_i). Compute x = A_α(∪_j D_j).
Output: x

Algorithm 3 Message-Passing(I_i, N_i)
Input: I_i is the message, N_i are the neighbors.
• Let R_i denote the information received. Initialize R_i = {I_i}, and send I_i to N_i.
• While R_i ≠ {I_j : 1 ≤ j ≤ n}: if a message I_j ∉ R_i is received, let R_i = R_i ∪ {I_j} and send I_j to N_i.
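Algorithm 3 is a standard flooding protocol; a synchronous-round simulation over an arbitrary connected graph (our own sketch, not the paper's code) shows that every node eventually holds all n messages:

```python
def flood(adj, messages):
    """Synchronous flooding: each round, every node forwards the messages it
    received last round to all its neighbors; stops when all nodes know all."""
    n = len(adj)
    have = [{i: messages[i]} for i in range(n)]   # what each node knows
    new = [dict(h) for h in have]                 # newly learned last round
    rounds = 0
    while any(len(h) < n for h in have):
        nxt = [dict() for _ in range(n)]
        for i in range(n):
            for j in adj[i]:
                for key, val in new[i].items():
                    if key not in have[j]:
                        nxt[j][key] = val
        for j in range(n):
            have[j].update(nxt[j])
        new = nxt
        rounds += 1
    return have, rounds

# Path graph 0-1-2-3 (diameter 3): flooding finishes in 3 rounds.
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
have, rounds = flood(adj, {i: f"cost_{i}" for i in range(4)})
print(rounds, all(len(h) == 4 for h in have))
```

Because a node only forwards messages it has just learned, each message crosses each edge a bounded number of times, which is what keeps the communication proportional to the message size times the number of edges.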
Lemma 1 bounds (A) by O(ε) cost(P, x), but we need an accuracy of ε^2 to compensate for the 1/ε factor in the upper bound of f_x(p). This leads to an O(1/ε^4) factor in the sample complexity. For (B) and (C), |cost(p, x) − cost(b_p, x)| > Δ_p since p ∉ G(x). This can be used to show that p and b_p are close to each other and far away from x, and thus |cost(p, x) − cost(b_p, x)| is O(ε) smaller than cost(p, x) and cost(b_p, x). This fact bounds (B) by O(ε) cost(P, x). It also bounds (C), noting that E[Σ_{q∈P_b∩S} w_q] = |P_b|, and thus Σ_{q∈P_b∩S} w_q ≤ 2|P_b| when t ≥ O(nk log(nk/δ)). The proof is completed by bounding the function space dimension by O(kd) as in [8].
4 Effect of Network Topology on Communication Cost
If there is a central coordinator in the communication graph, then we can run the distributed coreset construction algorithm and send the local portions of the coreset to the coordinator, which can perform
the clustering task. The total communication cost is just the size of the coreset.
In this section, we consider distributed clustering over arbitrary connected topologies. We propose
to use a message passing approach for collecting information for coreset construction and sharing
the local portions of the coreset. The details are presented in Algorithm 2 and 3. Since each piece
of the coreset is shared at most twice across any particular edge in message passing, we have
Theorem 2. Given an α-approximation algorithm for weighted k-means (respectively, k-median) as a subroutine, there exists an algorithm that with probability at least 1 − δ outputs a (1 + ε)α-approximation solution for distributed k-means (respectively, k-median). The communication cost is O(m((1/ε^4)(kd + log(1/δ)) + nk log(nk/δ))) for k-means, and O(m((1/ε^2)(kd + log(1/δ)) + nk)) for k-median.
In contrast, an approach where each node constructs an ε-coreset for k-means and sends it to the other nodes incurs a communication cost of Õ(mnkd/ε^4). Our algorithm significantly reduces this.
Our algorithm can also be applied on a rooted tree: we can send the coreset portions to the root
which then applies an approximation algorithm. Since each portion is transmitted at most h times,
Theorem 3. Given an α-approximation algorithm for weighted k-means (respectively, k-median) as a subroutine, there exists an algorithm that with probability at least 1 − δ outputs a (1 + ε)α-approximation solution for distributed k-means (respectively, k-median) clustering on a rooted tree of height h. The total communication cost is O(h((1/ε^4)(kd + log(1/δ)) + nk log(nk/δ))) for k-means, and O(h((1/ε^2)(kd + log(1/δ)) + nk)) for k-median.
Our approach improves over the cost of Õ(nh^4 kd/ε^4) for k-means and Õ(nh^2 kd/ε^2) for k-median in [23].² The algorithm in [23] builds on each node a coreset for the union of coresets from its children, and thus needs O(ε/h) accuracy to prevent the accumulation of errors. Since the coreset construction subroutine has quadratic dependence on 1/ε for k-median (quartic for k-means), their algorithm then has quadratic dependence on h (quartic for k-means). Our algorithm does not build coresets on top of coresets, resulting in a better dependence on the height of the tree h.

In a general graph, any rooted tree will have its height h at least as large as half the diameter. For sensors in a grid network, this implies h = Ω(√n). In this case, our algorithm gains a significant improvement over existing algorithms.

² Their algorithm used coreset construction as a subroutine; the construction algorithm they used builds a coreset of size Õ((nkh/ε^d) log |P|). Throughout this paper, when we compare to [23] we assume they use the coreset construction technique of [8] to reduce their coreset size and communication cost.
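To see the magnitude of the improvement, one can plug representative values into the (constant- and log-free) leading terms of these bounds. This back-of-the-envelope sketch is ours, not a computation from the paper:

```python
from math import sqrt

def ours_kmeans(h, k, d, n, eps):
    # Theorem 3 leading terms (constants and log factors dropped): h * (kd/eps^4 + nk)
    return h * (k * d / eps**4 + n * k)

def zhang_kmeans(h, k, d, n, eps):
    # Cost discussed above for [23] (constants/logs dropped): n * h^4 * kd / eps^4
    return n * h**4 * k * d / eps**4

n, k, d, eps = 100, 10, 50, 0.1
h = int(sqrt(n))  # grid-like network: tree height at least about sqrt(n)
ratio = ours_kmeans(h, k, d, n, eps) / zhang_kmeans(h, k, d, n, eps)
print(ratio)  # roughly 1/(n h^3): several orders of magnitude less communication
```

The gap grows with the height h, which is exactly the regime (sparse, large-diameter networks) the paper targets.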
5 Experiments
Here we evaluate the effectiveness of our algorithm and compare it to other distributed coreset algorithms. We present the k-means cost of the solution by our algorithm with varying communication
cost, and compare to those of other algorithms when they use the same amount of communication.
Data sets: We present results on YearPredictionMSD (515,345 points in R^90, k = 50). Similar
results are observed on five other datasets, which are presented in the supplementary material.
Experimental Methodology: We first generate a communication graph connecting local sites, and
then partition the data into local data sets. The algorithms were evaluated on Erdős-Rényi random
graphs with p = 0.3, grid graphs, and graphs generated by the preferential attachment mechanism [1]. We used 100 sites for YearPredictionMSD.
The data is then distributed over the local sites. There are four partition methods: uniform,
similarity-based, weighted, and degree-based. In all methods, each example is distributed to the
local sites with probability proportional to the site's weight. In uniform partition, the sites have
equal weights; in similarity-based partition, each site has an associated data point randomly selected
from the global data and the weight is the similarity to the associated point; in weighted partition,
the weights are chosen from |N(0, 1)|; in degree-based, the weights are the sites' degrees.
To measure the quality of the coreset generated, we run Lloyd's algorithm on the coreset and the
global data respectively to get two solutions, and compute the ratio between the costs of the two
solutions over the global data. The average ratio over 30 runs is then reported. We compare our
algorithm with COMBINE, the method of combining coresets from local data sets, and with the
algorithm of [23] (Zhang et al.). When running the algorithm of Zhang et al., we restrict the network
to a spanning tree by picking a root uniformly at random and performing a breadth first search.
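The evaluation loop can be sketched as follows (our own minimal weighted Lloyd's in Python/NumPy, standing in for the experimental pipeline; the uniformly reweighted subsample here is just a stand-in summary, not the paper's coreset):

```python
import numpy as np

def lloyd(P, k, w=None, iters=25):
    """Weighted Lloyd's iterations from a deterministic spread-out init."""
    w = np.ones(len(P)) if w is None else w
    order = np.argsort(P[:, 0])
    x = P[order[np.linspace(0, len(P) - 1, k).astype(int)]].copy()
    for _ in range(iters):
        lab = ((P[:, None, :] - x[None, :, :]) ** 2).sum(axis=2).argmin(axis=1)
        for j in range(k):
            mask = lab == j
            if w[mask].sum() > 0:
                x[j] = (w[mask][:, None] * P[mask]).sum(axis=0) / w[mask].sum()
    return x

def cost(P, x):
    return float(((P[:, None, :] - x[None, :, :]) ** 2).sum(axis=2).min(axis=1).sum())

rng = np.random.default_rng(1)
P = np.vstack([rng.normal(loc=c, scale=0.5, size=(300, 2)) for c in (0.0, 100.0)])
idx = rng.choice(len(P), size=60, replace=False)
S, w = P[idx], np.full(60, len(P) / 60.0)  # reweighted uniform subsample as summary
ratio = cost(P, lloyd(S, 2, w)) / cost(P, lloyd(P, 2))
print(round(ratio, 3))  # close to 1 when the summary represents the data well
```

The reported metric in the figures below is exactly this kind of cost ratio, averaged over repeated runs.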
Results: Figure 2 shows the results over different network topologies and partition methods. We
observe that the algorithms perform well with much smaller coreset sizes than predicted by the
theoretical bounds. For example, to get 1.1 cost ratio, the coreset size and thus the communication
needed is only 0.1%-1% of the theoretical bound.
In the uniform partition, our algorithm performs nearly the same as COMBINE. This is not surprising since our algorithm reduces to the COMBINE algorithm when each local site has the same cost
and the two algorithms use the same amount of communication. In this case, since in our algorithm
the sizes of the local samples are proportional to the costs of the local solutions, it samples the same
number of points from each local data set. This is equivalent to the COMBINE algorithm with the
same amount of communication. In the similarity-based partition, similar results are observed as it
also leads to balanced local costs. However, when the local sites have significantly different costs (as
in the weighted and degree-based partitions), our algorithm outperforms COMBINE. As observed
in Figure 2, the costs of our solutions consistently improve over those of COMBINE by 2%-5%.
Our algorithm then saves 10%-20% communication cost to achieve the same approximation ratio.
Figure 3 shows the results over the spanning trees of the graphs. Our algorithm performs much better
than the algorithm of Zhang et al., achieving about 20% improvement in cost. This is due to the fact
that their algorithm needs larger coresets to prevent the accumulation of errors when constructing
coresets from component coresets, and thus needs higher communication cost to achieve the same
approximation ratio.
Acknowledgements This work was supported by ONR grant N00014-09-1-0751, AFOSR grant
FA9550-09-1-0538, and by a Google Research Award. We thank Le Song for generously allowing
us to use his computer cluster.
Figure 2: k-means cost (normalized by baseline) vs. communication cost over graphs, comparing our algorithm with COMBINE. The titles indicate the network topology and partition method; panels: (a) random graph, uniform; (b) random graph, similarity-based; (c) random graph, weighted; (d) grid graph, similarity-based; (e) grid graph, weighted; (f) preferential graph, degree-based. [Plot data omitted.]
Figure 3: k-means cost (normalized by baseline) vs. communication cost over the spanning trees of the graphs, comparing our algorithm with the algorithm of Zhang et al. The titles indicate the network topology and partition method; panels as in Figure 2. [Plot data omitted.]
References
[1] R. Albert and A.-L. Barabási. Statistical mechanics of complex networks. Reviews of Modern Physics, 2002.
[2] P. Awasthi and M. Balcan. Center based clustering: A foundational perspective. Survey Chapter in Handbook of Cluster Analysis (Manuscript), 2013.
[3] M.-F. Balcan, A. Blum, S. Fine, and Y. Mansour. Distributed learning, communication complexity and privacy. In Proceedings of the Conference on Learning Theory, 2012.
[4] J. Considine, F. Li, G. Kollios, and J. Byers. Approximate aggregation techniques for sensor
databases. In Proceedings of the International Conference on Data Engineering, 2004.
[5] J. C. Corbett, J. Dean, M. Epstein, A. Fikes, C. Frost, J. Furman, S. Ghemawat, A. Gubarev,
C. Heiser, P. Hochschild, et al. Spanner: Google's globally-distributed database. In Proceedings
of the USENIX Symposium on Operating Systems Design and Implementation, 2012.
[6] H. Daumé III, J. M. Phillips, A. Saha, and S. Venkatasubramanian. Efficient protocols for distributed classification and optimization. In Algorithmic Learning Theory, pages 154-168. Springer, 2012.
Multiclass Total Variation Clustering
Thomas Laurent
Loyola Marymount University
Los Angeles, CA 90045
[email protected]
Xavier Bresson
University of Lausanne
Lausanne, Switzerland
[email protected]
James H. von Brecht
University of California, Los Angeles
Los Angeles, CA 90095
[email protected]
David Uminsky
University of San Francisco
San Francisco, CA 94117
[email protected]
Abstract
Ideas from the image processing literature have recently motivated a new set of
clustering algorithms that rely on the concept of total variation. While these algorithms perform well for bi-partitioning tasks, their recursive extensions yield
unimpressive results for multiclass clustering tasks. This paper presents a general
framework for multiclass total variation clustering that does not rely on recursion.
The results greatly outperform previous total variation algorithms and compare
well with state-of-the-art NMF approaches.
1
Introduction
Many clustering models rely on the minimization of an energy over possible partitions of the data
set. These discrete optimizations usually pose NP-hard problems, however. A natural resolution
of this issue involves relaxing the discrete minimization space into a continuous one to obtain an
easier minimization procedure. Many current algorithms, such as spectral clustering methods or
non-negative matrix factorization (NMF) methods, follow this relaxation approach.
A fundamental problem arises when using this approach, however; in general the solution of the
relaxed continuous problem and that of the discrete NP-hard problem can differ substantially. In
other words, the relaxation is too loose. A tight relaxation, on the other hand, has a solution that
closely matches the solution of the original discrete NP-hard problem. Ideas from the image processing literature have recently motivated a new set of algorithms [17, 18, 11, 12, 4, 15, 3, 2, 13, 10]
that can obtain tighter relaxations than those used by NMF and spectral clustering. These new algorithms all rely on the concept of total variation. Total variation techniques promote the formation of
sharp indicator functions in the continuous relaxation. These functions equal one on a subset of the
graph, zero elsewhere and exhibit a non-smooth jump between these two regions. In contrast to the
relaxations employed by spectral clustering and NMF, total variation techniques therefore lead to
quasi-discrete solutions that closely resemble the discrete solution of the original NP-hard problem.
They provide a promising set of clustering tools for precisely this reason.
Previous total variation algorithms obtain excellent results for two class partitioning problems
[18, 11, 12, 3] . Until now, total variation techniques have relied upon a recursive bi-partitioning
procedure to handle more than two classes. Unfortunately, these recursive extensions have yet to
produce state-of-the-art results. This paper presents a general framework for multiclass total variation clustering that does not rely on a recursive procedure. Specifically, we introduce a new discrete
multiclass clustering model, its corresponding continuous relaxation and a new algorithm for optimizing the relaxation. Our approach also easily adapts to handle either unsupervised or transductive
clustering tasks. The results significantly outperform previous total variation algorithms and compare well against state-of-the-art approaches [19, 20, 1]. We name our approach Multiclass Total
Variation clustering (MTV clustering).
2
The Multiclass Balanced-Cut Model
Given a weighted graph G = (V, W) we let V = {x_1, ..., x_N} denote the vertex set and W := {w_{ij}}_{1 \leq i,j \leq N} denote the non-negative, symmetric similarity matrix. Each entry w_{ij} of W encodes the similarity, or lack thereof, between a pair of vertices. The classical balanced-cut (or Cheeger cut) [7, 8] asks for a partition of V = A \cup A^c into two disjoint sets that minimizes the set energy

    Bal(A) := \frac{\mathrm{Cut}(A, A^c)}{\min\{|A|, |A^c|\}},  where  \mathrm{Cut}(A, A^c) := \sum_{x_i \in A, x_j \in A^c} w_{ij}.    (1)
A simple rationale motivates this model: clusters should exhibit similarity between data points,
which is reflected by small values of Cut(A, A^c), and also form an approximately equally sized partition of the vertex set. Note that min{|A|, |A^c|} attains its maximum when |A| = |A^c| = N/2, so that for a given value of Cut(A, A^c) the minimum occurs when A and A^c have approximately equal size.
We generalize this model to the multiclass setting by pursuing the same rationale. For a given
number of classes R (that we assume to be known) we formulate our generalized balanced-cut
problem as
    Minimize  \sum_{r=1}^{R} \frac{\mathrm{Cut}(A_r, A_r^c)}{\min\{\lambda |A_r|, |A_r^c|\}}    (P)

    over all disjoint partitions A_r \cap A_s = \emptyset, A_1 \cup ... \cup A_R = V of the vertex set.
In this model the parameter \lambda controls the sizes of the sets A_r in the partition. Previous work [4] has used \lambda = 1 to obtain a multiclass energy by a straightforward sum of the two-class balanced-cut terms (1). While this follows the usual practice, it erroneously attempts to enforce that each set in the partition occupies half of the total number of vertices in the graph. We instead select the parameter \lambda to ensure that each of the classes approximately occupies the appropriate fraction 1/R of the total number of vertices. As the maximum of min{\lambda|A_r|, |A_r^c|} occurs when \lambda|A_r| = |A_r^c| = N - |A_r|, we see that \lambda = R - 1 is the proper choice.
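The one-line computation behind this choice, assuming each cluster attains the target size |A_r| = N/R:

```latex
\lambda \lvert A_r\rvert \;=\; \lvert A_r^c\rvert \;=\; N - \lvert A_r\rvert
\quad\Longrightarrow\quad
\lambda \;=\; \frac{N - \lvert A_r\rvert}{\lvert A_r\rvert}
\;=\; \frac{N - N/R}{N/R} \;=\; R - 1 .
```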
This general framework also easily incorporates a priori known information, such as a set of labels for transductive learning. If L_r \subset V denotes a set of data points that are a priori known to belong to class r then we simply enforce L_r \subset A_r in the definition of an allowable partition of the vertex set. In other words, any allowable disjoint partition A_r \cap A_s = \emptyset, A_1 \cup ... \cup A_R = V must also respect the given set of labels.
3
Total Variation and a Tight Continuous Relaxation
We derive our continuous optimization by relaxing the set energy (P) to the continuous energy

    E(F) = \sum_{r=1}^{R} \frac{\|f_r\|_{TV}}{\|f_r - \mathrm{med}_\lambda(f_r)\|_{1,\lambda}}.    (2)

Here F := [f_1, ..., f_R] \in M_{N \times R}([0, 1]) denotes the N \times R matrix that contains in its columns the relaxed optimization variables associated to the R clusters. A few definitions will help clarify the
meaning of this formula. The total variation \|f\|_{TV} of a vertex function f : V \to \mathbb{R} is defined by

    \|f\|_{TV} = \sum_{i=1}^{N} \sum_{j=1}^{N} w_{ij} |f(x_i) - f(x_j)|.    (3)
Alternatively, if we view a vertex function f as a vector (f(x_1), ..., f(x_N))^t \in \mathbb{R}^N then we can write

    \|f\|_{TV} := \|Kf\|_1.    (4)
Here K \in M_{M \times N}(\mathbb{R}) denotes the gradient matrix of a graph with M edges and N vertices. Each row of K corresponds to an edge and each column corresponds to a vertex. For any edge (i, j) in the graph the corresponding row in the matrix K has an entry w_{ij} in the column corresponding to the i-th vertex, an entry -w_{ij} in the column corresponding to the j-th vertex and zeros otherwise.
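As a concrete check of definitions (3) and (4), the sketch below builds a gradient matrix from a random symmetric W and confirms that \|Kf\|_1 reproduces the double sum. It uses one row per ordered pair (i, j) with w_{ij} \neq 0, so each undirected edge is counted in both directions, which makes (3) and (4) agree exactly; the helper name `gradient_matrix` is ours, not from the paper.

```python
import numpy as np

def gradient_matrix(W):
    # One row per ordered pair (i, j) with w_ij != 0: +w_ij in column i,
    # -w_ij in column j.  Counting both directions makes ||Kf||_1 match
    # the double sum in (3) exactly.
    N = W.shape[0]
    rows = []
    for i in range(N):
        for j in range(N):
            if W[i, j] != 0:
                row = np.zeros(N)
                row[i], row[j] = W[i, j], -W[i, j]
                rows.append(row)
    return np.array(rows)

rng = np.random.default_rng(1)
W = rng.random((6, 6)); W = (W + W.T) / 2; np.fill_diagonal(W, 0)
f = rng.random(6)
K = gradient_matrix(W)
tv_direct = np.sum(W * np.abs(f[:, None] - f[None, :]))  # eq. (3)
tv_matrix = np.abs(K @ f).sum()                          # eq. (4)
print(np.isclose(tv_direct, tv_matrix))
```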
To make sense of the remainder of (2) we must introduce the asymmetric \ell^1-norm. This variant of the classical \ell^1-norm gives different weights to positive and negative values:

    \|f\|_{1,\lambda} = \sum_{i=1}^{N} |f(x_i)|_\lambda   where   |t|_\lambda = \lambda t if t \geq 0 and |t|_\lambda = -t if t < 0.    (5)

Finally we define the \lambda-median (or quantile), denoted \mathrm{med}_\lambda(f), as:

    \mathrm{med}_\lambda(f) = the (k+1)st largest value in the range of f, where k = \lfloor N/(\lambda + 1) \rfloor.    (6)
These definitions, as well as the relaxation (2) itself, were motivated by the following theorem. Its proof, in the supplementary material, relies only on the three preceding definitions and some simple algebra.
Theorem 1. If f = 1_A is the indicator function of a subset A \subset V then

    \frac{\|f\|_{TV}}{\|f - \mathrm{med}_\lambda(f)\|_{1,\lambda}} = \frac{2\,\mathrm{Cut}(A, A^c)}{\min\{\lambda |A|, |A^c|\}}.
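Theorem 1 is easy to verify numerically. The sketch below implements definitions (3), (5) and (6) directly for a random weighted graph and an indicator function, and checks the claimed identity; the function names are ours.

```python
import numpy as np

def tv(W, f):
    # total variation, eq. (3): sum_ij w_ij |f(x_i) - f(x_j)|
    return np.sum(W * np.abs(f[:, None] - f[None, :]))

def asym_l1(f, lam):
    # asymmetric l1 norm, eq. (5)
    return np.sum(np.where(f >= 0, lam * f, -f))

def lam_median(f, lam):
    # lambda-median, eq. (6): the (k+1)st largest value, k = floor(N/(lam+1))
    k = int(len(f) // (lam + 1))
    return np.sort(f)[::-1][k]

rng = np.random.default_rng(0)
N, lam = 12, 2.0                       # lam = R - 1 with R = 3 classes
W = rng.random((N, N)); W = (W + W.T) / 2; np.fill_diagonal(W, 0)
f = np.zeros(N); f[:5] = 1.0           # indicator of a 5-vertex subset A
cut = W[f == 1][:, f == 0].sum()       # Cut(A, A^c)
lhs = tv(W, f) / asym_l1(f - lam_median(f, lam), lam)
rhs = 2 * cut / min(lam * 5, N - 5)
print(np.isclose(lhs, rhs))
```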
The preceding theorem allows us to restate the original set optimization problem (P) in the equivalent discrete form

    Minimize  \sum_{r=1}^{R} \frac{\|f_r\|_{TV}}{\|f_r - \mathrm{med}_\lambda(f_r)\|_{1,\lambda}}    (P')

    over non-zero functions f_1, ..., f_R : V \to \{0, 1\} such that f_1 + ... + f_R = 1_V.
Indeed, since the non-zero functions f_r can take only two values, zero or one, they must define indicator functions of some nonempty set. The simplex constraint f_1 + ... + f_R = 1_V then guarantees that the sets A_r := \{x_i \in V : f_r(x_i) = 1\} form a partition of the vertex set. We obtain the relaxed version (P-rlx) of (P') in the usual manner by allowing f_r \in [0, 1] to have a continuous range. This yields

    Minimize  \sum_{r=1}^{R} \frac{\|f_r\|_{TV}}{\|f_r - \mathrm{med}_\lambda(f_r)\|_{1,\lambda}}    (P-rlx)

    over functions f_1, ..., f_R : V \to [0, 1] such that f_1 + ... + f_R = 1_V.
The following two points form the foundation on which total variation clustering relies:
1. As the next subsection details, the total variation terms give rise to quasi-indicator functions. That is, the relaxed solutions [f_1, ..., f_R] of (P-rlx) mostly take values near zero or one and exhibit a sharp, non-smooth transition between these two regions. Since these quasi-indicator functions essentially take values in the discrete set {0, 1} rather than the continuous interval [0, 1], solving (P-rlx) is almost equivalent to solving either (P) or (P'). In other words, (P-rlx) is a tight relaxation of (P).

2. Both functions f \mapsto \|f\|_{TV} and f \mapsto \|f - \mathrm{med}_\lambda(f)\|_{1,\lambda} are convex. The simplex constraint in (P-rlx) is also convex. Therefore solving (P-rlx) amounts to minimizing a sum of ratios of convex functions with convex constraints. As the next section details, this fact allows us to use machinery from convex analysis to develop an efficient, novel algorithm for such problems.
3.1
The Role of Total Variation in the Formation of Quasi-Indicator Functions
To elucidate the precise role that the total variation plays in the formation of quasi-indicator functions, it proves useful to consider a version of (P-rlx) that uses a spectral relaxation in place of the total variation:

    Minimize  \sum_{r=1}^{R} \frac{\|f_r\|_{Lap}^2}{\|f_r - \mathrm{med}_\lambda(f_r)\|_{1,\lambda}}    (P-rlx2)

    over functions f_1, ..., f_R : V \to [0, 1] such that f_1 + ... + f_R = 1_V.

Here \|f\|_{Lap}^2 = \sum_{i=1}^{N} \sum_{j=1}^{N} w_{ij} |f(x_i) - f(x_j)|^2 denotes the spectral relaxation of Cut(A, A^c); it equals \langle f, Lf \rangle if L denotes the unnormalized graph Laplacian matrix. Thus problem (P-rlx2) relates to spectral clustering (and therefore NMF [9]) with a positivity constraint. Note that the only
difference between (P-rlx2) and (P-rlx) is that the exponent 2 appears in \|\cdot\|_{Lap} while the exponent 1 appears in the total variation. This simple difference of exponent has an important consequence for the tightness of the relaxations. Figure 1 presents a simple example that illuminates this difference. If we bi-partition the depicted graph, i.e. a line with 20 vertices and edge weights w_{i,i+1} = 1, then the optimal cut lies between vertex 10 and vertex 11 since this gives a perfectly balanced cut. Figure 1(a) shows the vertex function f_1 generated by (P-rlx) while figure 1(b) shows the one generated by (P-rlx2). Observe that the solution of the total variation model coincides with the indicator function of the desired cut whereas the spectral model prefers its smoothed version. Note that both functions in figures 1(a) and 1(b) have exactly the same total variation \|f\|_{TV} = |f(x_1) - f(x_2)| + ... + |f(x_{19}) - f(x_{20})| = f(x_1) - f(x_{20}) = 1 since both functions are monotonic. The total variation model will therefore prefer the sharp indicator function since it differs more from its \lambda-median than the smooth indicator function does. Indeed, the denominator \|f_r - \mathrm{med}_\lambda(f_r)\|_{1,\lambda} is larger for the sharp indicator function than for the smooth one. A different scenario occurs when we replace the exponent one in \|\cdot\|_{TV} by an exponent two, however. As \|f\|_{Lap}^2 = |f(x_1) - f(x_2)|^2 + ... + |f(x_{19}) - f(x_{20})|^2 and t^2 < t when t < 1, it follows that \|f\|_{Lap}^2 is much smaller for the smooth function than for the sharp one. Thus the spectral model will prefer the smooth indicator function despite the fact that it differs less from its \lambda-median. We therefore recognize the total variation as the driving force behind the formation of sharp indicator functions.
Figure 1: Top: The graph used for both relaxations. Bottom left (a): the solution given by the total variation relaxation. Bottom right (b): the solution given by the spectral relaxation. Position along the x-axis = vertex number, height along the y-axis = value of the vertex function.
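The line-graph computation behind Figure 1 can be reproduced in a few lines. The smoothed profile below is an illustrative linear ramp with the same endpoints, not the exact minimizer of (P-rlx2):

```python
import numpy as np

# Line graph with 20 vertices, w_{i,i+1} = 1: compare the sharp indicator
# of the balanced cut with a smoothed (linear ramp) version of it.
n = 20
sharp = (np.arange(n) < 10).astype(float)           # 1 on vertices 1..10
smooth = np.clip((13.5 - np.arange(n)) / 7, 0, 1)   # monotone ramp, same endpoints

def tv_line(f):   # sum_i |f(x_i) - f(x_{i+1})| over consecutive edges
    return np.abs(np.diff(f)).sum()

def lap_line(f):  # sum_i |f(x_i) - f(x_{i+1})|^2
    return (np.diff(f) ** 2).sum()

print(tv_line(sharp), tv_line(smooth))    # equal: both are monotone with range 1
print(lap_line(sharp), lap_line(smooth))  # the smooth profile is much cheaper
```

Both profiles have total variation 1, so the TV model discriminates through the median term, while the squared (Laplacian) cost strictly favors the smooth profile.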
This heuristic explanation on a simple, two-class example generalizes to the multiclass case and
to real data sets (see figure 2). In simple terms, quasi-indicator functions arise due to the fact that
the total variation of a sharp indicator function equals the total variation of a smoothed version of
the same indicator function. The denominator kfr ? med? (fr )k1,? then measures the deviation of
these functions from their ?-median. A sharp indicator function deviates more from its median than
does its smoothed version since most of its values concentrate around zero and one. The energy
is therefore much smaller for a sharp indicator function than for a smooth indicator function, and
consequently the total variation clustering energy always prefers sharp indicator functions to smooth
ones. For bi-partitioning problems this fact is well-known. Several previous works have proven that
the relaxation is exact in the two-class case; that is, the total variation solution coincides with the
solution of the original NP-hard problem [8, 18, 3, 5].
Figure 2 illustrates the difference between the total variation and NMF relaxations on the
data set OPTDIGITS, which contains 5620 images of handwritten numerical digits. Figure 2(a)
shows the quasi-indicator function f4 obtained, before thresholding, by our MTV algorithm while
2(b) shows the function f4 obtained from the NMF algorithm of [1]. We extract the portion of each
function corresponding to the digits four and nine, then sort and plot the result. The MTV relaxation
leads a sharp transition between the fours and the nines while the NMF relaxation leads to a smooth
transition.
Figure 2: Left: Solution f4 from our MTV algorithm (before thresholding) plotted over the fours
and nines. Right: Solution f4 from LSD [1] plotted over the fours and nines.
3.2
Transductive Framework
From a modeling point-of-view, the presence of transductive labels poses no additional difficulty. In
addition to the simplex constraint

    F \in \Sigma := \left\{ F \in M_{N \times R}([0, 1]) : f_r(x_i) \geq 0, \ \sum_{r=1}^{R} f_r(x_i) = 1 \right\}    (7)
required for unsupervised clustering we also impose the set of labels as a hard constraint. If L_1, ..., L_R denote the R vertex subsets representing the labeled points, so that x_i \in L_r means x_i belongs to class r, then we may enforce these labels by restricting F to lie in the subset

    F \in \Lambda := \{ F \in M_{N \times R}([0, 1]) : \forall r, \ (f_1(x_i), ..., f_R(x_i)) = e_r \ \forall x_i \in L_r \}.    (8)

Here e_r denotes the row vector containing a one in the r-th location and zeros elsewhere. Our model
for transductive classification then aims to solve the problem

    Minimize  \sum_{r=1}^{R} \frac{\|f_r\|_{TV}}{\|f_r - \mathrm{med}_\lambda(f_r)\|_{1,\lambda}}  over matrices F \in \Sigma \cap \Lambda.    (P-trans)
Note that \Sigma \cap \Lambda also defines a convex set, so this minimization remains a sum of ratios of convex functions subject to a convex constraint. Transductive classification therefore poses no additional algorithmic difficulty, either. In particular, we may use the proximal splitting algorithm detailed in the next section for both unsupervised and transductive classification tasks.
4
Proximal Splitting Algorithm
This section details our proximal splitting algorithm for finding local minimizers of a sum of ratios
of convex functions subject to a convex constraint. We start by showing in the first subsection that
the functions

    T(f) := \|f\|_{TV}   and   B(f) := \|f - \mathrm{med}_\lambda(f)\,\mathbf{1}\|_{1,\lambda}    (9)

involved in (P-rlx) or (P-trans) are indeed convex. We also give an explicit formula for a subdifferential of B since our proximal splitting algorithm requires this in explicit form. We then summarize
a few properties of proximal operators before presenting the algorithm.
4.1
Convexity, Subgradients and Proximal Operators
Recall that we may view each function f : V \to \mathbb{R} as a vector in \mathbb{R}^N with f(x_i) as the i-th component of the vector. We may then view T and B as functions from \mathbb{R}^N to \mathbb{R}. The next theorem states that both B and T define convex functions on \mathbb{R}^N and furnishes an element v \in \partial B(f) by means of an easily computable formula. The formula for the subdifferential generalizes a related result for the symmetric case [11] to the asymmetric setting. We provide its proof in the supplementary material.
Theorem 2. The functions B and T are convex. Moreover, given f \in \mathbb{R}^N the vector v \in \mathbb{R}^N defined below belongs to \partial B(f):

    v(x_i) = \lambda                          if f(x_i) > \mathrm{med}_\lambda(f)
    v(x_i) = (n^- - \lambda n^+)/n^0          if f(x_i) = \mathrm{med}_\lambda(f)
    v(x_i) = -1                               if f(x_i) < \mathrm{med}_\lambda(f)

where n^0 = |\{x_i \in V : f(x_i) = \mathrm{med}_\lambda(f)\}|, n^- = |\{x_i \in V : f(x_i) < \mathrm{med}_\lambda(f)\}| and n^+ = |\{x_i \in V : f(x_i) > \mathrm{med}_\lambda(f)\}|.
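A direct implementation of the Theorem 2 formula, checked against the subgradient inequality B(g) \geq B(f) + \langle v, g - f \rangle; the helper names are ours, and the middle case follows our reading of the theorem:

```python
import numpy as np

def lam_median(f, lam):
    # lambda-median, eq. (6): (k+1)st largest value, k = floor(N/(lam+1))
    k = int(len(f) // (lam + 1))
    return np.sort(f)[::-1][k]

def B(f, lam):
    # B(f) = ||f - med_lam(f) 1||_{1,lam} with the asymmetric norm (5)
    d = f - lam_median(f, lam)
    return np.sum(np.where(d >= 0, lam * d, -d))

def subgrad_B(f, lam):
    # the subgradient v of B at f given by Theorem 2
    m = lam_median(f, lam)
    n0, nm, npl = np.sum(f == m), np.sum(f < m), np.sum(f > m)
    v = np.where(f > m, lam, -1.0)
    v[f == m] = (nm - lam * npl) / n0
    return v

rng = np.random.default_rng(0)
f, lam = rng.random(10), 2.0
v = subgrad_B(f, lam)
# subgradient inequality B(g) >= B(f) + <v, g - f> must hold for every g
ok = all(B(g, lam) >= B(f, lam) + v @ (g - f) - 1e-9
         for g in rng.random((100, 10)))
print(ok)
```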
In the above theorem \partial B(f) denotes the subdifferential of B at f and v \in \partial B(f) denotes a subgradient. Given a convex function A : \mathbb{R}^N \to \mathbb{R}, the proximal operator of A is defined by

    \mathrm{prox}_A(g) := \mathrm{argmin}_{f \in \mathbb{R}^N} \ A(f) + \tfrac{1}{2}\|f - g\|_2^2.    (10)
If we let \delta_C denote the barrier function of the convex set C, that is

    \delta_C(f) = 0 if f \in C  and  \delta_C(f) = +\infty if f \notin C,    (11)

then we easily see that \mathrm{prox}_{\delta_C} is simply the least-squares projection onto C, in other words, \mathrm{prox}_{\delta_C}(f) = \mathrm{proj}_C(f) := \mathrm{argmin}_{g \in C} \tfrac{1}{2}\|f - g\|_2^2. In this manner the proximal operator defines a mapping from \mathbb{R}^N to \mathbb{R}^N that generalizes the least-squares projection onto a convex set.
4.2
The Algorithm
We can rewrite the problem (P-rlx) or (P-trans) as

    Minimize  \delta_C(F) + \sum_{r=1}^{R} E(f_r)  over all matrices F = [f_1, ..., f_R] \in M_{N \times R}    (12)

where E(f_r) = T(f_r)/B(f_r) denotes the energy of the quasi-indicator function of the r-th cluster. The set C = \Sigma or C = \Sigma \cap \Lambda is the convex subset of M_{N \times R} that encodes the simplex constraint (7) or the simplex constraint with labels. The corresponding function \delta_C(F), defined in (11), is the barrier function of the desired set. Beginning from an initial iterate F^0 \in C we propose the following proximal splitting algorithm:

    F^{k+1} := \mathrm{prox}_{T^k + \delta_C}(F^k + \partial B^k(F^k)).    (13)
Here T^k(F) and B^k(F) denote the convex functions

    T^k(F) := \sum_{r=1}^{R} c_r^k\,T(f_r),   B^k(F) := \sum_{r=1}^{R} d_r^k\,B(f_r),

where the constants (c_r^k, d_r^k) are computed using the previous iterate:

    c_r^k = \tau^k / B(f_r^k)   and   d_r^k = \tau^k E(f_r^k)/B(f_r^k),

and \tau^k denotes the timestep for the current iteration. This choice of the constants (c_r^k, d_r^k) yields
B^k(F^k) = T^k(F^k), and this fundamental property allows us to derive (see supplementary material) the energy descent estimate:

Theorem 3 (Estimate of the energy descent). Each of the F^k belongs to C, and if B_r^k \neq 0 then

    \sum_{r=1}^{R} \frac{B_r^{k+1}}{B_r^k} \left( E_r^k - E_r^{k+1} \right) \geq \frac{\|F^k - F^{k+1}\|^2}{\tau^k}    (14)

where B_r^k, E_r^k stand for B(f_r^k), E(f_r^k).
Inequality (14) states that the energies of the quasi-indicator functions (as a weighted sum) decrease
at every step of the algorithm. It also gives a lower bound for how much these energies decrease. As
the algorithm progresses and the iterates stabilize, the ratio B_r^{k+1}/B_r^k converges to 1, in which case the sum, rather than a weighted sum, of the individual cluster energies decreases.
Our proximal splitting algorithm (13) requires two steps. The first step requires computing G^k = F^k + \partial B^k(F^k), and this is straightforward since theorem 2 provides the subdifferential of B, and therefore of B^k, through an explicit formula. The second step requires computing \mathrm{prox}_{T^k + \delta_C}(G^k), which seems daunting at first glance. Fortunately, minimization problems of this form play an important role in the image processing literature. Recent years have therefore produced several fast and accurate algorithms for computing the proximal operator of the total variation. As T^k + \delta_C consists of a weighted sum of total variation terms subject to a convex constraint, we can readily adapt
these algorithms to compute the second step of our algorithm efficiently. In this work we use the
primal-dual algorithm of [6] with acceleration. This relies on a proper uniformly convex formulation
of the proximal minimization, which we detail completely in the supplementary material.
The primal-dual algorithm we use to compute \mathrm{prox}_{T^k + \delta_C}(G^k) produces a sequence of approximate solutions by means of an iterative procedure. A stopping criterion is therefore needed to indicate when the current iterate approximates the actual solution \mathrm{prox}_{T^k + \delta_C}(G^k) sufficiently. Ideally, we would like to terminate F^{k+1} \approx \mathrm{prox}_{T^k + \delta_C}(G^k) in such a manner that the energy descent property (14) still holds and F^{k+1} always satisfies the required constraints. In theory we cannot guarantee that the energy estimate holds for an inexact solution. We may note, however, that a slightly weaker version of the energy estimate (14),

    \sum_{r=1}^{R} \frac{B_r^{k+1}}{B_r^k} \left( E_r^k - E_r^{k+1} \right) \geq (1 - \epsilon)\, \frac{\|F^k - F^{k+1}\|^2}{\tau^k},    (15)

holds after a finite number of iterations of the inner minimization. Moreover, this weaker version still guarantees that the energies of the quasi-indicator functions decrease as a weighted sum in exactly the same manner as before. In this way we can terminate the inner loop adaptively: we solve F^{k+1} \approx \mathrm{prox}_{T^k + \delta_C}(G^k) less precisely when F^{k+1} lies far from a minimum and more precisely as the sequence \{F^k\} progresses. This leads to a substantial increase in efficiency of the full algorithm.
Our implementation of the proximal splitting algorithm also guarantees that F^{k+1} always satisfies the required constraints. We accomplish this task by implementing the primal-dual algorithm in such a way that each inner iteration always satisfies the constraints. This requires computing the projection \mathrm{proj}_C(F) exactly at each inner iteration. The overall algorithm remains efficient provided we can compute this projection quickly. When C = \Sigma the algorithm [14] performs the required projection in at most R steps. When C = \Sigma \cap \Lambda the computational effort actually decreases, since in this case the projection consists of a simplex projection on the unlabeled points and straightforward assignment on the labeled points. Overall, each iteration of the algorithm scales like O(NR^2) + O(MR) + O(RN \log(N)) for the simplex projection, application of the gradient matrix and the computation of the balance terms, respectively.
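For concreteness, here is a standard sort-based Euclidean projection onto the probability simplex, applied row-wise, with labeled rows pinned to e_r. This is a generic simplex-projection sketch, not necessarily the exact method of [14], and the function names are ours:

```python
import numpy as np

def project_simplex(y):
    # Euclidean projection of y onto {x : x_i >= 0, sum_i x_i = 1},
    # via the classical sort-and-threshold algorithm.
    u = np.sort(y)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u + (1 - css) / np.arange(1, len(y) + 1) > 0)[0][-1]
    tau = (1 - css[rho]) / (rho + 1)
    return np.maximum(y + tau, 0)

def project_C(F, labels=None):
    # Row-wise projection of F (N x R) onto the constraint set:
    # simplex rows for unlabeled points, hard assignment e_r for labeled ones.
    F = np.apply_along_axis(project_simplex, 1, F)
    if labels is not None:
        for i, r in labels.items():       # labels: {vertex index: class}
            F[i] = 0.0
            F[i, r] = 1.0
    return F

F = np.array([[0.2, 0.9, 0.4], [-0.3, 0.1, 0.5]])
P = project_C(F, labels={1: 0})
print(P.sum(axis=1))   # each row sums to 1
```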
We may now summarize the full algorithm, including the proximal operator computation. In practice we find the choices \tau^k = \max\{B_1^k, ..., B_R^k\} and any small \epsilon work well, so we present the algorithm with these choices. Recall that the matrix K in (4) denotes the gradient matrix of the graph.
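The outer step G^k = F^k + \partial B^k(F^k) can be sketched as follows, under our reading of the diagonal scaling D_E = diag(E(f_r)/B(f_r)) and the timestep \delta = \max_r B(f_r); all helper names are ours:

```python
import numpy as np

def lam_median(f, lam):
    k = int(len(f) // (lam + 1))
    return np.sort(f)[::-1][k]

def B(f, lam):
    d = f - lam_median(f, lam)
    return np.sum(np.where(d >= 0, lam * d, -d))

def tv(W, f):
    return np.sum(W * np.abs(f[:, None] - f[None, :]))

def subgrad_B(f, lam):
    # subgradient of B from Theorem 2
    m = lam_median(f, lam)
    n0, nm, npl = np.sum(f == m), np.sum(f < m), np.sum(f > m)
    v = np.where(f > m, lam, -1.0)
    v[f == m] = (nm - lam * npl) / n0
    return v

def outer_step(W, F, lam):
    # G = F + delta * [subgrad_B(f_1), ..., subgrad_B(f_R)] D_E with
    # delta = max_r B(f_r) and D_E = diag(E(f_r)/B(f_r)), E = T/B.
    R = F.shape[1]
    Bs = np.array([B(F[:, r], lam) for r in range(R)])
    Es = np.array([tv(W, F[:, r]) for r in range(R)]) / Bs
    V = np.column_stack([subgrad_B(F[:, r], lam) for r in range(R)])
    return F + Bs.max() * V * (Es / Bs)

rng = np.random.default_rng(2)
N, R = 8, 3
lam = R - 1.0
W = rng.random((N, N)); W = (W + W.T) / 2; np.fill_diagonal(W, 0)
F = rng.random((N, R)); F = F / F.sum(axis=1, keepdims=True)  # simplex rows
G = outer_step(W, F, lam)
print(G.shape)
```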
Algorithm 1 Proximal Splitting Algorithm
Input: F \in C, P = 0, L = \|K\|_2, \tau = \sigma = L^{-1}, \epsilon = 10^{-3}
while outer loop not converged do
    // Perform outer step G^k = F^k + \partial B^k(F^k)
    \delta = \max_r B(f_r);  \delta_0 = \min_r B(f_r);  \gamma = \delta_0^2 (\delta^2 \tau^2 L^2)^{-1};  \bar{F} = F
    D_E = \mathrm{diag}(E(f_1)/B(f_1), ..., E(f_R)/B(f_R))
    D_B = \mathrm{diag}(\delta/B(f_1), ..., \delta/B(f_R))
    V = \delta\,[\partial B(f_1), ..., \partial B(f_R)]\,D_E   (using theorem 2)
    G = F + V
    // Perform F^{k+1} \approx \mathrm{prox}_{T^k + \delta_C}(G^k) until the energy estimate holds
    while (15) fails do
        \tilde{P} = P + \sigma K \bar{F} D_B;   P = \tilde{P} / \max\{|\tilde{P}|, 1\}   (both operations entrywise)
        F_{old} = F;   \tilde{F} = F - \tau K^t P D_B;   F = (\tilde{F} + \tau G)/(1 + \tau);   F = \mathrm{proj}_C(F)
        \theta = 1/\sqrt{1 + 2\gamma\tau};   \tau = \theta\tau;   \sigma = \sigma/\theta;   \bar{F} = (1 + \theta)F - \theta F_{old}
    end while
end while

5
Numerical Experiments
We now demonstrate the MTV algorithm for unsupervised and transductive clustering tasks. We
selected six standard, large-scale data sets as a basis of comparison. We obtained the first data set
(4MOONS) and its similarity matrix from [4] and the remaining five data sets and matrices (WEBKB4, OPTDIGITS, PENDIGITS, 20NEWS, MNIST) from [19]. The 4MOONS data set contains
4K points while the remaining five contain 4.2K, 5.6K, 11K, 20K and 70K points, respectively.
Our first set of experiments compares our MTV algorithm against other unsupervised approaches.
We compare against two previous total variation algorithms [11, 3], which rely on recursive bipartitioning, and two top NMF algorithms [1, 19]. We use the normalized Cheeger cut versions
of [11] and [3] with default parameters. We used the code available from [19] to test each NMF
algorithm. The non-recursive NMF algorithms (LSD [1], NMFR [19]) received two types of initial
data: (a) the deterministic data used in [19]; (b) a random procedure leveraging normalized-cut [16].
Procedure (b) first selects one data point uniformly at random from each computed NCut cluster,
then sets f_r equal to one at the data point drawn from the r-th cluster and zero otherwise. We then propagate this initial stage by replacing each f_r with (I + L)^{-1} f_r where L denotes the unnormalized graph Laplacian. Finally, to aid the NMF algorithms, we add a small constant 0.2 to the result (each
performed better than without adding this constant). For MTV we use 30 random trials of (b) then
report the cluster purity (c.f. [19] for a definition of purity) of the solution with the lowest discrete
energy (P). We then use each NMF with exactly the same initial conditions and report simply the
highest purity achieved over all 31 runs. This biases the results in favor of the NMF algorithms.
Due to the non-convex nature of these algorithms, the random initialization gave the best results and
significantly improved upon previously reported results of LSD in particular. For comparison with
[19], initialization (a) is followed by 10,000 iterations of each NMF algorithm. Each trial of (b) is
followed by 2000 iterations of each non-recursive algorithm. The following table reports the results.
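The random initialization procedure (b) described above is easy to reimplement. The sketch below is not the authors' code; it is an illustrative version that assumes a dense symmetric weight matrix `Wg` and precomputed NCut cluster labels, and omits the added constant 0.2.

```python
import numpy as np

def init_procedure_b(Wg, ncut_labels, k, rng):
    """Random initialization (b): pick one data point per NCut cluster, set f_r
    to its indicator, then smooth each f_r by (I + L)^{-1} where L is the
    unnormalized graph Laplacian of the weight matrix Wg."""
    n = Wg.shape[0]
    Lap = np.diag(Wg.sum(axis=1)) - Wg       # unnormalized Laplacian L = D - W
    smooth = np.linalg.inv(np.eye(n) + Lap)  # (I + L)^{-1}, dense for clarity
    F = np.zeros((n, k))
    for r in range(k):
        members = np.flatnonzero(ncut_labels == r)
        F[rng.choice(members), r] = 1.0      # indicator of one random point
    return smooth @ F                        # propagate the indicators
```

For the larger data sets one would instead solve the k sparse linear systems (I + L)f = e_i rather than forming a dense inverse.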
Alg/Data     4MOONS   WEBKB4   OPTDIGITS   PENDIGITS   20NEWS   MNIST
NCC-TV [3]   88.75    51.76    95.91       73.25       23.20    88.80
1SPEC [11]   73.92    39.68    88.65       82.42       11.49    88.17
LSD [1]      99.40    54.50    97.94       88.44       41.25    95.67
NMFR [19]    77.80    64.32    97.92       91.21       63.93    96.99
MTV          99.53    59.15    98.29       89.06       39.40    97.60

Our next set of experiments demonstrates our algorithm in a transductive setting. For each data set
we randomly sample either one label per class or a percentage of labels per class from the ground
truth. We then run ten trials of initial condition (b) (propagating all labels instead of one) and report
the purity of the lowest energy solution as before along with the average computational time (for
simple MATLAB code running on a standard desktop) of the ten runs. We terminate the algorithm once the relative change in energy falls below 10^{-4} between outer steps of Algorithm 1. The table below reports the results. Note that for well-constructed graphs (such as MNIST), our algorithm
performs remarkably well with only one label per class.
Labels   4MOONS        WEBKB4        OPTDIGITS    PENDIGITS    20NEWS        MNIST
1        99.55/ 3.0s   56.58/ 1.8s   98.29/ 7s    89.17/ 14s   50.07/ 52s    97.53/ 98s
1%       99.55/ 3.1s   58.75/ 2.0s   98.29/ 4s    93.73/ 9s    61.70/ 54s    97.59/ 54s
2.5%     99.55/ 1.9s   57.01/ 1.7s   98.35/ 3s    95.83/ 7s    67.61/ 42s    97.72/ 39s
5%       99.53/ 1.2s   58.34/ 1.3s   98.38/ 2s    97.98/ 5s    70.51/ 32s    97.79/ 31s
10%      99.55/ 0.8s   62.01/ 1.2s   98.45/ 2s    98.22/ 4s    73.97/ 25s    98.05/ 25s
Our non-recursive MTV algorithm vastly outperforms the two previous recursive total variation approaches and also compares well with state-of-the-art NMF approaches. Each of MTV, LSD and NMFR performs well on manifold data sets such as MNIST, but NMFR tends to perform best on
noisy, non-manifold data sets. This results from the fact that NMFR uses a costly graph smoothing
technique while our algorithm and LSD do not. We plan to incorporate such improvements into the
total variation framework in future work. Lastly, we found procedure (b) can help overcome the
lack of convexity inherent in many clustering approaches. We plan to pursue a more principled and
efficient initialization along these lines in the future as well. Overall, our total variation framework
presents a promising alternative to NMF methods due to its strong mathematical foundation and
tight relaxation.
Acknowledgements: Supported by NSF grant DMS-1109805, AFOSR MURI grant FA9550-10-10569, ONR grant N000141210040, and Swiss National Science Foundation grant SNSF-141283.
References
[1] Raman Arora, M. Gupta, Amol Kapila, and Maryam Fazel. Clustering by left-stochastic matrix factorization. In International Conference on Machine Learning (ICML), pages 761–768, 2011.
[2] A. Bertozzi and A. Flenner. Diffuse Interface Models on Graphs for Classification of High Dimensional Data. Multiscale Modeling and Simulation, 10(3):1090–1118, 2012.
[3] X. Bresson, T. Laurent, D. Uminsky, and J. von Brecht. Convergence and energy landscape for Cheeger cut clustering. In Advances in Neural Information Processing Systems (NIPS), pages 1394–1402, 2012.
[4] X. Bresson, X.-C. Tai, T.F. Chan, and A. Szlam. Multi-Class Transductive Learning based on ℓ1 Relaxations of Cheeger Cut and Mumford-Shah-Potts Model. UCLA CAM Report, 2012.
[5] T. Bühler and M. Hein. Spectral Clustering Based on the Graph p-Laplacian. In International Conference on Machine Learning (ICML), pages 81–88, 2009.
[6] A. Chambolle and T. Pock. A First-Order Primal-Dual Algorithm for Convex Problems with Applications to Imaging. Journal of Mathematical Imaging and Vision, 40(1):120–145, 2011.
[7] J. Cheeger. A Lower Bound for the Smallest Eigenvalue of the Laplacian. Problems in Analysis, pages 195–199, 1970.
[8] F. R. K. Chung. Spectral Graph Theory, volume 92 of CBMS Regional Conference Series in Mathematics. Published for the Conference Board of the Mathematical Sciences, Washington, DC, 1997.
[9] Chris Ding, Xiaofeng He, and Horst D. Simon. On the equivalence of nonnegative matrix factorization and spectral clustering. In Proc. SIAM Data Mining Conf, number 4, pages 606–610, 2005.
[10] C. Garcia-Cardona, E. Merkurjev, A. L. Bertozzi, A. Flenner, and A. G. Percus. Fast multiclass segmentation using diffuse interface methods on graphs. Submitted, 2013.
[11] M. Hein and T. Bühler. An Inverse Power Method for Nonlinear Eigenproblems with Applications in 1-Spectral Clustering and Sparse PCA. In Advances in Neural Information Processing Systems (NIPS), pages 847–855, 2010.
[12] M. Hein and S. Setzer. Beyond Spectral Clustering - Tight Relaxations of Balanced Graph Cuts. In Advances in Neural Information Processing Systems (NIPS), 2011.
[13] E. Merkurjev, T. Kostic, and A. Bertozzi. An MBO scheme on graphs for segmentation and image processing. UCLA CAM Report 12-46, 2012.
[14] C. Michelot. A Finite Algorithm for Finding the Projection of a Point onto the Canonical Simplex of Rn. Journal of Optimization Theory and Applications, 50(1):195–200, 1986.
[15] S. Rangapuram and M. Hein. Constrained 1-Spectral Clustering. In International Conference on Artificial Intelligence and Statistics (AISTATS), pages 1143–1151, 2012.
[16] J. Shi and J. Malik. Normalized Cuts and Image Segmentation. IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI), 22(8):888–905, 2000.
[17] A. Szlam and X. Bresson. A total variation-based graph clustering algorithm for Cheeger ratio cuts. UCLA CAM Report 09-68, 2009.
[18] A. Szlam and X. Bresson. Total variation and Cheeger cuts. In International Conference on Machine Learning (ICML), pages 1039–1046, 2010.
[19] Zhirong Yang, Tele Hao, Onur Dikmen, Xi Chen, and Erkki Oja. Clustering by nonnegative matrix factorization using graph random walk. In Advances in Neural Information Processing Systems (NIPS), pages 1088–1096, 2012.
[20] Zhirong Yang and Erkki Oja. Clustering by low-rank doubly stochastic matrix decomposition. In International Conference on Machine Learning (ICML), 2012.
Learning Multiple Models via Regularized Weighting
Daniel Vainsencher
Department of Electrical Engineering
Technion, Haifa, Israel
[email protected]
Shie Mannor
Department of Electrical Engineering
Technion, Haifa, Israel
[email protected]
Huan Xu
Mechanical Engineering Department
National University of Singapore, Singapore
[email protected]
Abstract
We consider the general problem of Multiple Model Learning (MML) from data,
from the statistical and algorithmic perspectives; this problem includes clustering,
multiple regression and subspace clustering as special cases. A common approach
to solving new MML problems is to generalize Lloyd's algorithm for clustering (or Expectation-Maximization for soft clustering). However, this approach is unfortunately sensitive to outliers and large noise: a single exceptional point may
take over one of the models.
We propose a different general formulation that seeks for each model a distribution over data points; the weights are regularized to be sufficiently spread out.
This enhances robustness by making assumptions on class balance. We further
provide generalization bounds and explain how the new iterations may be computed efficiently. We demonstrate the robustness benefits of our approach with
some experimental results and prove for the important case of clustering that our
approach has a non-trivial breakdown point, i.e., is guaranteed to be robust to a
fixed percentage of adversarial unbounded outliers.
1 Introduction
The standard approach to learning models from data assumes that the data were generated by a
certain model, and the goal of learning is to recover this generative model. For example, in linear
regression, an unknown linear functional, which we want to recover, is believed to have generated
covariate-response pairs. Similarly, in principal component analysis, a random variable in some
unknown low-dimensional subspace generated the observed data, and the goal is to recover this
low-dimensional subspace. Yet, in practice, it is common to encounter data that were generated by a
mixture of several models rather than a single one, and the goal is to learn a number of models such
that any given data can be explained by at least one of the learned models. It is also common for the
data to contain outliers: data-points that are not well explained by any of the models to be learned,
possibly inserted by external processes.
We briefly explain our approach (presented in detail in the next section). At its center is the problem
of assigning data points to models, with the main consideration that every model be consistent with
many of the data points. Thus we seek for each model a distribution of weights over the data
points, and encourage even weights by regularizing these distributions (hence our approach is called
Regularized Weighting; abbreviated as RW). A data point that is inconsistent with all available
models will receive lower weight and even sometimes be ignored. The value of ignoring difficult
points is illustrated by contrast with the common approach, which we consider next.
The arguably most widely applied approach for multiple model learning is the minimum loss approach, also known as Lloyd's algorithm [1] in clustering, where the goal is to find a set of models, associate each data point to one model (in so-called "soft" variations, one or more models), such
that the sum of losses over data points is minimal. Notice that in this approach, every data point
must be explained by some model. This leaves the minimum loss approach vulnerable to outliers
and corruptions: If one data point goes to infinity, so must at least one model.
Our remedy to this is relaxing the requirement that each data point must be explained. Indeed,
as we show later, the RW formulation is provably robust in the case of clustering, in the sense of
having non-zero breakdown point [2]. Moreover, we also establish other desirable properties, both
computational and statistical, of the proposed method. Our main contributions are:
1. A new formulation of the sub-task of associating data points to models as a convex optimization problem for setting weights. This problem favors broadly based models, and
may ignore difficult data points entirely. We formalize such properties of optimal solutions
through analysis of a strongly dual problem. The remaining results are characteristics of
this approach.
2. Outlier robustness. We show that the breakdown point of the proposed method is bounded
away from zero for the clustering case. The breakdown point is a concept from robust
statistics: it is the fraction of adversarial outliers that an algorithm can sustain without
having its output arbitrarily changed.
3. Robustness to fat tailed noise. We show, empirically on a synthetic and real world datasets,
that our formulation is more resistant to fat tailed additive noise.
4. Generalization. Ignoring some of the data, in general, may lead to overfitting. We show that
when the parameter ? (defined in Section 2) is appropriately set, this essentially does not
occur. We prove this through uniform convergence bounds resilient to the lack of efficient
algorithms to find near-optimal solutions in multiple model learning.
5. Computational complexity. As almost every method to tackle the multiple model learning
problem, we use alternating optimization of the models and the association (weights), i.e.,
we iteratively optimize one of them while fixing the other. Our formulation for optimizing
the association requires solving a quadratic problem in kn variables, where k is the number
of models and n is the number of points. Compared to O(kn) steps for some formulations,
this seems expensive. We show how to take advantage of the special problem structure and
repetition in the alternating optimization subproblems to reduce this cost.
1.1 Relation to previous work
Learning multiple models is by no means a new problem. Indeed, special examples of multi-model
learning have been studied, including k-means clustering [3, 4, 5] (and many other variants thereof),
Gaussian mixture models (and extensions) [6, 7] and subspace segmentation problem [8, 9, 10]; see
Section 2 for details. Fewer studies attempt to cross problem type boundaries. A general treatment
of the sample complexity of problems that can be interpreted as learning a code book (which encompasses some types of multiple model learning) is [11]. Slightly closer to our approach is [12],
whose formulation generalizes a common approach to different model types and permits for problem specific regularization, giving both generalization results and algorithmic iteration complexity
results. A probabilistic and generic algorithmic approach to learning multiple models is Expectation
Maximization [13].
Algorithms for dealing with outliers and multiple models together have been proposed in the context
of clustering [14]. Reference [15] provides an example of an algorithm for outlier resistance in
learning a single subspace, and partly inspires the current work. In contrast, we abstract almost
completely over the class of models, allowing both algorithms and analysis to be easily reused to
address new classes.
2 Formulation
In this section we show how multi-model learning problems can be formed from simple estimation
problem (where we seek to explain weighted data points by a single model), and imposing a particular joint loss. We contrast the joint loss proposed here to a common one through the weights
assigned by each and their effects on robustness.
We refer throughout to n data points from 𝒳 by (x_i)_{i=1}^{n} = X ∈ 𝒳^n, which we seek to explain by k models from ℳ denoted (m_j)_{j=1}^{k} = M ∈ ℳ^k. A data set may be weighted by a set of k distributions (w_j)_{j=1}^{k} = W ∈ (Δ_n)^k, where Δ_n ⊂ ℝ^n is the simplex.
Definition 1. A base weighted learning problem is a tuple (𝒳, ℳ, ℓ, A), where ℓ : 𝒳 × ℳ → ℝ₊ is a non-negative convex function, which we call a base loss function, and A : Δ_n × 𝒳^n → ℳ defines an efficient algorithm for choosing a model. Given the weight w and data X, A obtains low weighted empirical loss Σ_{i=1}^{n} w_i ℓ(x_i, m) (the weighted empirical loss need not be minimal, allowing for regularization, which we do not discuss further).

We will often denote the losses of a model m over X as a vector l = (ℓ(x_i, m))_{i=1}^{n}. In the context of a set of models M, we similarly associate the loss vector l_j and the weight vector w_j with the model m_j; this allows us to use the terse notation w_j^⊤ l_j for the weighted loss of model j.
Given a base weighted learning problem, one may pose a multi-model learning problem.
Example 1. The multi-model learning problem covers many examples; here we list a few:
• In k-means clustering, the goal is to partition the training samples into k subsets, where each subset of samples is "close" to their mean. In our terminology, this is a multi-model learning problem where the base learning problem is (ℝ^d, ℝ^d, (x, m) ↦ ‖x − m‖_2^2, A), where A finds the weighted mean of the data. The weights allow us to compute each cluster center according to the relevant subset of points.
• In subspace clustering, also known as subspace segmentation, the objective is to group the training samples into subsets, such that each subset can be well approximated by a low-dimensional affine subspace. This is a multi-model learning problem where the corresponding single-model learning problem is PCA.
• Regression clustering [16] extends the standard linear regression problem in that the training samples cannot be explained by one linear function. Instead, multiple linear functions are sought, so that the training samples can be split into groups, and each group can be approximated by one linear function.
• Gaussian Mixture Model considers the case where data points are generated by a mixture of a finite number of Gaussian distributions, and seeks to estimate the mean and variance of each of these distributions, and simultaneously to group the data points according to the distribution that generates it. This is a multi-model learning problem where the respective single model learning problem is estimating the mean and variance of a distribution.
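As a concrete illustration of the base learner A in the regression-clustering case, the weighted empirical loss is minimized by weighted least squares. The helper below is an assumed sketch, not code from the paper; `X` holds the covariates as rows and `y` the responses.

```python
import numpy as np

def weighted_least_squares(w, X, y):
    """Base learner A for regression clustering: returns the coefficient vector
    m minimizing sum_i w_i * (y_i - x_i @ m)**2 via the normal equations."""
    Xw = X * w[:, None]                       # rows of X scaled by their weights
    return np.linalg.solve(X.T @ Xw, Xw.T @ y)
```

Points with zero weight drop out of the fit entirely, which is exactly the mechanism the Regularized Weighting formulation uses to sideline outliers.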
The most common way to tackle the multiple model learning problem is the minimum loss approach, i.e., to minimize the following joint loss

    L(X, M) = (1/n) Σ_{x∈X} min_{m∈M} ℓ(x, m).        (2.1)
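For concreteness, (2.1) with the squared Euclidean base loss is a one-liner. This sketch is illustrative (not from the paper); it takes `X` as an n×d array of points and `M` as a k×d array of models.

```python
import numpy as np

def min_loss(X, M):
    """Joint loss (2.1) with the squared Euclidean base loss: each point is
    charged the loss of its best (closest) model, averaged over the n points."""
    d2 = ((X[:, None, :] - M[None, :, :]) ** 2).sum(axis=2)  # n x k loss matrix
    return d2.min(axis=1).mean()
```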
In terms of weighted base learning problems, each model gives equal weight to all points for which it is the best (lowest loss) model. For example, when ℳ = 𝒳 = ℝ^n with ℓ(x, m) = ‖x − m‖_2^2, the squared Euclidean distance loss yields k-means clustering. In this context, alternating between choosing for each x its loss minimizing model, and adjusting each model to minimize the squared Euclidean loss, yields Lloyd's algorithm (and its generalizations for other problems).
The minimum loss approach requires that every point is assigned to a model; this can potentially cause problems in the presence of outliers. For example, consider the clustering case where the data contain a single outlier point x_i. Let x_i tend to infinity; there will always be some m_j that is closest to x_i, and is therefore (at equilibrium) the average of x_i and some other data points. Then m_j will tend to infinity also. We call this phenomenon mode I of sensitivity to outliers; it is common also to such simple estimators as the mean. Mode II of sensitivity is more particular: as m_j follows x_i
to infinity, it stops being the closest to any other points, until the model is associated only to the
outlier and thus matches it perfectly. Thus under Eq. (2.1) outliers tend to take over models. Mode
II of sensitivity is not clustering specific, and Fig. 2.1 provides an example in multiple regression.
Neither mode is avoided by spreading a point's weight among models as in mixture models [6].
To overcome both modes of sensitivity, we propose a different joint loss, in which the hard constraint
is only that for each model we produce a distribution over data points. A penalty term discourages
the concentration of a model on few points and thus mode II sensitivity. Deweighting difficult points
helps mitigate mode I. For clustering this robustness is formalized in Theorem 2.
[Figure 2.1 — plot: "Robust and Lloyds association methods, quadratic regression"; legend: Data; Minimum loss (0.20 correct on 34 points); Minimum loss (0.20 correct on 4 points); Robust joint loss (0.20 correct on 29 points); Robust joint loss (0.20 correct on 37 points).]

Figure 2.1: Data is a mixture of two quadratics, with positive fat-tailed noise. Under a minimum loss approach an off-the-chart high-noise point suffices to prevent the top broken line from being close to many other data points. Our approach is free to better model the bulk of data. We used a robust (mean absolute deviation) criterion to choose among the results of multiple restarts for each model.
Definition 2. Let u ∈ Δ_n be the uniform distribution. Given k weight vectors, we denote their average v(W) = k^{-1} Σ_{j=1}^{k} w_j, and just v when W is clear from context. The Regularized Weighting multiple model learning loss is a function L_α : 𝒳^n × ℳ^k × (Δ_n)^k → ℝ defined as

    L_α(X, M, W) = α ‖u − v(W)‖_2^2 + k^{-1} Σ_{j=1}^{k} l_j^⊤ w_j        (2.2)

which in particular defines the weight setting subproblem:

    L_α(X, M) = min_{W ∈ (Δ_n)^k} L_α(X, M, W).        (2.3)
As its name suggests, our formulation regularizes distributions of weight over data points; specifically, the w_j are controlled by forcing their average v to be close to the uniform distribution u. Our goal is for each model to represent many data points, so weights should not be concentrated. We avoid this by penalizing the squared Euclidean distance from uniformity, which emphasizes points receiving weight much higher than the natural n^{-1}, and essentially ignores small variations around n^{-1}. The effect is later formalized in Lemma 1, but to illustrate we next calculate the penalties for two stylized cases. This will also produce the first of several hints about the appropriate range of values for α.
In the following examples, we will consider a set of εnk^{-1} data points, recalling that nk^{-1} is the natural number of points per model. To avoid letting a few high-loss outliers skew our models (mode I of sensitivity), we prefer instead to give them zero weight. Take α ≤ k/2; then the cost of ignoring some εnk^{-1} points in all models is at most εn^{-1} · 2αk^{-1} ≤ εn^{-1}. In contrast, basing a model
1.6
Clustering in 1D w. varied class size. ?/n = 0.335
model 1 weighs 21 points
model 2 weighs 39 points
Weight assigned by model, scaled: wj,i ? n/k
1.4
1.2
1.0
0.8
0.6
0.4
0.2
0.0
?0.2
?0.6
?0.4
?0.2
0.0
Location
0.2
0.4
0.6
Figure 2.2: For each location (horizontal) of a data point, the vertical locations of corresponding
markers gives the weights assigned by each model. The left cluster is half as populated as the right,
thus must give weights about twice as large. Within each model, weights are affine in the loss
(see Section 2.1), causing the concave parabolas. The gap allowed between the maximal weights
of different models allows a point from the right cluster to be adopted by the left model, lowering
overall penalty at a cost to weighted losses.
on very few points (mode II of sensitivity) should be avoided. If the jth model is fit to only βnk^{-1} points for β ≤ 1, the penalty from those points will be at least (approximately) αn^{-1} · β^{-1}k^{-1}. We can make the first situation cheap and the second expensive (per model) in comparison to the empirical weighted loss term by choosing

    αn^{-1} ≈ k^{-1} Σ_{j=1}^{k} w_j^⊤ l_j.        (2.4)
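The two stylized penalty calculations above are easy to check numerically. The sizes below (n = 100 points, k = 2 models, α = 1, ε = β = 0.1) are hypothetical values chosen for illustration, not values from the paper.

```python
import numpy as np

# Hypothetical sizes: n = 100 points, k = 2 models, alpha = 1, eps = beta = 0.1.
n, k, alpha = 100, 2, 1.0
u = np.full(n, 1.0 / n)
penalty = lambda v: alpha * ((u - v) ** 2).sum()  # the regularization term

# Mode I remedy: all models ignore the same eps*n/k points entirely;
# the remaining points share the freed weight evenly.
m = int(0.1 * n / k)  # 5 ignored points
v1 = np.full(n, 1.0 / (n - m))
v1[:m] = 0.0
print(penalty(v1))  # ~5.3e-4, below the eps/n * 2*alpha/k = 1e-3 estimate

# Mode II: model 1 concentrates on only beta*n/k points; model 2 stays uniform.
w1 = np.zeros(n)
w1[:m] = 1.0 / m
v2 = (w1 + u) / k
print(penalty(v2))  # ~4.7e-2, near the alpha/(n*beta*k) = 5e-2 estimate
```

As the text argues, ignoring a few points is cheap while concentrating a model on a few points is roughly two orders of magnitude more expensive here.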
On the flip side, highly unbalanced classes in the data can be challenging to our approach. Consider
the case where a model has low loss for fewer than n/(2k) points: spreading its weight only over
them can incur very high costs due to the regularization term, which might be lowered by including
some higher-loss points that are indeed better explained by another model (see Figure 2.2 on page
5 for an illustration). This challenge might be solved by explicitly and separately estimating the
relative frequencies of the classes, and penalizing deviations from the estimates rather than from
equal frequencies, as is done in mixture models [6]; this is left for future study.
2.1 Two properties of Regularized Weighting
Two properties of our formulation result from an analysis (in Appendix A, for lack of space) of a dual problem of the weight setting problem (2.3). These provide the basis for later theory by relating v, losses, and α. The first illustrates the uniform control of v:

Lemma 1. Let all losses be in [0, B]; then in an optimal solution to (2.3), we have

    ‖v − u‖_∞ ≤ B/(2α).
This strengthens the conclusion of (2.4): if outliers are present and αn^{-1} > 2B, where B bounds losses on all points including outliers, weights will be almost uniform (enabling mode I of sensitivity). On the positive side, this lemma plays an important role in the generalization and iteration complexity results presented in the sequel. A more detailed view of v_i for individual points is provided by the second property.
By P_C we denote the orthogonal projection mapping into a convex set C.

Lemma 2. For an optimal solution to (2.3), there exists t ∈ ℝ^k such that

    v = P_{Δ_n}(u − min_j (l_j − t_j)/(2α)),

where min_j should be read as operating element-wise, and in particular w_{j,i} > 0 implies that j minimizes the ith element.
This establishes that the average weight (when positive) is affine in the loss; the concave parabolas visible in Figure 2.2 on page 5 are an example. We also learn that the role of α in solutions is determining the coefficient in the affine relation. Distinct t allow for different densities of points around different models. One observation from this lemma is that if a particular model j gives weight to some point i, then every point with lower loss ℓ(x_{i′}, m_j) under that model will receive at least that much weight. This property plays a key role in the proof of robustness to outliers in clustering.
2.2 An alternating optimization algorithm
The RW multiple model learning loss, like other MML losses, is not convex. However, the weight
setting problem (2.3) is convex when we fix the models, and an efficient procedure A is assumed
for solving a weighted base learning problem for a model, supporting an alternating optimization
approach, as in Algorithm 1; see Section 5 for further discussion.
Data: X
Result: The model-set M
M ← initialModels(X);
repeat
    M′ ← M;
    W ← arg min_{W′} L_α(X, M, W′);
    m_j ← A(w_j, X)  (∀j ∈ [k]);
until L(X, M′) − L(X, M) < ε;
Algorithm 1: Alternating optimization for Regularized Weighting
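Algorithm 1 is straightforward to prototype for the k-means base problem. The sketch below is not the authors' implementation (the paper notes that the special structure of the weight subproblem can be exploited for much faster solutions); it simply solves the convex weight-setting problem (2.3) by projected gradient descent with a heuristic step size.

```python
import numpy as np

def project_simplex(w):
    """Euclidean projection of a vector onto the probability simplex."""
    n = w.size
    u = np.sort(w)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u * np.arange(1, n + 1) > css - 1.0)[0][-1]
    theta = (css[rho] - 1.0) / (rho + 1.0)
    return np.maximum(w - theta, 0.0)

def set_weights(L, alpha, steps=500):
    """Approximately solve the convex weight-setting problem (2.3) by projected
    gradient descent. L is the k x n matrix of base losses; returns W with each
    row a distribution over the n points."""
    k, n = L.shape
    W = np.full((k, n), 1.0 / n)
    u = np.full(n, 1.0 / n)
    lr = k / (2.0 * alpha)  # heuristic step from the penalty's curvature; alpha > 0
    for _ in range(steps):
        v = W.mean(axis=0)
        grad = L / k + (2.0 * alpha / k) * (v - u)  # gradient w.r.t. each row w_j
        W = np.array([project_simplex(w) for w in W - lr * grad])
    return W

def rw_kmeans(X, k, alpha, iters=20, seed=0):
    """Algorithm 1 for the squared Euclidean base loss: alternate between the
    weight step and the base learner A (here, a weighted mean per model)."""
    rng = np.random.default_rng(seed)
    M = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        L = ((X[None, :, :] - M[:, None, :]) ** 2).sum(axis=2)  # k x n losses
        W = set_weights(L, alpha)
        M = W @ X  # each row of W sums to one, so each model is a weighted mean
    return M, W
```

Because each w_j lies in the simplex, a model is never forced to place weight on a far-away outlier, which is the mechanism behind the robustness results discussed next.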
3 Breakdown point in clustering
Our formulation allows a few difficult outliers to be ignored if the right models are found; does this happen in practice? Figure 2.1 on page 4 provides a positive example in regression clustering, and a more substantial empirical evaluation on subspace clustering is in Appendix B. In the particular case of clustering with the squared Euclidean loss, robustness benefits can be proved.
We use the "breakdown point" (the standard robustness measure in the literature of robust statistics [2]; see also [17, 18] and many others) to quantify the robustness property of the proposed formulation. The breakdown point of an estimator is the smallest fraction of bad observations that can cause the estimator to take arbitrarily aberrant values, i.e., the smallest fraction of outliers needed to completely break an estimator.
For the case of clustering with the squared Euclidean distance base loss, the min-loss approach corresponds to k-means clustering, which is not robust in this sense; its breakdown point is 0. The non-robustness of k-means has led to the development of many formulations of robust clustering; see the review by [14]. In contrast, we show that our joint loss yields an estimator that has a non-zero breakdown point, and is hence robust.
In general, a squared-loss clustering formulation that assigns equal weight to different data points cannot be robust: as one data point tends to infinity, so must at least one model. This applies to our model if α is allowed to tend to infinity. On the other hand, if α is too low, it becomes possible for each model to assign all of its weight to a single point, which may well be an outlier tending to infinity. Thus, it is to be expected that the robustness result below requires α to belong to a data-dependent range.
Theorem 2. Let X = M be a Euclidean space in which we perform clustering with the loss ℓ(x_i, m_j) = ‖m_j − x_i‖² and k centers. Denote by R the radius of any ball containing the inliers, and by ε < k^{-2}/22 the proportion of outliers allowed to be outside the ball. Denote also by r a radius such that there exists M* = {m*_1, ..., m*_k} such that each inlier is within a distance r of some model m*_j and each m*_j approximates (i.e., is within a distance r of) at least n/(2k) inliers; this always holds for some r ≤ R.
For any α ∈ (nr², 13R²), let (M, W) be minimizers of L_α(X, M, W). Then we have ‖m_j − x_i‖₂ ≤ 6R for every model m_j and inlier x_i.
Theorem 2 shows that when the number of outliers is not too high, the learned model, regardless of the magnitude of the outliers, is close to the inliers and hence cannot be arbitrarily bad. In particular, the theorem implies a non-zero breakdown point for any α > nr²; taking too high an α merely forces a larger but still finite R. If the inliers are amenable to balanced clustering, so that r ≪ R, the regime of non-zero breakdown is extended to smaller α.
The proof follows three steps. First, due to the regularization term, for any model, the total weight on the few outliers is at most 1/3. Second, an optimal model must thus be at least twice as close to the weighted average of its inliers as it is to the weighted average of its outliers. This step depends critically on the squared Euclidean loss being used. Lastly, this gap in distances cannot be large in absolute terms, due to Lemma 2; an outlier that is much farther from the model than the inliers must receive weight zero. For the proof see Appendix C of the supplementary material.
4 Regularized Weighting formulation sample complexity
An important consideration in learning algorithms is controlling overfitting, in which a model is found that is appropriate for some data, rather than for the source that generates the data. The current formulation seems particularly vulnerable, since it allows data to be ignored, in contrast to most generalization bounds, which assume equal weight is given to all data.
Our loss L_α(X, M) differs from common losses in allowing data points to be differently weighted. Thus, to obtain the sample complexity of our formulation we need to bound the difference that a single sample can make to the loss. For a common empirical average loss this is bounded by Bn^{-1}, where B is the maximal value of the non-negative loss on a single data point, and in our case by
B‖v‖_∞, because if X and X′ differ only in the i-th element, then:

    |L_α(X′, M, W) − L_α(X, M, W)| = k^{-1} | Σ_{j=1}^{k} w_{j,i} (l′_{j,i} − l_{j,i}) | ≤ B k^{-1} Σ_{j=1}^{k} w_{j,i} = B v_i.
Whenever W is optimal with respect to either X or X′, Lemma 1 provides the necessary bound on ‖v‖_∞. Along with the covering numbers defined next and standard arguments (found in the supplementary material), this bound on differences provides us with the desired generalization result.
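As a quick numerical sanity check of this bounded-difference step: with W fixed, the regularization term does not depend on the data, so only the weighted-average part of the loss changes when one point is perturbed. The sketch below assumes that part has the form k^{-1} Σ_j ⟨w_j, l_j⟩ with base losses truncated to [0, B] (a reconstruction of the objective, not verbatim from the paper).

```python
import numpy as np

rng = np.random.default_rng(0)
n, k, B = 8, 3, 1.0
X = rng.uniform(size=n)                    # data points (1-D for simplicity)
M = rng.uniform(size=k)                    # models
Wt = rng.uniform(size=(k, n))
Wt /= Wt.sum(axis=1, keepdims=True)        # rows on the simplex

def weighted_part(X, M, Wt):
    # k^{-1} sum_j <w_j, l_j>, with base losses truncated to [0, B].
    L = np.clip((X[None, :] - M[:, None]) ** 2, 0.0, B)
    return (Wt * L).sum() / len(M)

i = 4
Xp = X.copy()
Xp[i] = rng.uniform()                      # X' differs from X only in element i
v = Wt.mean(axis=0)                        # average weight vector
diff = abs(weighted_part(Xp, M, Wt) - weighted_part(X, M, Wt))
assert diff <= B * v[i] + 1e-12            # the bounded-difference inequality
```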
Definition 3 (Covering numbers for multiple models). We shall endow M^k with the metric

    d_∞(M, M′) = max_{j∈[k]} ‖ ℓ(·, m_j) − ℓ(·, m′_j) ‖_∞

and define its covering number N_ε(M^k) as the minimal cardinality of a set M^k_ε such that M^k ⊆ ⋃_{M′ ∈ M^k_ε} B(M′, ε).
The bound depends on an upper bound on base losses denoted B; this should be viewed as fixing
a scale for the losses and is standard where losses are not naturally bounded (e.g., classical bounds
on SVM kernel regression [19] use bounded kernels). Thus, we have the following generalization
result, whose proof can be found in Appendix D of the supplementary material.
Theorem 3. Let the base losses be bounded in the interval [0, B], let M^k have covering numbers N_ε(M^k) ≤ (C/ε)^{dk}, and let λ = nB/(2α). Then we have, with probability at least 1 − exp( dk log(2C/ε) − 2nε² / (B²(1+λ)²) ):

    ∀M ∈ M^k:  |L_α(X, M) − E_{X′∼D^n} L_α(X′, M)| ≤ 3ε.

5 The weight assignment optimization step
As is typical in multi-model learning, simultaneously optimizing the models and the association of the data (in our formulation, the weights) is computationally hard [20]; thus Algorithm 1 alternates between optimizing the weights with the models fixed, and optimizing the models with the weights fixed. We therefore show how to efficiently solve a sequence of weight-setting problems, minimizing L_α(X, M_i, W) over W, where the M_i typically converge.
We propose to solve each instance of weight setting using gradient methods, in particular FISTA [21]. This has two advantages compared to interior point methods. First, the memory use of gradient methods grows only linearly with the dimension, which is O(kn) in problem (2.3), allowing scaling to large data sets. Second, gradient methods have "warm start" properties: the number of iterations required is proportional to the distance between the initial and optimal solutions, which is useful both due to the bounds on ‖v − u‖_∞ and when the M_i converge.
Theorem 4. Given data and models (X, M), there exists an algorithm that finds a weight matrix W such that L_α(X, M, W) − L_α(X, M) ≤ ε using O(√(kα/ε)) iterations, each costing O(kn) time and memory. If α ≥ Bn/4 then O(k √(αn^{-1}/ε)) iterations suffice.
The first bound might suggest that typical settings of α ∝ n require the number of iterations to increase with the number of points n; the second bound shows this is not always necessary.
This result can be realized by applying the algorithm FISTA, with starting point w_j = u and with 2αk^{-2} as a bound on the Lipschitz constant of the gradient. For the first bound we estimate the distance from u by the radius of the product of k simplices; for the second we use Lemma 1 (see Appendix E).
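A minimal sketch of this FISTA weight step, warm-started at the uniform weights as described above. The objective is my reconstruction of (2.3), k^{-1} Σ_j ⟨w_j, l_j⟩ + α‖v − u‖² over the product of simplices (not verbatim from the paper); the simplex projection plays the role of the proximal operator of the constraint.

```python
import numpy as np

def _proj_simplex(w):
    # Euclidean projection onto the probability simplex.
    s = np.sort(w)[::-1]
    css = np.cumsum(s)
    rho = np.nonzero(s - (css - 1.0) / np.arange(1, w.size + 1) > 0)[0][-1]
    return np.maximum(w - (css[rho] - 1.0) / (rho + 1.0), 0.0)

def fista_weights(L, alpha, iters=300):
    """Accelerated projected gradient (FISTA [21]) for the weight step.

    L is the k x n matrix of losses l_{j,i}.  The smooth part is the
    reconstructed objective; the gradient is evaluated at the extrapolated
    point Y, and the projection maps back to the feasible set.
    """
    k, n = L.shape
    u = np.full(n, 1.0 / n)
    W = np.full((k, n), 1.0 / n)           # warm start at the uniform weights
    Y, t = W.copy(), 1.0
    step = 1.0 / (2.0 * alpha / k + L.max() / k + 1e-12)
    for _ in range(iters):
        v = Y.mean(axis=0)
        G = (L + 2.0 * alpha * (v - u)[None, :]) / k
        W_new = np.array([_proj_simplex(y) for y in (Y - step * G)])
        t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        Y = W_new + ((t - 1.0) / t_new) * (W_new - W)
        W, t = W_new, t_new
    return W
```

The momentum iterate Y may leave the simplex; only the projected iterate W is feasible, which is the standard constrained-FISTA pattern.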
6 Conclusion
In this paper, we proposed and analyzed, from a general perspective, a new formulation for learning multiple models that explain much of the data well. It is based on associating with each model a regularized weight distribution over the data it explains well. A main advantage of the new formulation is its robustness to fat-tailed noise and outliers: we demonstrated this empirically for regression clustering and subspace clustering tasks, and proved that for the important case of clustering, the proposed method has a non-trivial breakdown point, in sharp contrast to standard methods such as k-means. We further provided generalization bounds and described an optimization procedure to solve the formulation at scale.
Our main motivation comes from the fast-growing attention to analyzing data using multiple models, under the names of k-means clustering, subspace segmentation, and Gaussian mixture models, to list a few. While all these learning schemes share common properties, they are largely studied separately, partly because these problems come from different sub-fields of machine learning. We believe general methods with desirable properties such as generalization and robustness will supply ready tools for new applications using other model types.
Acknowledgments
H. Xu is partially supported by the Ministry of Education of Singapore through AcRF Tier Two
grant R-265-000-443-112 and NUS startup grant R-265-000-384-133. This research was funded (in
part) by the Intel Collaborative Research Institute for Computational Intelligence (ICRI-CI).
References
[1] S. Lloyd. Least squares quantization in PCM. IEEE Transactions on Information Theory, 28(2):129-137, 1982.
[2] P. J. Huber. Robust Statistics. John Wiley & Sons, New York, 1981.
[3] J. A. Hartigan and M. A. Wong. Algorithm AS 136: A k-means clustering algorithm. Journal of the Royal Statistical Society, Series C (Applied Statistics), 28(1):100-108, 1979.
[4] R. Ostrovsky, Y. Rabani, L. J. Schulman, and C. Swamy. The effectiveness of Lloyd-type methods for the k-means problem. In Foundations of Computer Science, 2006. FOCS '06. 47th Annual IEEE Symposium on, pages 165-176. IEEE, 2006.
[5] P. Hansen, E. Ngai, B. K. Cheung, and N. Mladenovic. Analysis of global k-means, an incremental heuristic for minimum sum-of-squares clustering. Journal of Classification, 22(2):287-310, 2005.
[6] G. J. McLachlan and K. E. Basford. Mixture Models: Inference and Applications to Clustering. Marcel Dekker, New York, 1998.
[7] Mikhail Belkin and Kaushik Sinha. Polynomial learning of distribution families. In FOCS 2010: Proceedings of the 51st Annual IEEE Symposium on Foundations of Computer Science, pages 103-112. IEEE Computer Society, 2010.
[8] G. Chen and M. Maggioni. Multiscale geometric and spectral analysis of plane arrangements. In Computer Vision and Pattern Recognition (CVPR), 2011 IEEE Conference on, pages 2825-2832. IEEE, 2011.
[9] Yaoliang Yu and Dale Schuurmans. Rank/norm regularization with closed-form solutions: Application to subspace clustering. In Fabio Gagliardi Cozman and Avi Pfeffer, editors, UAI, pages 778-785. AUAI Press, 2011.
[10] M. Soltanolkotabi and E. J. Candès. A geometric analysis of subspace clustering with outliers. arXiv preprint arXiv:1112.4258, 2011.
[11] A. Maurer and M. Pontil. K-dimensional coding schemes in Hilbert spaces. IEEE Transactions on Information Theory, 56(11):5839-5846, 2010.
[12] A. J. Smola, S. Mika, B. Schölkopf, and R. C. Williamson. Regularized principal manifolds. The Journal of Machine Learning Research, 1:179-209, 2001.
[13] A. Dempster, N. Laird, and D. Rubin. Maximum likelihood from incomplete data via the EM algorithm. Journal of the Royal Statistical Society, Series B, 39(1):1-38, 1977.
[14] R. N. Davé and R. Krishnapuram. Robust clustering methods: a unified view. IEEE Transactions on Fuzzy Systems, 5(2):270-293, 1997.
[15] Huan Xu, Constantine Caramanis, and Shie Mannor. Outlier-robust PCA: The high-dimensional case. IEEE Transactions on Information Theory, 59(1):546-572, 2013.
[16] B. Zhang. Regression clustering. In Data Mining, 2003. ICDM 2003. Third IEEE International Conference on, pages 451-458. IEEE, 2003.
[17] P. J. Rousseeuw and A. M. Leroy. Robust Regression and Outlier Detection. John Wiley & Sons, New York, 1987.
[18] R. A. Maronna, R. D. Martin, and V. J. Yohai. Robust Statistics: Theory and Methods. John Wiley & Sons, New York, 2006.
[19] Olivier Bousquet and André Elisseeff. Stability and generalization. The Journal of Machine Learning Research, 2:499-526, 2002.
[20] M. Mahajan, P. Nimbhorkar, and K. Varadarajan. The planar k-means problem is NP-hard. WALCOM: Algorithms and Computation, pages 274-285, 2009.
[21] A. Beck and M. Teboulle. A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM Journal on Imaging Sciences, 2(1):183-202, 2009.
[22] Roberto Tron and René Vidal. A benchmark for the comparison of 3-D motion segmentation algorithms. In CVPR. IEEE Computer Society, 2007.
[23] J. Duchi, S. Shalev-Shwartz, Y. Singer, and T. Chandra. Efficient projections onto the l1-ball for learning in high dimensions. In Proceedings of the 25th International Conference on Machine Learning, pages 272-279, 2008.
Regularized Spectral Clustering under the
Degree-Corrected Stochastic Blockmodel
Karl Rohe
Department of Statistics
University of Wisconsin-Madison
Madison, WI
[email protected]
Tai Qin
Department of Statistics
University of Wisconsin-Madison
Madison, WI
[email protected]
Abstract
Spectral clustering is a fast and popular algorithm for finding clusters in networks. Recently, Chaudhuri et al. [1] and Amini et al. [2] proposed inspired
variations on the algorithm that artificially inflate the node degrees for improved
statistical performance. The current paper extends the previous statistical estimation results to the more canonical spectral clustering algorithm in a way that
removes any assumption on the minimum degree and provides guidance on the
choice of the tuning parameter. Moreover, our results show how the "star shape" in the eigenvectors, a common feature of empirical networks, can be explained by the Degree-Corrected Stochastic Blockmodel and the Extended Planted Partition model, two statistical models that allow for highly heterogeneous degrees.
Throughout, the paper characterizes and justifies several of the variations of the
spectral clustering algorithm in terms of these models.
1 Introduction
Our lives are embedded in networks (social, biological, communication, etc.) and many researchers
wish to analyze these networks to gain a deeper understanding of the underlying mechanisms. Some
types of underlying mechanisms generate communities (aka clusters or modularities) in the network.
As machine learners, our aim is not merely to devise algorithms for community detection, but also
to study the algorithm's estimation properties, to understand if and when we can make justifiable inferences from the estimated communities to the underlying mechanisms. Spectral clustering is a fast
and popular technique for finding communities in networks. Several previous authors have studied
the estimation properties of spectral clustering under various statistical network models (McSherry
[3], Dasgupta et al. [4], Coja-Oghlan and Lanka [5], Ames and Vavasis [6], Rohe et al. [7], Sussman
et al. [8] and Chaudhuri et al. [1]). Recently, Chaudhuri et al. [1] and Amini et al. [2] proposed two
inspired ways of artificially inflating the node degrees in ways that provide statistical regularization
to spectral clustering.
This paper examines the statistical estimation performance of regularized spectral clustering under
the Degree-Corrected Stochastic Blockmodel (DC-SBM), an extension of the Stochastic Blockmodel (SBM) that allows for heterogeneous degrees (Holland and Leinhardt [9], Karrer and Newman [10]). The SBM and the DC-SBM are closely related to the planted partition model and the
extended planted partition model, respectively. We extend the previous results in the following ways:
(a) In contrast to previous studies, this paper studies the regularization step with a canonical version
of spectral clustering that uses k-means. The results do not require any assumptions on the minimum expected node degree; instead, there is a threshold demonstrating that higher degree nodes
are easier to cluster. This threshold is a function of the leverage scores that have proven essential
in other contexts, for both graph algorithms and network data analysis (see Mahoney [11] and references therein). These are the first results that relate leverage scores to the statistical performance
of spectral clustering. (b) This paper provides more guidance for data analytic issues than previous
approaches. First, the results suggest an appropriate range for the regularization parameter. Second, our analysis gives a (statistical) model-based explanation for the "star-shaped" figure that often
appears in empirical eigenvectors. This demonstrates how projecting the rows of the eigenvector
matrix onto the unit sphere (an algorithmic step proposed by Ng et al. [12]) removes the ancillary
effects of heterogeneous degrees under the DC-SBM. Our results highlight when this step may be
unwise.
Preliminaries: Throughout, we study undirected and unweighted graphs or networks. Define a
graph as G(E, V ), where V = {v1 , v2 , . . . , vN } is the vertex or node set and E is the edge set. We
will refer to node vi as node i. E contains a pair (i, j) if there is an edge between node i and j. The
edge set can be represented by the adjacency matrix A ∈ {0, 1}^{n×n}: A_ij = A_ji = 1 if (i, j) is in the edge set and A_ij = A_ji = 0 otherwise. Define the diagonal matrix D and the normalized graph Laplacian L, both elements of R^{N×N}, in the following way:

    D_ii = Σ_j A_ij ,    L = D^{-1/2} A D^{-1/2}.
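As a tiny concrete example of these definitions (the 4-node graph below is made up for illustration), one can form D and L directly and check basic spectral properties of the normalized operator:

```python
import numpy as np

# A 4-node undirected, unweighted graph (hypothetical example).
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
D = np.diag(A.sum(axis=1))                   # D_ii = sum_j A_ij
D_inv_sqrt = np.diag(1.0 / np.sqrt(np.diag(D)))
L = D_inv_sqrt @ A @ D_inv_sqrt              # L = D^{-1/2} A D^{-1/2}
assert np.allclose(L, L.T)                   # symmetric, since A is
vals = np.linalg.eigvalsh(L)
# Eigenvalues of D^{-1/2} A D^{-1/2} lie in [-1, 1], with 1 attained here
# (the graph is connected, with eigenvector D^{1/2} 1).
assert vals.min() >= -1.0 - 1e-12
assert np.isclose(vals.max(), 1.0)
```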
The following notation will be used throughout the paper: ‖·‖ denotes the spectral norm, and ‖·‖_F denotes the Frobenius norm. For two sequences of variables {x_N} and {y_N}, we say x_N = ω(y_N) if and only if y_N / x_N = o(1). δ(·,·) is the indicator function, where δ_{x,y} = 1 if x = y and δ_{x,y} = 0 if x ≠ y.
2 The Algorithm: Regularized Spectral Clustering (RSC)
For a sparse network with strong degree heterogeneity, standard spectral clustering often fails to
function properly (Amini et al. [2], Jin [13]). To account for this, Chaudhuri et al. [1] proposed the
regularized graph Laplacian, which can be defined as

    L_τ = D_τ^{-1/2} A D_τ^{-1/2} ∈ R^{N×N},  where D_τ = D + τI for τ ≥ 0.
The spectral algorithm proposed and studied by Chaudhuri et al. [1] divides the nodes into two random subsets and only uses the induced subgraph on one of those random subsets to compute the spectral decomposition. In this paper, we will study the more traditional version of the spectral algorithm, which uses the spectral decomposition of the entire matrix (Ng et al. [12]). Define the regularized spectral clustering (RSC) algorithm as follows:
1. Given the input adjacency matrix A, the number of clusters K, and the regularizer τ, calculate the regularized graph Laplacian L_τ. (As discussed later, a good default for τ is the average node degree.)
2. Find the eigenvectors X_1, ..., X_K ∈ R^N corresponding to the K largest eigenvalues of L_τ. Form X = [X_1, ..., X_K] ∈ R^{N×K} by putting the eigenvectors into the columns.
3. Form the matrix X* ∈ R^{N×K} from X by normalizing each of X's rows to have unit length. That is, project each row of X onto the unit sphere of R^K: X*_ij = X_ij / (Σ_k X_ik²)^{1/2}.
4. Treat each row of X* as a point in R^K, and run k-means with K clusters. This creates K non-overlapping sets V_1, ..., V_K whose union is V.
5. Output V_1, ..., V_K. Node i is assigned to cluster r if the i-th row of X* is assigned to V_r.
This paper will refer to "standard spectral clustering" as the above algorithm with L replacing L_τ.
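The five steps translate almost directly into code. The sketch below assumes a dense adjacency matrix and substitutes a plain Lloyd-style k-means for a library implementation; for large sparse networks one would instead use a sparse eigensolver such as scipy.sparse.linalg.eigsh.

```python
import numpy as np

def regularized_spectral_clustering(A, K, tau=None, n_init=20, seed=0):
    """Sketch of RSC steps 1-5 for a dense, symmetric 0/1 adjacency matrix."""
    A = np.asarray(A, dtype=float)
    d = A.sum(axis=1)
    if tau is None:
        tau = d.mean()                                  # default: average degree
    inv_sqrt = 1.0 / np.sqrt(d + tau)
    L_tau = inv_sqrt[:, None] * A * inv_sqrt[None, :]   # D_tau^{-1/2} A D_tau^{-1/2}
    vals, vecs = np.linalg.eigh(L_tau)                  # ascending eigenvalues
    X = vecs[:, np.argsort(vals)[::-1][:K]]             # K largest
    X_star = X / np.maximum(np.linalg.norm(X, axis=1, keepdims=True), 1e-12)
    rng = np.random.default_rng(seed)
    best, best_cost = None, np.inf
    for _ in range(n_init):                             # restarted Lloyd k-means
        centers = X_star[rng.choice(len(X_star), K, replace=False)]
        for _ in range(100):
            d2 = ((X_star[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
            labels = d2.argmin(axis=1)
            for r in range(K):
                if np.any(labels == r):
                    centers[r] = X_star[labels == r].mean(axis=0)
        cost = ((X_star - centers[labels]) ** 2).sum()
        if cost < best_cost:
            best, best_cost = labels, cost
    return best
```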
These spectral algorithms have two main steps: 1) find the principal eigenspace of the (regularized) graph Laplacian; 2) determine the clusters in the low-dimensional eigenspace. Later, we will study RSC under the Degree-Corrected Stochastic Blockmodel and show rigorously how regularization helps to maintain cluster information in step 1) and why normalizing the rows of X helps in step 2). From now on, we use X_τ and X*_τ instead of X and X* to emphasize that they are related to L_τ. Let X_{τ,i} and [X*_τ]_i denote the i-th rows of X_τ and X*_τ.
The next section introduces the Degree-Corrected Stochastic Blockmodel and its matrix formulation.
3 The Degree-Corrected Stochastic Blockmodel (DC-SBM)
In the Stochastic Blockmodel (SBM), each node belongs to one of K blocks. Each edge corresponds
to an independent Bernoulli random variable where the probability of an edge between any two
nodes depends only on the block memberships of the two nodes (Holland and Leinhardt [9]). The
formal definition is as follows.
Definition 3.1. For a node set {1, 2, ..., N}, let z : {1, 2, ..., N} → {1, 2, ..., K} partition the N nodes into K blocks. So, z_i equals the block membership for node i. Let B be a K × K matrix where B_ab ∈ [0, 1] for all a, b. Then under the SBM, the probability of an edge between i and j is P_ij = P_ji = B_{z_i z_j} for any i, j = 1, 2, ..., N. Given z, all edges are independent.
One limitation of the SBM is that it presumes all nodes within the same block have the same expected
degree. The Degree-Corrected Stochastic Blockmodel (DC-SBM) (Karrer and Newman [10]) is
a generalization of the SBM that adds an additional set of parameters (θ_i > 0 for each node i) that control the node degrees. Let B be a K × K matrix where B_ab ≥ 0 for all a, b. Then the probability of an edge between node i and node j is θ_i θ_j B_{z_i z_j}, where θ_i θ_j B_{z_i z_j} ∈ [0, 1] for any i, j = 1, 2, ..., N. The parameters θ_i are arbitrary to within a multiplicative constant that is absorbed into B. To make them identifiable, Karrer and Newman [10] suggest imposing the constraint that, within each block, the summation of the θ_i's is 1. That is, Σ_i θ_i δ_{z_i, r} = 1 for any block label r. Under this constraint, B has an explicit meaning: if s ≠ t, B_st represents the expected number of links between block s and block t, and if s = t, B_st is twice the expected number of links within block s.
Throughout the paper, we assume that B is positive definite.
Under the DC-SBM, define the population matrix 𝒜 = E A. This matrix can be expressed as a product of the matrices

    𝒜 = Θ Z B Z^T Θ,

where (1) Θ ∈ R^{N×N} is a diagonal matrix whose ii-th element is θ_i and (2) Z ∈ {0, 1}^{N×K} is the membership matrix with Z_it = 1 if and only if node i belongs to block t (i.e., z_i = t).
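A small numerical check of this matrix formulation and of the interpretation of B under the identifiability constraint. The block parameters and memberships below are made up for illustration.

```python
import numpy as np

# Hypothetical parameters for a 3-block DC-SBM.
N, K = 12, 3
z = np.repeat(np.arange(K), N // K)         # block memberships
Z = np.eye(K)[z]                            # membership matrix, Z_it = 1 iff z_i = t
rng = np.random.default_rng(0)
theta = rng.uniform(0.5, 2.0, size=N)
for r in range(K):                          # identifiability: theta sums to 1 per block
    theta[z == r] /= theta[z == r].sum()
B = np.array([[1.00, 0.20, 0.10],
              [0.20, 0.90, 0.15],
              [0.10, 0.15, 1.20]])          # symmetric, positive definite
Theta = np.diag(theta)
A_pop = Theta @ Z @ B @ Z.T @ Theta         # population matrix  A = Theta Z B Z^T Theta
assert A_pop.max() <= 1.0                   # valid edge probabilities here
# Under the constraint, B_st equals the expected number of links between
# blocks s and t (for s != t):
s, t = 0, 1
assert np.isclose(A_pop[np.ix_(z == s, z == t)].sum(), B[s, t])
```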
3.1 Population Analysis
Under the DC-SBM, if the partition is identifiable, then one should be able to determine the partition from 𝒜. This section shows that with the population adjacency matrix 𝒜 and a proper regularizer τ, RSC perfectly reconstructs the block partition.
Define the diagonal matrix 𝒟 to contain the expected node degrees, 𝒟_ii = Σ_j 𝒜_ij, and define 𝒟_τ = 𝒟 + τI, where τ ≥ 0 is the regularizer. Then, define the population graph Laplacian ℒ and the population version of the regularized graph Laplacian ℒ_τ, both elements of R^{N×N}, in the following way:

    ℒ = 𝒟^{-1/2} 𝒜 𝒟^{-1/2},    ℒ_τ = 𝒟_τ^{-1/2} 𝒜 𝒟_τ^{-1/2}.
Define D_B ∈ R^{K×K} as a diagonal matrix whose (s, s)-th element is [D_B]_ss = Σ_t B_st. A couple of lines of algebra shows that [D_B]_ss = W_s is the total expected degree of nodes from block s and that 𝒟_ii = θ_i [D_B]_{z_i z_i}. Using these quantities, the next lemma gives an explicit form for ℒ_τ as a product of the parameter matrices.
Lemma 3.2 (Explicit form for ℒ_τ). Under the DC-SBM with K blocks and parameters {B, Z, Θ}, define θ_i^τ as:

    θ_i^τ = θ_i² / (θ_i + τ/W_{z_i}) = θ_i · 𝒟_ii / (𝒟_ii + τ).

Let Θ_τ ∈ R^{n×n} be a diagonal matrix whose ii-th entry is θ_i^τ. Define B_L = D_B^{-1/2} B D_B^{-1/2}; then ℒ_τ can be written

    ℒ_τ = 𝒟_τ^{-1/2} 𝒜 𝒟_τ^{-1/2} = Θ_τ^{1/2} Z B_L Z^T Θ_τ^{1/2}.
Recall that 𝒜 = ΘZBZ^T Θ. Lemma 3.2 demonstrates that ℒ_τ has a similarly simple form that separates the block-related information (B_L) and the node-specific information (Θ_τ). Notice that if τ = 0, then Θ_0 = Θ and ℒ = 𝒟^{-1/2} 𝒜 𝒟^{-1/2} = Θ^{1/2} Z B_L Z^T Θ^{1/2}. The next lemma shows that ℒ_τ has rank K and describes how its eigen-decomposition can be expressed in terms of Z and Θ.
Lemma 3.3 (Eigen-decomposition for ℒ_τ). Under the DC-SBM with K blocks and parameters {B, Z, Θ}, ℒ_τ has K positive eigenvalues. The remaining N − K eigenvalues are zero. Denote the K positive eigenvalues of ℒ_τ as λ_1 ≥ λ_2 ≥ ... ≥ λ_K > 0 and let X_τ ∈ R^{N×K} contain the eigenvector corresponding to λ_i in its i-th column. Define X*_τ to be the row-normalized version of X_τ, similar to X* as defined in the RSC algorithm in Section 2. Then, there exists an orthogonal matrix U ∈ R^{K×K}, depending on τ, such that
1. X_τ = Θ_τ^{1/2} Z (Z^T Θ_τ Z)^{-1/2} U;
2. X*_τ = ZU, and Z_i ≠ Z_j ⇔ Z_i U ≠ Z_j U, where Z_i denotes the i-th row of the membership matrix Z.
This lemma provides four useful facts about the matrices 𝓧_τ and 𝓧*_τ. First, if two nodes i and j belong to the same block, then the corresponding rows of 𝓧_τ (denoted 𝓧_τ^i and 𝓧_τ^j) both point in the same direction, but with different lengths: ||𝓧_τ^i||_2 = ( θ_i^τ / Σ_{j: z_j = z_i} θ_j^τ )^{1/2}. Second, if two nodes i and j belong to different blocks, then 𝓧_τ^i and 𝓧_τ^j are orthogonal to each other. Third, if z_i = z_j, then after projecting these points onto the sphere as in 𝓧*_τ, the rows are equal: [𝓧*_τ]_i = [𝓧*_τ]_j = U_{z_i}. Finally, if z_i ≠ z_j, then the rows are perpendicular: [𝓧*_τ]_i ⊥ [𝓧*_τ]_j. Figure 1 illustrates the geometry of 𝓧_τ and 𝓧*_τ when there are three underlying blocks. Notice that running k-means on the rows of 𝓧*_τ (in the right panel of Figure 1) will return perfect clusters.
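The within-block alignment after row normalization can be checked numerically. A sketch with toy parameters of our own choosing (not the paper's experiment): build a population adjacency, take the top-K eigenvectors of the regularized Laplacian, and project rows onto the unit sphere.

```python
import numpy as np

# Numerical check of the row-normalization geometry on a toy population
# matrix; all parameter values below are our own illustration.
K, per = 3, 5
Z = np.kron(np.eye(K), np.ones((per, 1)))          # block membership matrix
rng = np.random.default_rng(0)
theta = rng.uniform(0.5, 2.0, size=K * per)        # degree parameters
B = 0.05 * np.ones((K, K)) + 0.4 * np.eye(K)       # positive-definite B
A = np.outer(theta, theta) * (Z @ B @ Z.T)         # population adjacency

tau = A.sum() / A.shape[0]                         # regularizer
d_tau = A.sum(axis=1) + tau
L_tau = A / np.sqrt(np.outer(d_tau, d_tau))        # population regularized Laplacian

vals, vecs = np.linalg.eigh(L_tau)                 # eigenvalues ascending
X = vecs[:, -K:]                                   # top-K eigenvectors
Xstar = X / np.linalg.norm(X, axis=1, keepdims=True)
```

Rows of `Xstar` coincide within each block and are (numerically) orthogonal across blocks, so k-means on these rows separates the blocks exactly.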
Note that if Θ were the identity matrix, then the left panel in Figure 1 would look like the right panel in Figure 1; without degree heterogeneity, there would be no star shape and no need for a projection step. This suggests that the star-shaped figure often observed in data analysis stems from the degree heterogeneity in the network.
Figure 1: In this numerical example, A comes from the DC-SBM with three blocks. Each point
corresponds to one row of the matrix X? (in left panel) or X?? (in right panel). The different colors
correspond to three different blocks. The hollow circle is the origin. Without normalization (left
panel), the nodes with same block membership share the same direction in the projected space.
After normalization (right panel), nodes with same block membership share the same position in
the projected space.
4 Regularized Spectral Clustering with the Degree Corrected model

This section bounds the mis-clustering rate of regularized spectral clustering under the DC-SBM. The section proceeds as follows: Theorem 4.1 shows that L_τ is close to 𝓛_τ. Theorem 4.2 shows that X_τ is close to 𝓧_τ and that X*_τ is close to 𝓧*_τ. Finally, Theorem 4.4 shows that the output from RSC with L_τ is close to the true partition in the DC-SBM (using Lemma 3.3).
Theorem 4.1. (Concentration of the regularized Graph Laplacian) Let G be a random graph with independent edges and pr(v_i ∼ v_j) = p_ij. Let δ be the minimum expected degree of G, that is, δ = min_i D_ii. For any ε > 0, if δ + τ > 3 ln N + 3 ln(4/ε), then with probability at least 1 − ε,

||L_τ − 𝓛_τ|| ≤ 4 sqrt( 3 ln(4N/ε) / (δ + τ) ).    (1)
Remark: This theorem builds on the results of Chung and Radcliffe [14] and Chaudhuri et al. [1], which give seemingly similar bounds on ||L − 𝓛|| and ||D_τ^{-1} A − 𝓓_τ^{-1} 𝓐||. However, the previous papers require that δ ≥ c ln N, where c is some constant. This assumption is not satisfied in a large proportion of sparse empirical networks with heterogeneous degrees. In fact, the regularized graph Laplacian is most interesting when this condition fails, i.e. when there are several nodes with very low degrees. Theorem 4.1 only assumes that δ + τ > 3 ln N + 3 ln(4/ε). This is the fundamental reason that RSC works for networks containing some nodes with extremely small degrees. It shows that, by introducing a proper regularizer τ, ||L_τ − 𝓛_τ|| can be well bounded, even with δ very small. Later we will show that a suitable choice of τ is the average degree.
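To get a feel for how the regularizer tightens the concentration bound (1), one can simply evaluate its right-hand side for a few values of δ + τ; the numbers below are our own illustrative choices, not values from the paper.

```python
import numpy as np

# Evaluate the right-hand side of the concentration bound:
# 4 * sqrt(3 ln(4N/eps) / (delta + tau)); purely illustrative numbers.
def laplacian_bound(N, delta_plus_tau, eps):
    return 4.0 * np.sqrt(3.0 * np.log(4.0 * N / eps) / delta_plus_tau)

N, eps = 1222, 0.05
vals = [laplacian_bound(N, dpt, eps) for dpt in (15.0, 30.0, 60.0)]
```

Doubling δ + τ shrinks the bound by a factor of sqrt(2), which is why regularization helps precisely when the minimum degree δ is tiny.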
The next theorem bounds the difference between the empirical and population eigenvectors (and their row-normalized versions) in terms of the Frobenius norm.

Theorem 4.2. Let A be the adjacency matrix generated from the DC-SBM with K blocks and parameters {B, Z, θ}. Let λ_1 ≥ λ_2 ≥ ... ≥ λ_K > 0 be the only K positive eigenvalues of 𝓛_τ. Let X_τ and 𝓧_τ ∈ R^{N×K} contain the top K eigenvectors of L_τ and 𝓛_τ respectively. Define m = min_i { min{ ||X_τ^i||_2, ||𝓧_τ^i||_2 } } as the length of the shortest row in X_τ and 𝓧_τ. Let X*_τ and 𝓧*_τ ∈ R^{N×K} be the row-normalized versions of X_τ and 𝓧_τ, as defined in step 3 of the RSC algorithm.

For any ε > 0 and sufficiently large N, assume that

(a) sqrt( K ln(4N/ε) / (δ + τ) ) ≤ (1/(8√3)) λ_K,    (b) δ + τ > 3 ln N + 3 ln(4/ε),

then with probability at least 1 − ε, the following holds:

||X_τ − 𝓧_τ O||_F ≤ c_0 (1/λ_K) sqrt( K ln(4N/ε) / (δ + τ) ),  and  ||X*_τ − 𝓧*_τ O||_F ≤ c_0 (1/(m λ_K)) sqrt( K ln(4N/ε) / (δ + τ) ).    (2)
The proof of Theorem 4.2 can be found in the supplementary materials.
Next we use Theorem 4.2 to derive a bound on the mis-clustering rate of RSC. To define "mis-clustered", recall that RSC applies the k-means algorithm to the rows of X*_τ, where each row is a point in R^K. Each row is assigned to one cluster, and each of these clusters has a centroid from k-means. Define C_1, ..., C_N ∈ R^K such that C_i is the centroid corresponding to the i'th row of X*_τ. Similarly, run k-means on the rows of the population eigenvector matrix 𝓧*_τ and define the population centroids 𝓒_1, ..., 𝓒_N ∈ R^K. In essence, we consider node i correctly clustered if C_i is closer to 𝓒_i than it is to any other 𝓒_j for all j with Z_j ≠ Z_i.

The definition is complicated by the fact that, if any of λ_1, ..., λ_K are equal, then only the subspace spanned by their eigenvectors is identifiable. Similarly, if any of those eigenvalues are close together, then the estimation results for the individual eigenvectors are much worse than the estimation results for the subspace that they span. Because clustering only requires estimation of the correct subspace, our definition of correctly clustered is amended with the rotation O^T ∈ R^{K×K}, the matrix which minimizes ||X*_τ O^T − 𝓧*_τ||_F. This is referred to as the orthogonal Procrustes problem, and [15] shows how the singular value decomposition gives the solution.

Definition 4.3. If C_i O^T is closer to 𝓒_i than it is to any other 𝓒_j for j with Z_j ≠ Z_i, then we say that node i is correctly clustered. Define the set of mis-clustered nodes:

M = { i : ∃ j ≠ i, s.t. ||C_i O^T − 𝓒_i||_2 > ||C_i O^T − 𝓒_j||_2 }.    (3)
The next theorem bounds the mis-clustering rate |M|/N.

Theorem 4.4. (Main Theorem) Suppose A ∈ R^{N×N} is an adjacency matrix of a graph G generated from the DC-SBM with K blocks and parameters {B, Z, θ}. Let λ_1 ≥ λ_2 ≥ ... ≥ λ_K > 0 be the K positive eigenvalues of 𝓛_τ. Define M, the set of mis-clustered nodes, as in Definition 4.3. Let δ be the minimum expected degree of G. For any ε > 0 and sufficiently large N, assume (a) and (b) as in Theorem 4.2. Then with probability at least 1 − ε, the mis-clustering rate of RSC with regularization constant τ is bounded:

|M|/N ≤ c_1 K ln(N/ε) / ( N m² (δ + τ) λ_K² ).    (4)
Remark 1 (Choice of τ): The quality of the bound in Theorem 4.4 depends on τ through three terms: (δ + τ), λ_K, and m. Setting τ equal to the average node degree balances these terms. In essence, if τ is too small, there is insufficient regularization. Specifically, if the minimum expected degree δ = O(ln N), then we need τ ≥ c(ε) ln N to have enough regularization to satisfy condition (b) on δ + τ. Alternatively, if τ is too large, it washes out significant eigenvalues.

To see that τ should not be too large, note that

C = (Z^T Θ_τ Z)^{1/2} B_L (Z^T Θ_τ Z)^{1/2} ∈ R^{K×K}    (5)

has the same eigenvalues as the largest K eigenvalues of 𝓛_τ (see supplementary materials for details). The matrix Z^T Θ_τ Z is diagonal and its (s, s)'th element is the summation of θ_i^τ within block s. If EM = Ω(N ln N), where M = Σ_i D_ii is the sum of the node degrees, then τ = ω(M/N) sends the smallest diagonal entry of Z^T Θ_τ Z to 0, sending λ_K, the smallest eigenvalue of C, to zero. The trade-off between these two suggests that a proper range of τ is (α EM/N, β EM/N), where 0 < α < β are two constants. Keeping τ within this range guarantees that λ_K is lower bounded by some constant depending only on K. In simulations, we find that τ = M/N (i.e. the average node degree) provides good results. The theoretical results only suggest that this is the correct rate. So, one could adjust this by a multiplicative constant. Our simulations suggest that the results are not sensitive to such adjustments.
Remark 2 (Thresholding m): Mahoney [11] (and references therein) shows how the leverage scores of A and L are informative for both data analysis and algorithmic stability. For L, the leverage score of node i is ||X^i||_2², the squared length of the i'th row of the matrix containing the top K eigenvectors. Theorem 4.4 is the first result that explicitly relates the leverage scores to the statistical performance of spectral clustering. Recall that m² is the minimum of the squared row lengths in X_τ and 𝓧_τ, that is, the minimum leverage score in both L_τ and 𝓛_τ. This appears in the denominator of (4). The leverage scores in 𝓛_τ have an explicit form: ||𝓧_τ^i||_2² = θ_i^τ / Σ_{j: z_j = z_i} θ_j^τ. So, if node i has small expected degree, then θ_i^τ is small, rendering ||𝓧_τ^i||_2 small. This can deteriorate the bound in Theorem 4.4. The problem arises from projecting 𝓧_τ^i onto the unit sphere for a node i with small leverage; it amplifies a noisy measurement. Motivated by this intuition, the next corollary focuses on the high-leverage nodes. More specifically, let m* denote the threshold. Define S to be the subset of nodes whose leverage scores in L_τ and 𝓛_τ, ||X_τ^i|| and ||𝓧_τ^i||, exceed the threshold m*:

S = { i : ||X_τ^i|| ≥ m*, ||𝓧_τ^i|| ≥ m* }.
Then by applying k-means on the set of vectors {[X*_τ]_i : i ∈ S}, we cluster these nodes. The following corollary bounds the mis-clustering rate on S.

Corollary 4.5. Let N_1 = |S| denote the number of nodes in S and define M_1 = M ∩ S as the set of mis-clustered nodes restricted to S. With the same settings and assumptions as in Theorem 4.4, let γ > 0 be a constant and set m* = γ/√N. If N/N_1 = O(1), then by applying k-means on the set of vectors {[X*_τ]_i : i ∈ S}, we have with probability at least 1 − ε that there exists a constant c_2 independent of ε, such that

|M_1|/N_1 ≤ c_2 K ln(N_1/ε) / ( γ² (δ + τ) λ_K² ).    (6)
In the main theorem (Theorem 4.4), the denominator of the upper bound contains m². Since we do not make a minimum degree assumption, this value potentially approaches zero, making the bound useless. Corollary 4.5 replaces N m² with the constant γ², providing a superior bound when there are several small leverage scores.

If λ_K (the K'th largest eigenvalue of 𝓛_τ) is bounded below by some constant and τ = ω(ln N), then Corollary 4.5 implies that |M_1|/N_1 = o_P(1). The above thresholding procedure only clusters the nodes in S. To cluster all of the nodes, define the thresholded RSC (t-RSC) as follows:
(a) Follow steps (1), (2), and (3) of RSC as in Section 2.

(b) Apply k-means with K clusters on the set S = {i : ||X_τ^i||_2 ≥ γ/√N} and assign each of these nodes to one of V_1, ..., V_K. Let C_1, ..., C_K denote the K centroids given by k-means.

(c) For each node i ∉ S, find the centroid C_s such that ||[X*_τ]_i − C_s||_2 = min_{1≤t≤K} ||[X*_τ]_i − C_t||_2. Assign node i to V_s. Output V_1, ..., V_K.
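The t-RSC procedure can be sketched end-to-end. The farthest-point k-means initialization and all helper names below are our own simplifications, not the paper's implementation; the toy population matrix is likewise only for illustration.

```python
import numpy as np

# Hedged sketch of thresholded regularized spectral clustering (t-RSC).
def trsc(A, K, tau=None, gamma=1.0, n_iter=50):
    N = A.shape[0]
    if tau is None:
        tau = A.sum() / N                          # average degree
    d_tau = A.sum(axis=1) + tau
    L_tau = A / np.sqrt(np.outer(d_tau, d_tau))    # regularized Laplacian
    _, vecs = np.linalg.eigh(L_tau)
    X = vecs[:, -K:]                               # top-K eigenvectors
    lev = np.linalg.norm(X, axis=1)                # row lengths (leverage)
    Xstar = X / np.maximum(lev, 1e-12)[:, None]
    S = lev >= gamma / np.sqrt(N)                  # high-leverage node set
    pts = Xstar[S]
    idx = [0]                                      # farthest-point init (ours)
    for _ in range(K - 1):
        dist = ((pts[:, None] - pts[idx][None]) ** 2).sum(-1).min(1)
        idx.append(int(dist.argmax()))
    C = pts[idx].copy()
    for _ in range(n_iter):                        # k-means on S only
        z = ((pts[:, None] - C[None]) ** 2).sum(-1).argmin(1)
        for k in range(K):
            if np.any(z == k):
                C[k] = pts[z == k].mean(0)
    # step (c): every node, in or out of S, goes to its nearest centroid
    return ((Xstar[:, None] - C[None]) ** 2).sum(-1).argmin(1)

# toy population adjacency with three planted blocks
K, per = 3, 5
Z = np.kron(np.eye(K), np.ones((per, 1)))
theta = np.linspace(0.5, 2.0, K * per)
B = 0.05 * np.ones((K, K)) + 0.4 * np.eye(K)
A = np.outer(theta, theta) * (Z @ B @ Z.T)
labels = trsc(A, K)
```

On this noiseless population input the recovered labels coincide with the planted blocks up to a relabeling of the cluster indices.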
Remark 3 (Applying to SC): Theorem 4.4 can be easily applied to the standard SC algorithm under both the SBM and the DC-SBM by setting τ = 0. In this setting, Theorem 4.4 improves upon the previous results for spectral clustering.
Define the four parameter Stochastic Blockmodel SBM (p, r, s, K) as follows: p is the probability
of an edge occurring between two nodes from the same block, r is the probability of an out-block
linkage, s is the number of nodes within each block, and K is the number of blocks.
Because the SBM lacks degree heterogeneity within blocks, the rows of X within the same block already share the same length. So, it is not necessary to project the X^i's to the unit sphere. Under the four-parameter model, λ_K = (K[r/(p − r)] + 1)^{-1} (Rohe et al. [7]). Using Theorem 4.4, with p and r fixed and p > r, and applying k-means to the rows of X, we have

|M|/N = O_P( K² ln N / N ).    (7)

If K = o( sqrt(N/ln N) ), then |M|/N → 0 in probability. This improves the previous results, which required K = o(N^{1/3}) (Rohe et al. [7]). Moreover, it makes the results for spectral clustering comparable to the results for the MLE in Choi et al. [16].
5 Simulation and Analysis of Political Blogs
This section compares five different methods of spectral clustering. Experiment 1 generates networks from the DC-SBM with a power-law degree distribution. Experiment 2 generates networks
from the standard SBM. Finally, the benefits of regularization are illustrated on an empirical network
from the political blogosphere during the 2004 presidential election (Adamic and Glance [17]).
The simulations compare (1) standard spectral clustering (SC), (2) RSC as defined in Section 2, (3) RSC without projecting X_τ onto the unit sphere (RSC_wp), (4) regularized SC with thresholding (t-RSC), and (5) spectral clustering with perturbation (SCP) (Amini et al. [2]), which applies SC to the perturbed adjacency matrix A_per = A + a11^T. In addition, Experiment 2 compares the performance of RSC on the subset of nodes with high leverage scores (RSC on S) with the other five methods. We set τ = M/N, threshold parameter γ = 1, and a = M/N², except where otherwise specified.
Experiment 1. This experiment examines how degree heterogeneity affects the performance of the spectral clustering algorithms. The θ parameters (from the DC-SBM) are drawn from the power-law distribution with lower bound x_min = 1 and shape parameter β ∈ {2, 2.25, 2.5, 2.75, 3, 3.25, 3.5}. A smaller β indicates greater degree heterogeneity. For each fixed β, thirty networks are sampled. In each sample, K = 3 and each block contains 300 nodes (N = 900). Define the signal-to-noise ratio to be the expected number of in-block edges divided by the expected number of out-block edges. Throughout the simulations, the SNR is set to three and the expected average degree is set to eight.
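A sampler in the spirit of Experiment 1 can be sketched as follows; the dimensions are shrunk for speed and the sampling routine is our own construction, not the authors' code.

```python
import numpy as np

# Sample a symmetric adjacency matrix from the DC-SBM with power-law
# degree parameters (inverse-CDF Pareto draw with x_min = 1).
def sample_dcsbm(Z, B, theta, rng):
    P = np.clip(np.outer(theta, theta) * (Z @ B @ Z.T), 0.0, 1.0)
    U = rng.random(P.shape)
    upper = (np.triu(U, 1) < np.triu(P, 1)).astype(float)
    return upper + upper.T                     # symmetric, zero diagonal

rng = np.random.default_rng(1)
K, per = 3, 100                                # 3 blocks of 100 nodes each
Z = np.kron(np.eye(K), np.ones((per, 1)))
beta = 2.5
theta = (1.0 - rng.random(K * per)) ** (-1.0 / (beta - 1.0))
B = 0.002 * np.ones((K, K)) + 0.01 * np.eye(K)
A = sample_dcsbm(Z, B, theta, rng)
```

Feeding such samples to SC, RSC, and the other variants, then averaging the mis-clustering rate over repeated draws, reproduces the structure of the experiment on a smaller scale.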
The left panel of Figure 2 plots β against the mis-clustering rate for SC, RSC, RSC_wp, t-RSC, SCP and RSC on S. Each point is the average over 30 sampled networks. Each line represents one method. If a method assigns more than 95% of the nodes to one block, then we consider all nodes to be mis-clustered. The experiment shows that (1) if the degrees are more heterogeneous (β ≤ 3.5), then regularization improves the performance of the algorithms; (2) if β < 3, then RSC and t-RSC outperform RSC_wp and SCP, verifying that the normalization step helps when the degrees are highly heterogeneous; and, finally, (3) uniformly across the settings of β, it is easier to cluster nodes with high leverage scores.
Experiment 2. This experiment compares SC, RSC, RSC wp, t-RSC and SCP under the SBM with
no degree heterogeneity. Each simulation has K = 3 blocks and N = 1500 nodes. As in the
previous experiment, SNR is set to three. In this experiment, the average degree has three different
settings: 10, 21, 30. For each setting, the results are averaged over 50 samples of the network.
The right panel of Figure 2 shows the misclustering rate of SC and RSC for the three different
values of the average degree. SCP, RSC wp, t-RSC perform similarly to RSC, demonstrating that
under the standard SBM (i.e. without degree heterogeneity) all spectral clustering methods perform
comparably. The one exception is that under the sparsest model, SC is less stable than the other
methods.
Figure 2: Left panel: comparison of performance for SC, RSC, RSC_wp, t-RSC, SCP and RSC on S under different degree heterogeneity. Smaller β corresponds to greater degree heterogeneity. Right panel: comparison of performance for SC and RSC under the SBM with different sparsity.
Analysis of Blog Network. This empirical network is comprised of political blogs during the 2004 US presidential election (Adamic and Glance [17]). Each blog has a known label as liberal or conservative. As in Karrer and Newman [10], we symmetrize the network and consider only the largest connected component of 1222 nodes. The average degree of the network is roughly 15. We apply RSC to the data set with τ ranging from 0 to 30. In the case where τ = 0, it is standard spectral clustering. SC assigns 1144 out of 1222 nodes to the same block, failing to detect the ideological partition. RSC detects the partition, and its performance is insensitive to τ: with τ ∈ [1, 30], RSC mis-clusters 80 ± 2 nodes out of 1222.

If RSC is applied to the 90% of nodes with the largest leverage scores (i.e. excluding the nodes with the smallest leverage scores), then the mis-clustering rate among these high-leverage nodes is 44/1100, which is almost 50% lower. This illustrates how the leverage score corresponding to a node can gauge the strength of the clustering evidence for that node relative to the other nodes.
We tried to compare these results to the regularized algorithm in [1]. However, because there are
several very small degree nodes in this data, the values computed in step 4 of the algorithm in [1]
sometimes take negative values. Then, step 5 (b) cannot be performed.
6 Discussion
In this paper, we give theoretical, simulation, and empirical results that demonstrate how a simple
adjustment to the standard spectral clustering algorithm can give dramatically better results for networks with heterogeneous degrees. Our theoretical results add to the current results by studying the
regularization step in a more canonical version of the spectral clustering algorithm. Moreover, our
main results require no assumptions on the minimum node degree. This is crucial because it allows
us to study situations where several nodes have small leverage scores; in these situations, regularization is most beneficial. Finally, our results demonstrate that choosing a tuning parameter close to
the average degree provides a balance between several competing objectives.
Acknowledgements
Thanks to Sara Fernandes-Taylor for helpful comments. Research of TQ is supported by NSF Grant
DMS-0906818 and NIH Grant EY09946. Research of KR is supported by grants from WARF and
NSF grant DMS-1309998.
References

[1] K. Chaudhuri, F. Chung, and A. Tsiatas. Spectral clustering of graphs with general degrees in the extended planted partition model. Journal of Machine Learning Research, pages 1–23, 2012.

[2] Arash A. Amini, Aiyou Chen, Peter J. Bickel, and Elizaveta Levina. Pseudo-likelihood methods for community detection in large sparse networks. 2012.

[3] F. McSherry. Spectral partitioning of random graphs. In Foundations of Computer Science, 2001. Proceedings. 42nd IEEE Symposium on, pages 529–537. IEEE, 2001.

[4] Anirban Dasgupta, John E. Hopcroft, and Frank McSherry. Spectral analysis of random graphs with skewed degree distributions. In Foundations of Computer Science, 2004. Proceedings. 45th Annual IEEE Symposium on, pages 602–610. IEEE, 2004.

[5] Amin Coja-Oghlan and André Lanka. Finding planted partitions in random graphs with general degree distributions. SIAM Journal on Discrete Mathematics, 23(4):1682–1714, 2009.

[6] Brendan P. W. Ames and Stephen A. Vavasis. Convex optimization for the planted k-disjoint-clique problem. arXiv preprint arXiv:1008.2814, 2010.

[7] K. Rohe, S. Chatterjee, and B. Yu. Spectral clustering and the high-dimensional stochastic blockmodel. The Annals of Statistics, 39(4):1878–1915, 2011.

[8] D. L. Sussman, M. Tang, D. E. Fishkind, and C. E. Priebe. A consistent adjacency spectral embedding for stochastic blockmodel graphs. Journal of the American Statistical Association, 107(499):1119–1128, 2012.

[9] P. W. Holland and S. Leinhardt. Stochastic blockmodels: First steps. Social Networks, 5(2):109–137, 1983.

[10] Brian Karrer and Mark E. J. Newman. Stochastic blockmodels and community structure in networks. Physical Review E, 83(1):016107, 2011.

[11] Michael W. Mahoney. Randomized algorithms for matrices and data. In Advances in Machine Learning and Data Mining for Astronomy, pages 647–672. CRC Press, Taylor & Francis Group, 2012.

[12] Andrew Y. Ng, Michael I. Jordan, and Yair Weiss. On spectral clustering: Analysis and an algorithm. Advances in Neural Information Processing Systems, 2:849–856, 2002.

[13] Jiashun Jin. Fast network community detection by SCORE. arXiv preprint arXiv:1211.5803, 2012.

[14] Fan Chung and Mary Radcliffe. On the spectra of general random graphs. The Electronic Journal of Combinatorics, 18(1):P215, 2011.

[15] Peter H. Schönemann. A generalized solution of the orthogonal Procrustes problem. Psychometrika, 31(1):1–10, 1966.

[16] D. S. Choi, P. J. Wolfe, and E. M. Airoldi. Stochastic blockmodels with a growing number of classes. Biometrika, 99(2):273–284, 2012.

[17] Lada A. Adamic and Natalie Glance. The political blogosphere and the 2004 US election: divided they blog. In Proceedings of the 3rd International Workshop on Link Discovery, pages 36–43. ACM, 2005.
REFLEXIVE ASSOCIATIVE MEMORIES

Hendricus G. Loos

Laguna Research Laboratory, Fallbrook, CA 92028-9765

ABSTRACT

In the synchronous discrete model, the average memory capacity of bidirectional associative memories (BAMs) is compared with that of Hopfield memories, by means of a calculation of the percentage of good recall for 100 random BAMs of dimension 64x64, for different numbers of stored vectors. The memory capacity is found to be much smaller than the Kosko upper bound, which is the lesser of the two dimensions of the BAM. On the average, a 64x64 BAM has about 68% of the capacity of the corresponding Hopfield memory with the same number of neurons. Orthonormal coding of the BAM increases the effective storage capacity by only 25%. The memory capacity limitations are due to spurious stable states, which arise in BAMs in much the same way as in Hopfield memories. Occurrence of spurious stable states can be avoided by replacing the thresholding in the back layer of the BAM by another nonlinear process, here called "Dominant Label Selection" (DLS). The simplest DLS is the winner-take-all net, which gives a fault-sensitive memory. Fault tolerance can be improved by the use of an orthogonal or unitary transformation. An optical application of the latter is a Fourier transform, which is implemented simply by a lens.
INTRODUCTION

A reflexive associative memory, also called bidirectional associative memory, is a two-layer neural net with bidirectional connections between the layers. This architecture is implied by Dana Anderson's optical resonator [1], and by similar configurations [2, 3]. Bart Kosko [4] coined the name "Bidirectional Associative Memory" (BAM), and investigated several basic properties [4-6]. We are here concerned with the memory capacity of the BAM, with the relation between BAMs and Hopfield memories [7], and with certain variations on the BAM.

© American Institute of Physics 1988
BAM STRUCTURE

We will use the discrete model in which the state of a layer of neurons is described by a bipolar vector. The Dirac notation [8] will be used, in which |> and <| denote respectively column and row vectors. <a| and |a> are each other's transposes, <a|b> is a scalar product, and |a><b| is an outer product. As depicted in Fig. 1, the BAM has two layers of neurons: a front layer of N neurons with state vector |f>, and a back layer of P neurons with state vector |b>. The bidirectional connections between the layers allow signal flow in two directions. The front stroke gives |b> = s(B|f>), where B is the connection matrix and s( ) is a threshold function, operating at zero. The back stroke results in an upgraded front state <f'| = s(<b|B), which also may be written as |f'> = s(B^T|b>), where the superscript T denotes transposition. We consider the synchronous model, where all neurons of a layer are updated simultaneously, but the front and back layers are updated at different times. The BAM action is shown in Fig. 2.

Fig. 1. BAM structure
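The front and back strokes can be sketched in a few lines. The tie-breaking rule for s(0) and the single-pair outer-product storage rule below are our own illustrative choices, not taken from the paper:

```python
import numpy as np

# Minimal sketch of the synchronous discrete BAM update (bipolar vectors,
# thresholding at zero; s(0) leaves the component unchanged, our convention).
def sign0(x, prev):
    out = np.sign(x)
    keep = (x == 0)
    out[keep] = prev[keep]
    return out

def bam_recall(B, f, n_strokes=24):
    b = np.ones(B.shape[0])
    for _ in range(n_strokes):
        b = sign0(B @ f, b)          # front stroke: |b> = s(B|f>)
        f = sign0(B.T @ b, f)        # back stroke:  |f'> = s(B^T|b>)
    return f, b

# store one pattern pair via an outer-product rule B = |c><d| (illustrative)
d = np.array([1., -1., 1., -1.])
c = np.array([1., 1., -1.])
B = np.outer(c, d)
f_out, b_out = bam_recall(B, d.copy())
```

Starting from the stored front pattern, the iteration settles immediately on the stored pair, illustrating a fixed point of the bidirectional update.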
The forward stroke entalls takIng scalar products between a front
state vector If> and the rows or B, and enter1ng the thresholded results
as elements of the back state vector Ib>. In the back stroke we take
11
f
threshold ing
& reflection
lID
NxP
FIg. 2. BAM act 10n
v
threshold ing
& reflection
~ ~hreShOlding
4J
&
NxN
feedback
V
b
Ftg. 3. Autoassoc1at1ve
memory act10n
scalar products of Ib> w1th column vectors of B, and enter the
thresholded results as elements of an upgraded state vector 1('>. In
contrast, the act10n of an autoassoc1at1ve memory 1s shown 1n F1gure 3.
The BAM may also be described as an autoassoc1at1ve memory5 by
497
concatenating the front and back vectors tnto a s1ngle state vector
Iv>=lf,b>,and by taking the (N+P)x(N+P) connection matrtx as shown in F1g.
4. This autoassoclat1ve memory has the same number of neurons as our
BAM, viz. N+P. The BAM operation where initially only the front state is specified may be obtained with the corresponding autoassociative memory by initially specifying |b> as zero, and by arranging the thresholding operation such that s(0) does not alter the state vector component.

[Fig. 4. BAM as autoassociative memory]

For a Hopfield memory7 the connection matrix is

    H = ( Σ_{m=1}^{M} |m><m| ) - M I ,                                  (1)
where |m>, m = 1 to M, are stored vectors, and I is the identity matrix. Writing the N+P dimensional vectors |m> as concatenations |d_m, c_m>, (1) takes the form

    H = ( Σ_{m=1}^{M} ( |d_m><d_m| + |c_m><c_m| + |d_m><c_m| + |c_m><d_m| ) ) - M I ,    (2)
with proper block placing of submatrices understood. Writing

    K = Σ_{m=1}^{M} |c_m><d_m| ,                                        (3)

    H_d = ( Σ_{m=1}^{M} |d_m><d_m| ) - M I ,   H_c = ( Σ_{m=1}^{M} |c_m><c_m| ) - M I ,    (4)
where the I are identities in appropriate subspaces, the Hopfield matrix
H may be partitioned as shown in Fig. 5. K is just the BAM matrix given
by Kosko5, and previously used by Kohonen9 for linear heteroassociative
memories. Comparison of Figs. 4 and 5 shows that in the synchronous
discrete model the BAM with connection matrix (3) is equivalent to a
Hopfield memory in which the diagonal blocks Hd and Hc have been
deleted. Since the Hopfield memory is robust, this "pruning" may not affect much the associative recall of stored vectors, if M is small; however, on the average, pruning will not improve the memory capacity. It follows that, on the average, a discrete synchronous BAM with matrix (3) can at best have the capacity of a Hopfield memory with the same number of neurons.
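The partition in (2)-(4) can be checked numerically. An illustrative sketch, not from the paper, builds H over concatenated vectors |d_m, c_m> and verifies the block structure shown in Fig. 5:

```python
import numpy as np

rng = np.random.default_rng(1)
N, P, M = 8, 8, 3
d = rng.choice([-1, 1], size=(M, N))   # front (data) parts d_m
c = rng.choice([-1, 1], size=(M, P))   # back (code) parts c_m

# Hopfield matrix over the concatenated vectors |m> = |d_m, c_m>, eq. (1)-(2)
v = np.hstack([d, c])
H = sum(np.outer(vm, vm) for vm in v) - M * np.eye(N + P)

# Blocks from eq. (3)-(4)
K  = sum(np.outer(c[m], d[m]) for m in range(M))              # BAM matrix
Hd = sum(np.outer(d[m], d[m]) for m in range(M)) - M * np.eye(N)
Hc = sum(np.outer(c[m], c[m]) for m in range(M)) - M * np.eye(P)

# H is the block matrix [[Hd, K^T], [K, Hc]]
assert np.allclose(H[:N, :N], Hd)
assert np.allclose(H[N:, N:], Hc)
assert np.allclose(H[N:, :N], K)
assert np.allclose(H[:N, N:], K.T)
print("partition verified")
```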
We have performed computations of the average memory capacity for 64x64 BAMs and for corresponding 128x128 Hopfield memories. Monte Carlo calculations were done for 100 memories, each of which stores M random bipolar vectors. The straight recall of all these vectors was checked, allowing for 24 iterations. For the BAMs, the iterations were started with a forward stroke in which one of the stored vectors |d_m> was used as input. The percentage of good recall and its standard deviation were calculated. The results plotted in Fig. 6 show that the square BAM has about 68% of the capacity of the corresponding Hopfield memory. Although the total number of neurons is the same, the BAM only needs 1/4 of the number of connections of the Hopfield memory. The storage capacity found is much smaller than the Kosko6 upper bound, which is min(N, P).
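A scaled-down version of this Monte Carlo experiment can be sketched as follows. The 16x16 size, the trial count, and the simplified threshold (+1 for inputs >= 0) are choices made here for brevity, so the numbers are only indicative:

```python
import numpy as np

def bam_good_recall(N, P, M, trials=20, n_iter=24, seed=0):
    """Fraction of random BAMs (each storing M bipolar pairs) that recall
    every stored pair exactly from a forward stroke on the data vector."""
    rng = np.random.default_rng(seed)
    good = 0
    for _ in range(trials):
        d = rng.choice([-1, 1], size=(M, N))
        c = rng.choice([-1, 1], size=(M, P))
        B = c.T @ d                                # sum_m |c_m><d_m|
        ok = True
        for m in range(M):
            f = d[m].copy()
            for _ in range(n_iter):
                b = np.where(B @ f >= 0, 1, -1)    # forward stroke
                f = np.where(B.T @ b >= 0, 1, -1)  # back stroke
            if not (np.array_equal(f, d[m]) and np.array_equal(b, c[m])):
                ok = False
                break
        good += ok
    return good / trials

for M in (1, 2, 4):
    print(M, bam_good_recall(16, 16, M))
```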
[Fig. 5. Partitioned Hopfield matrix]

[Fig. 6. Percentage of good recall versus M, the number of stored vectors]
CODED BAM
So far, we have considered both front and back states to be used for data. There is another use of the BAM in which only front states are used as data, and the back states are seen as providing a code, label, or pointer for the front state. Such use was anticipated in our expression (3) for the BAM matrix, which stores data vectors |d_m> and their labels or codes |c_m>. For a square BAM, such an arrangement cuts the information contained in a single stored data vector in half. However, the freedom of
choosing the labels |c_m> may perhaps be put to good use. Part of the problem of spurious stable states which plagues BAMs as well as Hopfield memories as they are loaded up is due to the lack of orthogonality of the stored vectors. In the coded BAM we have the opportunity to remove part of this problem by choosing the labels as orthonormal. Such labels have been used previously by Kohonen9 in linear heteroassociative memories. The question whether memory capacity can be improved in this manner was explored by taking 64x64 BAMs in which the labels are chosen as Hadamard vectors. The latter are bipolar vectors with Euclidean norm √P, which form an orthogonal set. These vectors are rows of a PxP Hadamard matrix; for a discussion see Harwit and Sloane10. The storage capacity of such Hadamard-coded BAMs was calculated as a function of the number M of stored vectors for 100 cases for each value of M, in the manner discussed before. The percentage of good recall and its standard deviation are shown in Fig. 6. It is seen that the Hadamard coding gives about a factor 2.5 in M, compared to the ordinary 64x64 BAM. However, the coded BAM has only half the stored data vector dimension. Accounting for this factor 2 reduction of data vector dimension, the effective storage capacity advantage obtained by Hadamard coding comes to only 25%.
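Hadamard labels of the kind described above can be generated by the Sylvester construction, valid when P is a power of 2; this sketch is illustrative:

```python
import numpy as np

def hadamard(P):
    """Sylvester construction: P must be a power of 2. Rows are mutually
    orthogonal bipolar vectors of Euclidean norm sqrt(P)."""
    H = np.array([[1]])
    while H.shape[0] < P:
        H = np.block([[H, H], [H, -H]])
    return H

H = hadamard(8)
print((H @ H.T == 8 * np.eye(8)).all())  # prints True: rows orthogonal, norm sqrt(8)
```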
HALF BAM WITH HADAMARD CODING
For the coded BAM there is the option of deleting the threshold operation in the front layer. The resulting architecture may be called "half BAM". In the half BAM, thresholding is only done on the labels, and consequently, the data may be taken as analog vectors. Although such an arrangement diminishes the robustness of the memory somewhat, there are applications of interest. We have calculated the percentage of good recall for 100 cases, and found that giving up the data thresholding cuts the storage capacity of the Hadamard-coded BAM by about 60%.
SELECTIVE REFLEXIVE MEMORY
The memory capacity limitations shown in Fig. 6 are due to the occurrence of spurious states when the memories are loaded up.

Consider a discrete BAM with stored data vectors |m>, m = 1 to M, orthonormal labels |c_m>, and the connection matrix

    K = Σ_{m=1}^{M} |c_m><m| .                                          (5)
For an input data vector |v> which is closest to the stored data vector |1>, one has in the forward stroke

    |b> = s( c|c_1> + Σ_{m=2}^{M} a_m|c_m> ) ,                          (6)

where

    c = <1|v>   and   a_m = <m|v> .                                     (7)
M
Although for m# 1 am<c, for some vector component the sum
L
amlc m>
m=2
may accumulate to such a large value as to affect the thresholded result
Ib>. The problem would be avoided jf the thresholding operation s( ) in the
back layer of the BAM were to be replaced by another nonl1near operation
which selects, from the I inear combination
M
clc 1>+
L amlcm>
(8)
m=2
the dominant label |c_1>. The hypothetical device which performs this operation is here called the "Dominant Label Selector" (DLS)11, and we call the resulting memory architecture "Selective Reflexive Memory" (SRM). With the back state selected as the dominant label |c_1>, the back stroke gives <f'| = s(<c_1|K) = s(P<1|) = <1|, by the orthogonality of the labels |c_m>. It follows11 that the SRM gives perfect associative recall of the nearest stored data vector, for any number of vectors stored. Of course, the linear independence of the P-dimensional label vectors |c_m>, m = 1 to M, requires P >= M.
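The argument in (5)-(8) can be checked numerically. In the sketch below the DLS is realized as projection onto the labels followed by a hard maximum; this argmax realization is one hypothetical implementation chosen here for illustration, not the device proposed in the text:

```python
import numpy as np

def sylvester(P):
    """Rows are mutually orthogonal bipolar label vectors of norm sqrt(P)."""
    H = np.array([[1]])
    while H.shape[0] < P:
        H = np.block([[H, H], [H, -H]])
    return H

rng = np.random.default_rng(2)
N = P = M = 8                              # as many stored vectors as label dimension
data = rng.choice([-1, 1], size=(M, N))    # stored data vectors |m>
labels = sylvester(P)[:M]                  # orthogonal labels |c_m>
K = labels.T @ data                        # connection matrix (5): sum_m |c_m><m|

def dls(x):
    """Dominant Label Selector: return the label with the largest projection."""
    return labels[np.argmax(labels @ x)]

ok = True
for m in range(M):
    b = dls(K @ data[m])                   # forward stroke, s( ) replaced by the DLS
    f = np.where(K.T @ b >= 0, 1, -1)      # back stroke: <f'| = s(<c_1|K)
    ok = ok and np.array_equal(f, data[m])
print(ok)  # prints True: perfect recall of every stored vector, even with M = P
```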
The DLS must select, from a linear combination of orthonormal labels, the dominant label. A trivial case is obtained by choosing the labels |c_m> as basis vectors |u_m>, which have all components zero except for the mth component, which is unity. With this choice of labels, the
DLS may be taken as a winner-take-all net W, as shown in Fig. 7. This case appears to be included in Adaptive Resonance Theory (ART)12 as a special simplified case. A relationship between the ordinary BAM and ART was pointed out by Kosko5. As in ART, there is considerable fault sensitivity in this memory, because the stored data vectors appear in the connection matrix as rows.

[Fig. 7. Simplest reflexive memory with DLS]
A memory with better fault tolerance may be obtained by using orthogonal labels other than basis vectors. The DLS can then be taken as an orthogonal transformation G followed by a winner-take-all net, as shown in Fig. 8. G is to be chosen such that it transforms the labels |c_m> into vectors proportional to the basis vectors |u_m>. This can always be done by taking

    G = Σ_{p=1}^{P} |u_p><c_p| ,                                        (9)

where the |c_p>, p = 1 to P, form a complete orthonormal set which contains the labels |c_m>, m = 1 to M.

[Fig. 8. Selective reflexive memory]

The neurons in the DLS serve as
grandmother cells. Once a single winning cell has been activated, i.e., the state of the layer is a single basis vector, say |u_1>, this vector must be passed back, after application of the transformation G^{-1}, such as to produce the label |c_1> at the back of the BAM. Since G is orthogonal, we have G^{-1} = G^T, so that the required inverse transformation may be accomplished simply by sending the basis vector back through the transformer; this gives

    <u_1|G = Σ_{p=1}^{P} <u_1|u_p><c_p| = <c_1| ,                       (10)
as required.
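Equations (9) and (10) can be verified directly. The sketch below uses unit-norm labels (normalized Hadamard rows) so that G is orthogonal; this normalization is an assumption made here for the check:

```python
import numpy as np

P = 8
Hmat = np.array([[1]])
while Hmat.shape[0] < P:
    Hmat = np.block([[Hmat, Hmat], [Hmat, -Hmat]])
C = Hmat / np.sqrt(P)     # complete orthonormal set {|c_p>}, unit norm
U = np.eye(P)             # basis vectors |u_p>

G = sum(np.outer(U[p], C[p]) for p in range(P))   # eq. (9)
assert np.allclose(G @ C[0], U[0])      # G maps |c_1> to a basis vector
assert np.allclose(G.T @ U[0], C[0])    # eq. (10): sending |u_1> back yields |c_1>
assert np.allclose(G @ G.T, np.eye(P))  # G is orthogonal, so G^{-1} = G^T
print("ok")
```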
HALF SRM
The SRM may be modified by deleting the thresholding operation in the front layer. The front neurons then have a linear output, which is reflected back through the SRM, as shown in Fig. 9. In this case, the stored data vectors and the input data vectors may be taken as analog vectors, but we require all the stored vectors to have the same norm. The action of the SRM proceeds in the same way as described above, except that we now require the orthonormal labels to have unit norm. It follows that, just like the full SRM, the half SRM gives perfect associative recall of the nearest stored vector, for any number of stored vectors up to the dimension P of the labels. The latter condition is due to the fact that a P-dimensional vector space can at most contain P orthonormal vectors.

[Fig. 9. Half SRM with linear neurons in front layer]
In the SRM the output transform G is introduced in order to improve the fault tolerance of the connection matrix K. This is accomplished at the cost of some fault sensitivity of G, the extent of which needs to be investigated. In this regard it is noted that in certain optical implementations of reflexive memories, such as Dana Anderson's resonator1 and similar configurations2,3, the transformation G is a Fourier transform, which is implemented simply as a lens. Such an implementation is quite insensitive to the common semiconductor damage mechanisms.
EQUIVALENT AUTOASSOCIATIVE MEMORIES
Concatenation of the front and back state vectors allows description of the SRMs in terms of autoassociative memories. For the SRM which uses basis vectors as labels the corresponding autoassociative memory is shown in Fig. 10. This connection matrix structure was also proposed by Guest et al.13. The winner-take-all net W needs to be
given time to settle on a basis vector state before the state |b> can influence the front state |f>. This may perhaps be achieved by arranging the W network to have a thresholding and feedback which are fast compared with those of the K network. An alternate method may be to equip the W network with an output gate which is opened only after the W net has settled. These arrangements present a complication and cause a delay, which in some applications may be inappropriate, and in others may be acceptable in a trade between speed and memory density.

[Fig. 10. Equivalent autoassociative memory]
For the SRM with output transformer and orthonormal labels other than basis vectors, a corresponding autoassociative memory may be composed as shown in Fig. 11. An output gate in the w layer is chosen as the device which prevents the backstroke through the BAM from taking place before the winner-take-all net has settled. The same effect may perhaps be achieved by choosing different response times for the neuron layers f and w. These matters require investigation. Unless the output transform G is already required for other reasons, as in some optical resonators, the DLS with output transform is clumsy. It would be far better to combine the transformer G and the net W into a single network. To find such a DLS should be considered a challenge.

[Fig. 11. Autoassociative memory equivalent to SRM with transform]

[Fig. 12. Structure of SRM: || = BAM connections, @ = orthogonal transformation, W = winner-take-all net]
The work was partly supported by the Defense Advanced Research Projects Agency, ARPA Order 5916, through Contract DAAH01-86-C-0968 with the U.S. Army Missile Command.
REFERENCES
1. D. Z. Anderson, "Coherent optical eigenstate memory", Opt. Lett. 11, 56 (1986).
2. B. H. Soffer, G. J. Dunning, Y. Owechko, and E. Marom, "Associative holographic memory with feedback using phase-conjugate mirrors", Opt. Lett. 11, 118 (1986).
3. A. Yariv and S. K. Wong, "Associative memories based on message-bearing optical modes in phase-conjugate resonators", Opt. Lett. 11, 186 (1986).
4. B. Kosko, "Adaptive Cognitive Processing", NSF Workshop for Neural Networks and Neuromorphic Systems, Boston, Mass., Oct. 6-8, 1986.
5. B. Kosko, "Bidirectional Associative Memories", IEEE Trans. SMC, in press, 1987.
6. B. Kosko, "Adaptive Bidirectional Associative Memories", Appl. Opt., in press, 1987.
7. J. J. Hopfield, "Neural networks and physical systems with emergent collective computational abilities", Proc. Natl. Acad. Sci. USA 79, 2554 (1982).
8. P. A. M. Dirac, THE PRINCIPLES OF QUANTUM MECHANICS, Oxford, 1958.
9. T. Kohonen, "Correlation Matrix Memories", Helsinki University of Technology Report TKK-F-A130, 1970.
10. M. Harwit and N. J. A. Sloane, HADAMARD TRANSFORM OPTICS, Academic Press, New York, 1979.
11. H. G. Loos, "Adaptive Stochastic Content-Addressable Memory", Final Report, ARPA Order 5916, Contract DAAH01-86-C-0968, March 1987.
12. G. A. Carpenter and S. Grossberg, "A Massively Parallel Architecture for a Self-Organizing Neural Pattern Recognition Machine", Computer Vision, Graphics, and Image Processing, 37, 54 (1987).
13. R. D. TeKolste and C. C. Guest, "Optical Cohen-Grossberg System with All-Optical Feedback", IEEE First Annual International Conference on Neural Networks, San Diego, June 21-24, 1987.
Threshold Network Learning in the Presence of
Equivalences
John Shawe-Taylor
Department of Computer Science
Royal Holloway and Bedford New College
University of London
Egham, Surrey TW20 OEX, UK
Abstract
This paper applies the theory of Probably Approximately Correct (PAC)
learning to multiple output feedforward threshold networks in which the
weights conform to certain equivalences. It is shown that the sample size
for reliable learning can be bounded above by a formula similar to that
required for single output networks with no equivalences. The best previously obtained bounds are improved for all cases.
1 INTRODUCTION
This paper develops the results of Baum and Haussler [3] bounding the sample sizes
required for reliable generalisation of a single output feedforward threshold network.
They prove their result using the theory of Probably Approximately Correct (PAC)
learning introduced by Valiant [11]. They show that for 0 < ?: :S 1/2, if a sample of
sIze
64W
64N
m 2:: rna = - - log - ?:
?:
is loaded into a feedforward network of linear threshold units with N nodes and W
weights, so that a fraction 1- ?:/2 of the examples are correctly classified, then with
confidence approaching certainty the network will correctly classify a fraction 1 - ?:
of future examples drawn according to the same distribution. A similar bound was
obtained for the case when the network correctly classified the whole sample. The
results below will imply a significant improvement to both of these bounds.
In many cases training can be simplified if known properties of a problem can
be incorporated into the structure of a network before training begins. One such
technique is described by Shawe-Taylor [9], though many similar techniques have
been applied as for example in TDNN's [6]. The effect of these restrictions is to
constrain groups of weights to take the same value and learning algorithms are
adapted to respect this constraint.
In this paper we consider the effect of this restriction on the generalisation performance of the networks and in particular the sample sizes required to obtain a
given level of generalisation. This extends the work described above by Baum and
Haussler [3] by improving their bounds and also improving the results of Shawe-Taylor and Anthony [10], who consider generalisation of multiple-output threshold
networks. The remarkable fact is that in all cases the formula obtained is the same,
where we now understand the number of weights W to be the number of weight
classes, but N is still the number of computational nodes.
2 DEFINITIONS AND MAIN RESULTS

2.1 SYMMETRY AND EQUIVALENCE NETWORKS
We begin with a definition of threshold networks. To simplify the exposition it is
convenient to incorporate the threshold value into the set of weights. This is done
by creating a distinguished input that always has value 1 and is called the threshold
input. The following is a formal notation for these systems.
A network N = (C, I, O, n_0, E) is specified by a set C of computational nodes, a set I of input nodes, a subset O ⊆ C of output nodes and a node n_0 ∈ I, called the threshold node. The connectivity is given by a set E ⊆ (C ∪ I) × C of connections, with {n_0} × C ⊆ E.
With network N we associate a weight function w from the set of connections to the real numbers. We say that the network N is in state w. For input vector i with values in some subset of the set ℝ of real numbers, the network computes a function F_N(w, i).
An automorphism γ of a network N = (C, I, O, n_0, E) is a bijection of the nodes of N which fixes I setwise and {n_0} ∪ O pointwise, such that the induced action fixes E setwise. We say that an automorphism γ preserves the weight assignment w if w_{ji} = w_{(γj)(γi)} for all i ∈ I ∪ C, j ∈ C. Let γ be an automorphism of a network N = (C, I, O, n_0, E) and let i be an input to N. We denote by i^γ the input whose value on input k is that of i on input γ⁻¹k.
The following theorem is a natural generalisation of part of the Group Invariance
Theorem of Minsky and Pappert [8] to multi-layer perceptrons.
Theorem 2.1 [9] Let γ be a weight preserving automorphism of the network N = (C, I, O, n_0, E) in state w. Then for every input vector i,

    F_N(w, i) = F_N(w, i^γ).
Following this theorem it is natural to consider the concept of a symmetry network [9]. This is a pair (N, Γ), where N is a network and Γ a group of weight preserving automorphisms of N. We will also refer to the automorphisms as symmetries. For a symmetry network (N, Γ), we term the orbits of the connections E under the action of Γ the weight classes.
Finally we introduce the concept of an equivalence network. This definition abstracts from the symmetry networks precisely those properties we require to obtain
our results. The class of equivalence networks is, however, far larger than that
of symmetry networks and includes many classes of networks studied by other researchers [6, 7].
Definition 2.2 An equivalence network is a threshold network in which an equivalence relation is defined on both weights and nodes. The two relations are required to be compatible in that weights in the same class are connected to nodes in the same class, while nodes in the same class have the same set of input weight connection types. The weights in an equivalence class are at all times required to remain equal.
Note that every threshold network can be viewed as an equivalence network by
taking the trivial equivalence relations. We now show that symmetry networks
are indeed equivalence networks with the same weight classes and give a further
technical lemma. For both lemmas proofs are omitted.
Lemma 2.3 A symmetry network (N, Γ) is an equivalence network, where the equivalence classes are the orbits of connections and nodes respectively.
Lemma 2.4 Let N be an equivalence network and C be the set of classes of nodes. Then there is an indexing of the classes C_i, i = 1, ..., n, such that nodes in C_i do not have connections from nodes in C_j for j ≥ i.
2.2 MAIN RESULTS
We are now in a position to state our main results. Note that throughout this paper
log means natural logarithm, while an explicit subscript is used for other bases.
Theorem 2.5 Let N be an equivalence network with W weight classes and N computational nodes. If the network correctly computes a function on a set of m inputs drawn independently according to a fixed probability distribution, where

    m ≥ m_0(ε, δ) = 1/(ε(1 − √ε)) [ log(1.3/δ) + 2W log( 6√N/ε ) ],

then with probability at least 1 − δ the error rate of the network will be less than ε on inputs drawn according to the same distribution.
Theorem 2.6 Let N be an equivalence network with W weight classes and N computational nodes. If the network correctly computes a function on a fraction 1 − (1 − γ)ε of m inputs drawn independently according to a fixed probability distribution, where

    m ≥ m_0(ε, δ, γ) = 1/(γ²ε(1 − √(ε/N))) [ 4 log(4/δ) + 6W log( 4N/(γ^{2/3}ε) ) ],

then with probability at least 1 − δ the error rate of the network will be less than ε on inputs drawn according to the same distribution.
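The two sample-size bounds above are straightforward to evaluate numerically. In the following sketch the network parameters W, N and the accuracy parameters ε, δ, γ are arbitrary illustrative choices, not values from the paper:

```python
from math import log, sqrt

def m0_consistent(eps, delta, W, N):
    """Sample size bound of Theorem 2.5 (network consistent with the sample)."""
    return (log(1.3 / delta) + 2 * W * log(6 * sqrt(N) / eps)) / (eps * (1 - sqrt(eps)))

def m0_approximate(eps, delta, gamma, W, N):
    """Sample size bound of Theorem 2.6 (a fraction (1 - gamma)*eps of errors allowed)."""
    num = 4 * log(4 / delta) + 6 * W * log(4 * N / (gamma ** (2 / 3) * eps))
    return num / (gamma ** 2 * eps * (1 - sqrt(eps / N)))

W, N = 100, 20   # weight classes and computational nodes (illustrative)
print(round(m0_consistent(0.1, 0.01, W, N)))
print(round(m0_approximate(0.1, 0.01, 0.5, W, N)))
```

Note that both bounds grow only with the number of weight classes W, not with the total number of weights, which is the point of the main results.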
3 THEORETICAL BACKGROUND

3.1 DEFINITIONS AND PREVIOUS RESULTS
In order to present results for binary outputs ({O, I} functions) and larger ranges
in a unified way we will consider throughout the task of learning the graph of a
function. All the definitions reduce to the standard ones when the outputs are
binary.
We consider learning from examples as selecting a suitable function from a set H of
hypotheses, being functions from a space X to set Y, which has at most countable
size. At all times we consider an (unknown) target function
c:X---+Y
which we are attempting to learn. To this end the space X is required to be a
probability space (X, lJ, p.), with appropriate regularity conditions so that the sets
considered are measurable [4]. In particular the hypotheses should be measurable
when Y is given the discrete topology as should the error sets defined below. The
space S = X x Y is equipped with au-algebra E x 2Y and measure v = v(p., e),
defined by its value on sets of the form U x {y}:
v(U x {y})
= p. (U n e- 1 (y)) .
Using this measure the error of a hypothesis is defined to be

    er_ν(h) = ν{ (x, y) ∈ S | h(x) ≠ y }.
The introduction of ν allows us to consider samples being drawn from S, as they will automatically reflect the output value of the target. This approach freely generalises to stochastic concepts, though we will restrict ourselves to target functions for the purposes of this paper. The error of a hypothesis h on a sample x = ((x_1, y_1), ..., (x_m, y_m)) ∈ S^m is defined to be

    er_x(h) = (1/m) |{ i | h(x_i) ≠ y_i }|.
We also define the VC dimension of a set of hypotheses by reference to the product space S. Consider a sample x = ((x_1, y_1), ..., (x_m, y_m)) ∈ S^m and the function

    x* : H → {0, 1}^m,

given by x*(h)_i = 1 if and only if h(x_i) = y_i, for i = 1, ..., m. We can now define the growth function B_H(m) as

    B_H(m) = max_{x ∈ S^m} |{ x*(h) | h ∈ H }| ≤ 2^m.

The Vapnik-Chervonenkis dimension of a hypothesis space H is defined to be infinite if B_H(m) = 2^m for all m, and otherwise

    VCdim(H) = max{ m : B_H(m) = 2^m }.
In the case of a threshold network N, the set of functions obtainable using all possible weight assignments is termed the hypothesis space of N and we will refer to it as N. For a threshold network N, we also introduce the state growth function S_N(m). This is defined by first considering all computational nodes to be output nodes, and then counting different output sequences:

    S_N(m) = max_{x = (i_1, ..., i_m) ∈ X^m} |{ (F_{N'}(w, i_1), F_{N'}(w, i_2), ..., F_{N'}(w, i_m)) | w : E → ℝ }|,

where X = [0, 1]^{|I|} and N' is obtained from N by setting O = C. We clearly have that for all N and m, B_N(m) ≤ S_N(m).
Theorem 3.1 [2] If a hypothesis space H has growth function B_H(m) then for any ε > 0 and k > m and

    0 < r < 1 − 1/√(εk),

the probability that there is a function in H which agrees with a randomly chosen m-sample and has error greater than ε is less than

    ( 1 − 1/(εk(1 − r)²) )⁻¹ B_H(m + k) exp( −rεkm/(m + k) ).
This result can be used to obtain the following bound on the sample size required for PAC learnability of a hypothesis space with VC dimension d. The theorem improves the bounds reported by Blumer et al. [4].

Theorem 3.2 [2] If a hypothesis space H has finite VC dimension d > 1, then there is m_0 = m_0(ε, δ) such that if m > m_0 then the probability that a hypothesis consistent with a randomly chosen sample of size m has error greater than ε is less than δ. A suitable value of m_0 is

    m_0 = 1/(ε(1 − √ε)) [ log( d/(δ(d − 1)) ) + 2d log(6/ε) ].
For the case when we allow our hypothesis to incorrectly compute the function on
a small fraction of the training sample, we have the following result. Note that we
are still considering the discrete metric and so in the case where we are considering
multiple output feedforward networks a single output in error would count as an
overall error.
Theorem 3.3 [10] Let 0 < ε < 1 and 0 < γ ≤ 1. Suppose H is a hypothesis space of functions from an input space X to a possibly countable set Y, and let ν be any probability measure on S = X × Y. Then the probability (with respect to ν^m) that, for x ∈ S^m, there is some h ∈ H such that

    er_ν(h) > ε   and   er_x(h) ≤ (1 − γ) er_ν(h)

is at most

    4 B_H(2m) exp( −γ²εm/4 ).

Furthermore, if H has finite VC dimension d, this quantity is less than δ for

    m > m_0(ε, δ, γ) = 1/(γ²ε(1 − √ε)) [ 4 log(4/δ) + 6d log( 4/(γ^{2/3}ε) ) ].
4 THE GROWTH FUNCTION FOR EQUIVALENCE NETWORKS
We will bound the number of output sequences B_N(m) for a number m of inputs by the number of distinct state sequences S_N(m) that can be generated from the m inputs by different weight assignments. This follows the approach taken in [10].

Theorem 4.1 Let N be an equivalence network with W weight equivalence classes and a total of N computational nodes. Then we can bound S_N(m) by

    S_N(m) ≤ ( emN/W )^W.
Idea of Proof: Let C_i, i = 1, ..., n, be the equivalence classes of nodes, indexed as guaranteed by Lemma 2.4, with |C_i| = c_i and the number of inputs for nodes in C_i being n_i (including the threshold input). Denote by N_j the network obtained by taking only the first j node equivalence classes. We omit a proof by induction that

    S_{N_j}(m) ≤ Π_{i=1}^{j} B_i(mc_i),

where B_i is the growth function for nodes in the class C_i.

Using the well known bound on the growth function of a threshold node with n_i inputs we obtain

    S_N(m) ≤ Π_{i=1}^{n} ( emc_i/n_i )^{n_i}.

Consider the function f(x) = x log x. This is a convex function, and so for a set of values x_1, ..., x_k, the average of the f(x_i) is greater than or equal to f applied to the average of the x_i. Consider taking the x's to be c_i copies of n_i/c_i for each i = 1, ..., n. We obtain

    (1/N) Σ_{i=1}^{n} n_i log( n_i/c_i ) ≥ (W/N) log( W/N )

and so

    S_N(m) ≤ ( emN/W )^W,

as required. ∎
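The convexity step, and hence the final bound, can be checked numerically for an arbitrary class structure; the c_i and n_i below are made-up illustrative values:

```python
from math import log, e

# node classes: c[i] nodes in class i, each with n[i] input weight classes
c = [3, 5, 2]
n = [4, 2, 7]
N = sum(c)      # total computational nodes
W = sum(n)      # total weight classes
m = 50          # number of inputs

# log of the per-class product bound  prod_i (e*m*c_i/n_i)^{n_i}
lhs = sum(ni * log(e * m * ci / ni) for ci, ni in zip(c, n))
# log of the final bound  (e*m*N/W)^W
rhs = W * log(e * m * N / W)
print(lhs <= rhs)  # prints True: the convexity argument guarantees it
```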
The bounds we have obtained make it possible to bound the Vapnik-Chervonenkis dimension of equivalence networks. Though we will not need these results, we give them here for completeness.

Proposition 4.2 The Vapnik-Chervonenkis dimension of an equivalence network with W weight classes and N computational nodes is bounded by

    2W log_2(eN).
5 PROOF OF MAIN RESULTS
Using the results of the last section we are now in a position to prove Theorems 2.5 and 2.6.

Proof of Theorem 2.5: (Outline) We use Theorem 3.1, which bounds the probability that a hypothesis with error greater than ε can match an m-sample. Substituting our bound on the growth function of an equivalence network and choosing k and r as in [1], the resulting probability is less than δ whenever m > m_e, where m_e is given by

    m_e = m_e(ε, δ) = 1/(ε(1 − √ε)) [ log(1.3/δ) + 2W log( 6√N/ε ) ],

as required. ∎
Our second main result can be obtained more directly.
Proof of Theorem 2.6: (Outline) We use Theorem 3.3, which bounds the probability that a hypothesis with error greater than ε can match all but a fraction (1 − γ) of an m-sample. The bound on the sample size is obtained from the probability bound by using the inequality for B_H(2m). By adjusting the parameters we will convert the probability expression to that obtained by substituting our growth function. We can then read off a sample size by the corresponding substitution in the sample size formula. Consider setting d = W, ε = ε′/N and m = N m′. With these substitutions the sample size formula is
m′ = (1 / (γ² ε′ (1 − √(ε′/N)))) [ 4 log(4/δ) + 6W log( 4N / (γ^{2/3} ε′) ) ],

as required. □
6 CONCLUSION
The problem of training feedforward neural networks remains a major hurdle to the
application of this approach to large scale systems. A very promising technique for
simplifying the training problem is to include equivalences in the network structure
which can be justified by a priori knowledge of the application domain. This paper
has extended previous results concerning sample sizes for feedforward networks to
cover so called equivalence networks in which weights are constrained in this way.
At the same time we have improved the sample size bounds previously obtained for
standard threshold networks [3] and multiple output networks [10].
The results are of the same order as previous results and imply similar bounds on the Vapnik-Chervonenkis dimension, namely 2W log_2 (eN). They perhaps give circumstantial evidence for the conjecture that the log_2 (eN) factor in this expression is real, in that
the same expression obtains even if the number of computational nodes is increased
by expanding the equivalence classes of weights. Equivalence networks may be a
useful area to search for high growth functions and perhaps show that for certain
classes the VC dimension is O(Wlog N).
References
[1] Martin Anthony, Norman Biggs and John Shawe-Taylor, Learnability and Formal Concept Analysis, RHBNC Department of Computer Science, Technical
Report, CSD-TR-624, 1990.
[2] Martin Anthony, Norman Biggs and John Shawe-Taylor, The learnability of
formal concepts, Proc. COLT '90, Rochester, NY. (eds Mark Fulk and John
Case) (1990) 246-257.
[3] Eric Baum and David Haussler, What size net gives valid generalization, Neural
Computation, 1 (1) (1989) 151-160.
[4] Anselm Blumer, Andrzej Ehrenfeucht, David Haussler and Manfred K. Warmuth, Learnability and the Vapnik-Chervonenkis dimension, JACM, 36 (4)
(1989) 929-965.
[5] David Haussler, preliminary extended abstract, COLT '89.
[6] K. Lang and G.E. Hinton, The development of TDNN architecture for speech
recognition, Technical Report CMU-CS-88-152, Carnegie-Mellon University,
1988.
[7] Y. le Cun, A theoretical framework for back propagation, in D. Touretzky, editor, Connectionist Models: A Summer School, Morgan-Kaufmann, 1988.
[8] M. Minsky and S. Papert, Perceptrons, expanded edition, MIT Press, Cambridge, USA, 1988.
[9] John Shawe-Taylor, Building Symmetries into Feedforward Network Architectures, Proceedings of First lEE Conference on Artificial Neural Networks, London, 1989, 158-162.
[10] John Shawe-Taylor and Martin Anthony, Sample Sizes for Multiple Output
Feedforward Networks, Network, 2 (1991) 107-117.
[11] Leslie G. Valiant, A theory of the learnable, Communications of the ACM, 27
(1984) 1134-1142.
Moment-based Uniform Deviation Bounds for
k-means and Friends
Matus Telgarsky
Sanjoy Dasgupta
Computer Science and Engineering, UC San Diego
{mtelgars,dasgupta}@cs.ucsd.edu
Abstract
Suppose k centers are fit to m points by heuristically minimizing the k-means
cost; what is the corresponding fit over the source distribution? This question is
resolved here for distributions with p ≥ 4 bounded moments; in particular, the difference between the sample cost and distribution cost decays with m and p as m^{min{−1/4, −1/2+2/p}}. The essential technical contribution is a mechanism to uniformly control deviations in the face of unbounded parameter sets, cost functions,
and source distributions. To further demonstrate this mechanism, a soft clustering
variant of k-means cost is also considered, namely the log likelihood of a Gaussian mixture, subject to the constraint that all covariance matrices have bounded
spectrum. Lastly, a rate with refined constants is provided for k-means instances
possessing some cluster structure.
1 Introduction
Suppose a set of k centers {p_i}_{i=1}^{k} is selected by approximate minimization of k-means cost; how
does the fit over the sample compare with the fit over the distribution? Concretely: given m points
sampled from a source distribution ρ, what can be said about the quantities

(1/m) ∑_{j=1}^{m} min_i ‖x_j − p_i‖₂² − ∫ min_i ‖x − p_i‖₂² dρ(x)      (k-means),      (1.1)

(1/m) ∑_{j=1}^{m} ln( ∑_{i=1}^{k} α_i p_{θ_i}(x_j) ) − ∫ ln( ∑_{i=1}^{k} α_i p_{θ_i}(x) ) dρ(x)      (soft k-means),      (1.2)

where each p_{θ_i} denotes the density of a Gaussian with a covariance matrix whose eigenvalues lie in some closed positive interval.
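To make the quantity in (1.1) concrete, here is a tiny one-dimensional sketch (illustrative only; the function name is ours, not the paper's): the deviation in (1.1) is the difference between this cost evaluated on a sample and on the source distribution.

```python
def kmeans_cost(points, centers):
    """Average squared distance from each point to its nearest center (1-d)."""
    return sum(min((x - p) ** 2 for x in [x] for p in centers) for x in points) / len(points)

def kmeans_cost_simple(points, centers):
    # equivalent, simpler form
    return sum(min((x - p) ** 2 for p in centers) for x in points) / len(points)
```

For example, with sample [0.0, 0.1, 1.9, 2.0] and centers {0, 2}, each point is within 0.1 of a center, so the sample cost is small; the population cost replaces the sample average with an integral over ρ.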
The literature offers a wealth of information related to this question. For k-means, there is firstly a
consistency result: under some identifiability conditions, the global minimizer over the sample will
converge to the global minimizer over the distribution as the sample size m increases [1]. Furthermore, if the distribution is bounded, standard tools can provide deviation inequalities [2, 3, 4]. For
the second problem, which is maximum likelihood of a Gaussian mixture (thus amenable to EM
[5]), classical results regarding the consistency of maximum likelihood again provide that, under
some identifiability conditions, the optimal solutions over the sample converge to the optimum over
the distribution [6].
The task here is thus: to provide finite sample guarantees for these problems, but eschewing boundedness, subgaussianity, and similar assumptions in favor of moment assumptions.
1.1 Contribution
The results here are of the following form: given m examples from a distribution with a few bounded
moments, and any set of parameters beating some fixed cost c, the corresponding deviations in cost
(as in eq. (1.1) and eq. (1.2)) approach O(m^{−1/2}) with the availability of higher moments.
• In the case of k-means (cf. Corollary 3.1), p ≥ 4 moments suffice, and the rate is O(m^{min{−1/4, −1/2+2/p}}). For Gaussian mixtures (cf. Theorem 5.1), p ≥ 8 moments suffice, and the rate is O(m^{−1/2+3/p}).
• The parameter c allows these guarantees to hold for heuristics. For instance, suppose k centers are output by Lloyd's method. While Lloyd's method carries no optimality guarantees, the results here hold for the output of Lloyd's method simply by setting c to be the variance of the data, equivalently the k-means cost with a single center placed at the mean.
• The k-means and Gaussian mixture costs are only well-defined when the source distribution has p ≥ 2 moments. The condition of p ≥ 4 moments, meaning the variance has a variance, allows consideration of many heavy-tailed distributions, which are ruled out by boundedness and subgaussianity assumptions.
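As a concrete instance of the second bullet above, the following sketch (our own illustrative code, not the paper's) runs plain one-dimensional Lloyd iterations and checks the output against the reference score c, taken to be the k-means cost with a single center at the mean (the variance of the data).

```python
def lloyd(points, centers, iters=20):
    """Plain 1-d Lloyd iterations: assign points to nearest center, re-average."""
    for _ in range(iters):
        clusters = {i: [] for i in range(len(centers))}
        for x in points:
            nearest = min(range(len(centers)), key=lambda j: (x - centers[j]) ** 2)
            clusters[nearest].append(x)
        # empty clusters keep their old center
        centers = [sum(cl) / len(cl) if cl else centers[i]
                   for i, cl in clusters.items()]
    return centers

def cost(points, centers):
    return sum(min((x - p) ** 2 for p in centers) for x in points) / len(points)

points = [0.0, 0.2, 10.0, 10.2]
mean = sum(points) / len(points)
c = cost(points, [mean])            # reference score: variance of the data
centers = lloyd(points, [0.0, 1.0])  # heuristic output, no optimality guarantee
```

Any centers returned by the heuristic beat this c, so the paper's guarantees apply to them regardless of how good the local optimum is.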
The main technical byproduct of the proof is a mechanism to deal with the unboundedness of the
cost function; this technique will be detailed in Section 3, but the difficulty and its resolution can be
easily sketched here.
For a single set of centers P, the deviations in eq. (1.1) may be controlled with an application of Chebyshev's inequality. But this does not immediately grant deviation bounds on another set of centers P′, even if P and P′ are very close: for instance, the difference between the two costs will grow as successively farther and farther away points are considered.
The resolution is to simply note that there is so little probability mass in those far reaches that the cost there is irrelevant. Consider a single center p (and assume x ↦ ‖x − p‖₂² is integrable); the dominated convergence theorem grants

∫_{B_i} ‖x − p‖₂² dρ(x) → ∫ ‖x − p‖₂² dρ(x),      where B_i := {x ∈ ℝ^d : ‖x − p‖₂ ≤ i}.

In other words, a ball B_i may be chosen so that ∫_{B_i^c} ‖x − p‖₂² dρ(x) ≤ 1/1024. Now consider some p′ with ‖p − p′‖₂ ≤ i. Then

∫_{B_i^c} ‖x − p′‖₂² dρ(x) ≤ ∫_{B_i^c} (‖x − p‖₂ + ‖p − p′‖₂)² dρ(x) ≤ 4 ∫_{B_i^c} ‖x − p‖₂² dρ(x) ≤ 1/256.
In this way, a single center may control the outer deviations of whole swaths of other centers. Indeed,
those choices outperforming the reference score c will provide a suitable swath. Of course, it would
be nice to get a sense of the size of Bi ; this however is provided by the moment assumptions.
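The factor of 4 in the chain above is purely the triangle inequality: outside B_i one has ‖x − p‖₂ ≥ i ≥ ‖p − p′‖₂, so the inflated distance is at most doubled. A quick pointwise numerical check (illustrative, not from the paper):

```python
# Whenever d_xp = ||x - p|| satisfies d_xp >= i >= ||p - p'|| = d_pp,
# the inflated squared distance obeys (d_xp + d_pp)^2 <= (2 d_xp)^2 = 4 d_xp^2.
def inflated_sq(d_xp, d_pp):
    return (d_xp + d_pp) ** 2

checks = [(1.0, 1.0), (2.5, 1.0), (7.0, 3.0)]  # pairs with d_xp >= d_pp
```

This is exactly why a 1/1024 bound on the cost of p outside B_i yields a 4/1024 = 1/256 bound for every nearby p′.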
The general strategy is thus to split consideration into outer deviations, and local deviations. The local deviations may be controlled by standard techniques. To control outer deviations, a single pair of dominating costs (a lower bound and an upper bound) is controlled.
This technique can be found in the proof of the consistency of k-means due to Pollard [1]. The
present work shows it can also provide finite sample guarantees, and moreover be applied outside
hard clustering.
The content here is organized as follows. The remainder of the introduction surveys related work,
and subsequently Section 2 establishes some basic notation. The core deviation technique, termed
outer bracketing (to connect it to the bracketing technique from empirical process theory), is presented along with the deviations of k-means in Section 3. The technique is then applied in Section 5
to a soft clustering variant, namely log likelihood of Gaussian mixtures having bounded spectra. As
a reprieve between these two heavier bracketing sections, Section 4 provides a simple refinement for
k-means which can adapt to cluster structure.
All proofs are deferred to the appendices, however the construction and application of outer brackets
is sketched in the text.
1.2 Related Work
As referenced earlier, Pollard's work deserves special mention, both since it can be seen as the origin of the outer bracketing technique, and since it handled k-means under similarly slight assumptions (just two moments, rather than the four here) [1, 7]. The present work hopes to be a spiritual successor, providing finite sample guarantees, and adapting the technique to a soft clustering problem.
In the machine learning community, statistical guarantees for clustering have been extensively studied under the topic of clustering stability [4, 8, 9, 10]. One formulation of stability is: if parameters are learned over two samples, how close are they? The technical component of these works
frequently involves finite sample guarantees, which in the works listed here make a boundedness
assumption, or something similar (for instance, the work of Shamir and Tishby [9] requires the cost
function to satisfy a bounded differences condition). Amongst these finite sample guarantees, the
finite sample guarantees due to Rakhlin and Caponnetto [4] are similar to the development here after
the invocation of the outer bracket: namely, a covering argument controls deviations over a bounded
set. The results of Shamir and Tishby [10] do not make a boundedness assumption, but the main
results are not finite sample guarantees; in particular, they rely on asymptotic results due to Pollard
[7].
There are many standard tools which may be applied to the problems here, particularly if a boundedness assumption is made [11, 12]; for instance, Lugosi and Zeger [2] use tools from VC theory to
handle k-means in the bounded case. Another interesting work, by Ben-david [3], develops specialized tools to measure the complexity of certain clustering problems; when applied to the problems
of the type considered here, a boundedness assumption is made.
A few of the above works provide some negative results and related commentary on the topic of
uniform deviations for distributions with unbounded support [10, Theorem 3 and subsequent discussion] [3, Page 5 above Definition 2]. The primary "loophole" here is to constrain consideration to
those solutions beating some reference score c. It is reasonable to guess that such a condition entails that a few centers must lie near the bulk of the distribution's mass; making this guess rigorous
is the first step here both for k-means and for Gaussian mixtures, and moreover the same consequence was used by Pollard for the consistency of k-means [1]. In Pollard?s work, only optimal
choices were considered, but the same argument relaxes to arbitrary c, which can thus encapsulate
heuristic schemes, and not just nearly optimal ones. (The secondary loophole is to make moment
assumptions; these sufficiently constrain the structure of the distribution to provide rates.)
In recent years, the empirical process theory community has produced a large body of work on the
topic of maximum likelihood (see for instance the excellent overviews and recent work of Wellner
[13], van der Vaart and Wellner [14], Gao and Wellner [15]). As stated previously, the choice of the
term "bracket" is to connect to empirical process theory. Loosely stated, a bracket is simply a pair
of functions which sandwich some set of functions; the bracketing entropy is then (the logarithm of)
the number of brackets needed to control a particular set of functions. In the present work, brackets
are paired with sets which identify the far away regions they are meant to control; furthermore,
while there is potential for the use of many outer brackets, the approach here is able to make use of
just a single outer bracket. The name bracket is suitable, as opposed to cover, since the bracketing
elements need not be members of the function class being dominated. (By contrast, Pollard's use in
the proof of the consistency of k-means was more akin to covering, in that remote fluctuations were
compared to that of a single center placed at the origin [1].)
2 Notation
The ambient space will always be the Euclidean space ℝ^d, though a few results will be stated for a general domain X. The source probability measure will be ρ, and when a finite sample of size m is available, ρ̂ is the corresponding empirical measure. Occasionally, the variable ν will refer to an arbitrary probability measure (where ρ and ρ̂ will serve as relevant instantiations).
Both integral and expectation notation will be used; for example, E(f(X)) = E_ν(f(X)) = ∫ f(x) dν(x); for integrals over a set, ∫_B f(x) dν(x) = ∫ f(x) 1[x ∈ B] dν(x), where 1 is the indicator function. The moments of ν are defined as follows.
Definition 2.1. Probability measure ν has order-p moment bound M with respect to norm ‖·‖ when E_ν ‖X − E_ν(X)‖^l ≤ M for 1 ≤ l ≤ p.
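Definition 2.1 has a natural empirical analogue; the following helper (ours, not from the paper) computes the smallest M witnessing an order-p moment bound for a finite scalar sample, i.e. the largest of the first p centered absolute moments.

```python
def empirical_moment_bound(xs, p):
    """Smallest M with (1/n) * sum |x - mean|^l <= M for every 1 <= l <= p."""
    mu = sum(xs) / len(xs)
    n = len(xs)
    return max(sum(abs(x - mu) ** l for x in xs) / n for l in range(1, p + 1))
```

For heavy-tailed samples this quantity grows quickly with p, which mirrors how the availability of higher moments is what buys the faster rates in the paper.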
For example, the typical setting of k-means uses norm ‖·‖₂, and at least two moments are needed for the cost over ρ to be finite; the condition here of needing 4 moments can be seen as naturally arising via Chebyshev's inequality. Of course, the availability of higher moments is beneficial, dropping the rates here from m^{−1/4} down to m^{−1/2}. Note that the basic controls derived from moments, which are primarily elaborations of Chebyshev's inequality, can be found in Appendix A.
The k-means analysis will generalize slightly beyond the single-center cost x ↦ ‖x − p‖₂² via Bregman divergences [16, 17].
Definition 2.2. Given a convex differentiable function f : X → ℝ, the corresponding Bregman divergence is B_f(x, y) := f(x) − f(y) − ⟨∇f(y), x − y⟩.
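For scalar f, Definition 2.2 reads B_f(x, y) = f(x) − f(y) − f′(y)(x − y). A small sketch (illustrative; with f(x) = x² the divergence recovers the squared distance, the k-means case):

```python
def bregman(f, grad_f, x, y):
    """Scalar Bregman divergence B_f(x, y) = f(x) - f(y) - f'(y) * (x - y)."""
    return f(x) - f(y) - grad_f(y) * (x - y)

sq = lambda x: x * x        # f(x) = x^2
sq_grad = lambda x: 2 * x   # f'(x) = 2x
```

With f(x) = x², bregman(sq, sq_grad, x, y) equals (x − y)² for all x, y, which is why the k-means cost is the prototypical member of this family.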
Not all Bregman divergences are handled; rather, the following regularity conditions will be placed
on the convex function.
Definition 2.3. A convex differentiable function f is strongly convex with modulus r₁ and has Lipschitz gradients with constant r₂, both with respect to some norm ‖·‖, when f (respectively) satisfies

f(αx + (1 − α)y) ≤ αf(x) + (1 − α)f(y) − (r₁ α(1 − α) / 2) ‖x − y‖²,

‖∇f(x) − ∇f(y)‖_* ≤ r₂ ‖x − y‖,

where x, y ∈ X, α ∈ [0, 1], and ‖·‖_* is the dual of ‖·‖. (The Lipschitz gradient condition is sometimes called strong smoothness.)
These conditions are a fancy way of saying the corresponding Bregman divergence is sandwiched
between two quadratics (cf. Lemma B.1).
Definition 2.4. Given a convex differentiable function f : ℝ^d → ℝ which is strongly convex and has Lipschitz gradients with respective constants r₁, r₂ with respect to norm ‖·‖, the hard k-means cost of a single point x according to a set of centers P is

Φ_f(x; P) := min_{p ∈ P} B_f(x, p).

The corresponding k-means cost of a set of points (or distribution) is thus computed as E_ν(Φ_f(X; P)), and let H_f(ν; c, k) denote all sets of at most k centers beating cost c, meaning

H_f(ν; c, k) := {P : |P| ≤ k, E_ν(Φ_f(X; P)) ≤ c}.

For example, choosing norm ‖·‖₂ and convex function f(x) = ‖x‖₂² (which has r₁ = r₂ = 2), the corresponding Bregman divergence is B_f(x, y) = ‖x − y‖₂², and E_ρ̂(Φ_f(X; P)) denotes the vanilla k-means cost of some finite point set encoded in the empirical measure ρ̂.
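Specializing Definition 2.4 to the scalar k-means case, Φ_f and the membership test for H_f over an empirical measure look as follows (an illustrative sketch with names of our choosing):

```python
def hard_cost(x, centers):
    # Phi_f(x; P) with B_f(x, p) = (x - p)^2
    return min((x - p) ** 2 for p in centers)

def beats(points, centers, c, k):
    # membership of `centers` in H_f(empirical measure of `points`; c, k)
    avg = sum(hard_cost(x, centers) for x in points) / len(points)
    return len(centers) <= k and avg <= c
```

Everything in Section 3 is uniform over exactly this set of center configurations: those with at most k centers whose average cost beats c.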
The hard clustering guarantees will work with H_f(ν; c, k), where ν can be either the source distribution ρ, or its empirical counterpart ρ̂. As discussed previously, it is reasonable to set c to simply the sample variance of the data, or a related estimate of the true variance (cf. Appendix A).
Lastly, the class of Gaussian mixture penalties is as follows.
Definition 2.5. Given Gaussian parameters θ := (μ, Σ), let p_θ denote the Gaussian density

p_θ(x) = (1 / √((2π)^d |Σ|)) exp( −(1/2) (x − μ)ᵀ Σ⁻¹ (x − μ) ).

Given Gaussian mixture parameters (α, Θ) = ({α_i}_{i=1}^{k}, {θ_i}_{i=1}^{k}) with α ≥ 0 and ∑_i α_i = 1 (written α ∈ Δ), the Gaussian mixture cost at a point x is

φ_g(x; (α, Θ)) := φ_g(x; {(α_i, θ_i) = (α_i, μ_i, Σ_i)}_{i=1}^{k}) := ln( ∑_{i=1}^{k} α_i p_{θ_i}(x) ).

Lastly, given a measure ν, bound k on the number of mixture parameters, and spectrum bounds 0 < σ₁ ≤ σ₂, let SMOG(ν; c, k, σ₁, σ₂) denote those mixture parameters beating cost c, meaning

SMOG(ν; c, k, σ₁, σ₂) := {(α, Θ) : σ₁I ⪯ Σ_i ⪯ σ₂I, |Θ| ≤ k, α ∈ Δ, E_ν(φ_g(X; (α, Θ))) ≥ c}.
While a condition of the form Σ ⪰ σ₁I is typically enforced in practice (say, with a Bayesian prior, or by ignoring updates which shrink the covariance beyond this point), the condition Σ ⪯ σ₂I is potentially violated. These conditions will be discussed further in Section 5.
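In one dimension the mixture cost φ_g of Definition 2.5 is just the log of a convex combination of Gaussian densities, with each variance constrained to the allowed spectrum. A minimal sketch (ours, illustrative; the clipping enforces the constraint rather than implementing the paper's analysis):

```python
import math

def phi_g(x, components, s1, s2):
    """1-d mixture cost ln(sum_i a_i * N(x; mu_i, v_i)), variances clipped to [s1, s2]."""
    total = 0.0
    for a, mu, v in components:
        v = min(max(v, s1), s2)  # enforce the spectrum condition s1*I <= Sigma <= s2*I
        total += a * math.exp(-0.5 * (x - mu) ** 2 / v) / math.sqrt(2 * math.pi * v)
    return math.log(total)
```

With a single standard Gaussian component, φ_g(0) = −(1/2) ln(2π), and a component whose variance violates the lower spectrum bound is treated exactly as one sitting at that bound.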
3 Controlling k-means with an Outer Bracket
First consider the special case of k-means cost.
Corollary 3.1. Set f(x) := ‖x‖₂², whereby Φ_f is the k-means cost. Let real c ≥ 0 and probability measure ρ be given with order-p moment bound M with respect to ‖·‖₂, where p ≥ 4 is a positive multiple of 4. Define the quantities

c₁ := (2M)^{1/p} + √(2c),      M₁ := M^{1/(p−2)} + M^{2/p},      N₁ := 2 + 576d(c₁ + c₁² + M₁ + M₁²).

Then with probability at least 1 − 3δ over the draw of a sample of size m ≥ max{(p/(2^{p/4+2} e))², 9 ln(1/δ)}, every set of centers P ∈ H_f(ρ̂; c, k) ∪ H_f(ρ; c, k) satisfies

| ∫ Φ_f(x; P) dρ(x) − ∫ Φ_f(x; P) dρ̂(x) |
    ≤ m^{−1/2+min{1/4, 2/p}} ( 4 + (72c₁² + 32M₁²) ( √( (1/2) ln( (mN₁)^{dk} · 2/δ ) ) + ( (ep/2)^{p/4} (2/δ)^{4/p} ) / (8 m^{1/2}) ) ).
One artifact of the moment approach (cf. Appendix A), heretofore ignored, is the term (2/δ)^{4/p}. While this may seem inferior to ln(2/δ), note that the choice p = 4 ln(2/δ)/ln(ln(2/δ)) suffices to make the two equal.
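The claimed equality is exact: with this choice of p, 4/p = ln(ln(2/δ))/ln(2/δ), hence (2/δ)^{4/p} = exp(ln(2/δ) · 4/p) = exp(ln(ln(2/δ))) = ln(2/δ). A one-line numerical confirmation:

```python
import math

delta = 0.01
L = math.log(2 / delta)      # ln(2/delta)
p = 4 * L / math.log(L)      # the suggested choice of p
lhs = (2 / delta) ** (4 / p) # the moment-approach term (2/delta)^(4/p)
```

Here lhs equals L up to floating-point error, for any delta in (0, 2/e).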
Next consider a general bound for Bregman divergences. This bound has a few more parameters
than Corollary 3.1. In particular, the term ε, which is instantiated to m^{−1/2+1/p} in the proof of Corollary 3.1, catches the mass of points discarded due to the outer bracket, as well as the resolution of the (inner) cover. The parameter p′, which controls the tradeoff between m and 1/δ, is set to p/4 in the proof of Corollary 3.1.
Theorem 3.2. Fix a reference norm ‖·‖ throughout the following. Let probability measure ρ be given with order-p moment bound M where p ≥ 4, a convex function f with corresponding constants r₁ and r₂, reals c and ε > 0, and integer 1 ≤ p′ ≤ p/2 − 1 be given. Define the quantities

R_B := max{ (2M)^{1/p} + √(4c/r₁), max_{i ∈ [p′]} (M/ε)^{1/(p−2i)} },
R_C := √(r₂/r₁) ( (2M)^{1/p} + √(4c/r₁) + R_B ) + R_B,
B := { x ∈ ℝ^d : ‖x − E(X)‖ ≤ R_B },
C := { x ∈ ℝ^d : ‖x − E(X)‖ ≤ R_C },
τ := min{ √(ε/(2r₂)), ε/(2(R_B + R_C)r₂) },

and let N be a cover of C by ‖·‖-balls with radius τ; in the case that ‖·‖ is an l_p norm, the size of this cover has bound

|N| ≤ (1 + 2R_C d/τ)^d.

Then with probability at least 1 − 3δ over the draw of a sample of size m ≥ max{p′/(e2^{p′}), 9 ln(1/δ)}, every set of centers P ∈ H_f(ρ; c, k) ∪ H_f(ρ̂; c, k) satisfies

| ∫ Φ_f(x; P) dρ(x) − ∫ Φ_f(x; P) dρ̂(x) | ≤ 4ε + 4r₂R_C² ( √( (1/(2m)) ln(2|N|^k/δ) ) + ( 2^{p′} e p′ (2/δ) / (2m) )^{1/p′} ).
3.1 Compactification via Outer Brackets
The outer bracket is defined as follows.
Definition 3.3. An outer bracket for probability measure ν at scale ε consists of two triples, one each for lower and upper bounds.

1. The function ℓ, function class Z_ℓ, and set B_ℓ satisfy two conditions: if x ∈ B_ℓ^c and φ ∈ Z_ℓ, then ℓ(x) ≤ φ(x); and secondly | ∫_{B_ℓ^c} ℓ(x) dν(x) | ≤ ε.

2. Similarly, function u, function class Z_u, and set B_u satisfy: if x ∈ B_u^c and φ ∈ Z_u, then u(x) ≥ φ(x); and secondly | ∫_{B_u^c} u(x) dν(x) | ≤ ε.
Direct from the definition, given bracketing functions (ℓ, u), a bracketed function Φ_f(·; P), and the bracketing set B := B_u ∪ B_ℓ,

−ε ≤ ∫_{B^c} ℓ(x) dν(x) ≤ ∫_{B^c} Φ_f(x; P) dν(x) ≤ ∫_{B^c} u(x) dν(x) ≤ ε;      (3.4)

in other words, as intended, this mechanism allows deviations on B^c to be discarded. Thus to uniformly control the deviations of the dominated functions Z := Z_u ∪ Z_ℓ over the set B^c, it suffices to simply control the deviations of the pair (ℓ, u).
The following lemma shows that a bracket exists for {Φ_f(·; P) : P ∈ H_f(ρ; c, k)} and compact B, and moreover that this allows sampled points and candidate centers in far reaches to be deleted.

Lemma 3.5. Consider the setting and definitions in Theorem 3.2, but additionally define

M′ := 2^{p′},      ℓ(x) := 0,      u(x) := 4r₂‖x − E(X)‖²,      ε̂ := ε + ( M′ e p′ (2/δ) / (2m) )^{1/p′}.

The following statements hold with probability at least 1 − 2δ over a draw of size m ≥ max{p′/(M′e), 9 ln(1/δ)}.

1. (u, ℓ) is an outer bracket for ρ at scale ε̄ := ε with sets B_ℓ = B_u = B and Z_ℓ = Z_u = {Φ_f(·; P) : P ∈ H_f(ρ̂; c, k) ∪ H_f(ρ; c, k)}, and furthermore the pair (u, ℓ) is also an outer bracket for ρ̂ at scale ε̂ with the same sets.

2. For every P ∈ H_f(ρ̂; c, k) ∪ H_f(ρ; c, k),

| ∫ Φ_f(x; P) dρ(x) − ∫_B Φ_f(x; P ∩ C) dρ(x) | ≤ ε̄ = ε,

and

| ∫ Φ_f(x; P) dρ̂(x) − ∫_B Φ_f(x; P ∩ C) dρ̂(x) | ≤ ε̂.
The proof of Lemma 3.5 has roughly the following outline.
1. Pick some ball B₀ which has probability mass at least 1/4. It is not possible for an element of H_f(ρ̂; c, k) ∪ H_f(ρ; c, k) to have all centers far from B₀, since otherwise the cost is larger than c. (Concretely, "far from" means at least √(4c/r₁) away; note that this term appears in the definitions of B and C in Theorem 3.2.) Consequently, at least one center
lies near to B0 ; this reasoning was also the first step in the k-means consistency proof due
to Pollard [1].
2. It is now easy to dominate P ∈ H_f(ρ̂; c, k) ∪ H_f(ρ; c, k) far away from B₀. In particular, choose any p′ ∈ B₀ ∩ P, which was guaranteed to exist in the preceding point; since min_{p ∈ P} B_f(x, p) ≤ B_f(x, p′) holds for all x, it suffices to dominate p′. This domination
proceeds exactly as discussed in the introduction; in fact, the factor 4 appeared there, and
again appears in the u here, for exactly the same reason. Once again, similar reasoning can
be found in the proof by Pollard [1].
3. Satisfying the integral conditions over ? is easy: it suffices to make B huge. To control the
size of B0 , as well as the size of B, and moreover the deviations of the bracket over B, the
moment tools from Appendix A are used.
Now turning consideration back to the proof of Theorem 3.2, the above bracketing allows the removal of points and centers outside of a compact set (in particular, the pair of compact sets B and
C, respectively). On the remaining truncated data and set of centers, any standard tool suffices; for
mathematical convenience, and also to fit well with the use of norms in the definition of moments
as well as the conditions on the convex function f providing the divergence Bf , norm structure
used throughout the other properties, covering arguments are used here. (For details, please see
Appendix B.)
4 Interlude: Refined Estimates via Clamping
So far, rates have been given that guarantee uniform convergence when the distribution has a few
moments, and these rates improve with the availability of higher moments. These moment conditions, however, do not necessarily reflect any natural cluster structure in the source distribution. The
purpose of this section is to propose and analyze another distributional property which is intended
to capture cluster structure. To this end, consider the following definition.
Definition 4.1. Real number R and compact set C are a clamp for probability measure ν and family of centers Z and cost Φ_f at scale ε > 0 if every P ∈ Z satisfies

| E_ν(Φ_f(X; P)) − E_ν(min{Φ_f(X; P ∩ C), R}) | ≤ ε.
Note that this definition is similar to the second part of the outer bracket guarantee in Lemma 3.5,
and, predictably enough, will soon lead to another deviation bound.
Example 4.2. If the distribution has bounded support, then choosing a clamping value R and clamping set C respectively slightly larger than the support size and set is sufficient: as was reasoned in
the construction of outer brackets, if no centers are close to the support, then the cost is bad. Correspondingly, the clamped set of functions Z should again be choices of centers whose cost is not too
high.
For a more interesting example, suppose ρ is supported on k small balls of radius R₁, where the distance between their respective centers is some R₂ ≫ R₁. Then by reasoning similar to the
bounded case, all choices of centers achieving a good cost will place centers near to each ball, and
thus the clamping value can be taken closer to R1 .
Of course, the above gave the existence of clamps under favorable conditions. The following shows
that outer brackets can be used to show the existence of clamps in general. In fact, the proof is very
short, and follows the scheme laid out in the bounded example above: outer bracketing allows the
restriction of consideration to a bounded set, and some algebra from there gives a conservative upper
bound for the clamping value.
Proposition 4.3. Suppose the setting and definitions of Lemma 3.5, and additionally define

R := 2((2M)^{2/p} + R_B²).

Then (C, R) is a clamp for measure ρ and centers H_f(ρ; c, k) at scale ε, and with probability at least 1 − 3δ over a draw of size m ≥ max{p′/(M′e), 9 ln(1/δ)}, it is also a clamp for ρ̂ and centers H_f(ρ̂; c, k) at scale ε̂.
The general guarantee using clamps is as follows. The proof is almost the same as for Theorem 3.2,
but note that this statement is not used quite as readily, since it first requires the construction of
clamps.
Theorem 4.4. Fix a norm ‖·‖. Let (R, C) be a clamp for probability measure ρ and empirical counterpart ρ̂ over some center class Z and cost Φ_f at respective scales ε̄ and ε̂, where f has corresponding convexity constants r₁ and r₂. Suppose C is contained within a ball of radius R_C, let ε > 0 be given, define scale parameter

τ := min{ √(ε/(2r₂)), ε/(2r₂R_C) },

and let N be a cover of C by ‖·‖-balls of radius τ (as per Lemma B.4, if ‖·‖ is an l_p norm, then |N| ≤ (1 + (2R_C d)/τ)^d suffices). Then with probability at least 1 − δ over the draw of a sample of size m ≥ p′/(M′e), every set of centers P ∈ Z satisfies

| ∫ Φ_f(x; P) dρ(x) − ∫ Φ_f(x; P) dρ̂(x) | ≤ 2ε + ε̄ + ε̂ + R √( (1/(2m)) ln(2|N|^k/δ) ).
Before adjourning this section, note that clamps and outer brackets disagree on the treatment of the
outer regions: the former replaces the cost there with the fixed value R, whereas the latter uses the
value 0. On the technical side, this is necessitated by the covering argument used to produce the
final theorem: if the clamping operation instead truncated beyond a ball of radius R centered at each
p ∈ P, then the deviations would be wild as these balls moved and suddenly switched the value at a
point from 0 to something large. This is not a problem with outer bracketing, since the same points
(namely B^c) are ignored by every set of centers.
5 Mixtures of Gaussians
Before turning to the deviation bound, it is a good place to discuss the condition σ₁I ⪯ Σ ⪯ σ₂I,
which must be met by every covariance matrix of every constituent Gaussian in a mixture.
The lower bound σ₁I ⪯ Σ, as discussed previously, is fairly common in practice, arising either
via a Bayesian prior, or by implementing EM with an explicit condition that covariance updates are
discarded when the eigenvalues fall below some threshold. In the analysis here, this lower bound is
used to rule out two kinds of bad behavior.
1. Given a budget of at least 2 Gaussians, and a sample of at least 2 distinct points, arbitrarily
large likelihood may be achieved by devoting one Gaussian to one point, and shrinking its
covariance. This issue destroys convergence properties of maximum likelihood, since the
likelihood score may be arbitrarily large over every sample, but is finite for well-behaved
distributions. The condition σ₁I ⪯ Σ rules this out.
2. Another phenomenon is a "flat" Gaussian, meaning a Gaussian whose density is high along
a lower dimensional manifold, but small elsewhere. Concretely, consider a Gaussian over
ℝ² with covariance Σ = diag(σ, σ⁻¹); as σ decreases, the Gaussian has large density on
a line, but low density elsewhere. This phenomenon is distinct from the preceding in that
it does not produce arbitrarily large likelihood scores over finite samples. The condition
σ₁I ⪯ Σ rules this situation out as well.
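The first failure mode is easy to reproduce numerically. The following sketch (pure Python; the data set and the fixed second component are our own illustrative choices, not from the paper) glues one mixture component to a single data point and shrinks its variance; the log-likelihood grows without bound:

```python
import math

def normal_pdf(x, mu, var):
    """Density of a univariate Gaussian N(mu, var) at x."""
    return math.exp(-(x - mu) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

def mixture_log_likelihood(data, var_first):
    """Log-likelihood of a two-component mixture in which one component is
    glued to the first data point with variance var_first, while the other
    component stays fixed at N(1.5, 1)."""
    return sum(
        math.log(0.5 * normal_pdf(x, data[0], var_first)
                 + 0.5 * normal_pdf(x, 1.5, 1.0))
        for x in data)

data = [0.0, 1.0, 2.0, 3.0]
# Shrinking the variance of the component devoted to a single point drives
# the likelihood up without bound; a lower bound on the covariance forbids this.
assert mixture_log_likelihood(data, 1e-2) > mixture_log_likelihood(data, 1.0)
assert mixture_log_likelihood(data, 1e-6) > mixture_log_likelihood(data, 1e-2)
```

The score keeps increasing as the variance shrinks below the data scale, even though the mixture fits the remaining points no better; the condition σ₁I ⪯ Σ caps it at a finite value.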
In both the hard and soft clustering analyses here, a crucial early step allows the assertion that good
scores in some region mean the relevant parameter is nearby. For the case of Gaussians, the condition
σ₁I ⪯ Σ makes this problem manageable, but there is still the possibility that some far away, fairly
uniform Gaussian has reasonable density. This case is ruled out here via Σ ⪯ σ₂I.
Theorem 5.1. Let probability measure ν be given with order-p moment bound M according to norm
‖·‖₂ where p ≥ 8 is a positive multiple of 4, covariance bounds 0 < σ₁ ≤ σ₂ with σ₁ ≤ 1 for
simplicity, and real c ≥ 1/2 be given. Then with probability at least 1 − 5δ over the draw of a
sample of size m ≥ max{ (p/(2^(p/4+2) e))², 8 ln(1/δ), d² ln(πσ₂)² ln(1/δ) }, every set of Gaussian
mixture parameters (α, Θ) ∈ Smog(ν̂; c, k, σ₁, σ₂) ∩ Smog(ν; c, k, σ₁, σ₂) satisfies

    │∫ Φg(x; (α, Θ)) dν(x) − ∫ Φg(x; (α, Θ)) dν̂(x)│
        = O( m^(−1/2+3/p) ( 1 + √(ln(m) + ln(1/δ)) + (1/δ)^(4/p) ) ),

where the O(·) drops numerical constants, polynomial terms depending on c, M, d, and k, σ₂/σ₁,
and ln(σ₂/σ₁), but in particular has no sample-dependent quantities.
The proof follows the scheme of the hard clustering analysis. One distinction is that the outer bracket
now uses both components; the upper component is the log of the largest possible density (indeed,
it is ln((2πσ₁)^(−d/2))), whereas the lower component is a function mimicking the log density of
the steepest possible Gaussian; concretely, the lower bracket's definition contains the expression
ln((2πσ₂)^(−d/2)) − 2‖x − E_ν(X)‖₂²/σ₁, which lacks the normalization of a proper Gaussian, highlighting the fact that bracketing elements need not be elements of the class. Superficially, a second
distinction with the hard clustering case is that far away Gaussians can not be entirely ignored on
local regions; the influence is limited, however, and the analysis proceeds similarly in each case.
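As a sanity check on the shape of these brackets, the sketch below verifies a simplified per-Gaussian version of the two components: an upper envelope determined by the smallest allowed eigenvalue and a lower envelope determined by the largest, here centered at the Gaussian's own mean rather than at E_ν(X) and without the paper's extra factor of 2, so the constants are ours:

```python
import math

def log_gauss_diag(x, mu, vars_):
    """Log density of a Gaussian with diagonal covariance diag(vars_)."""
    d = len(x)
    quad = sum((xi - mi) ** 2 / v for xi, mi, v in zip(x, mu, vars_))
    logdet = sum(math.log(v) for v in vars_)
    return -0.5 * (d * math.log(2 * math.pi) + logdet + quad)

def brackets(x, mu, d, sigma1, sigma2):
    """Simplified upper/lower envelopes valid when sigma1*I <= Sigma <= sigma2*I."""
    dist2 = sum((xi - mi) ** 2 for xi, mi in zip(x, mu))
    upper = -0.5 * d * math.log(2 * math.pi * sigma1)
    lower = -0.5 * d * math.log(2 * math.pi * sigma2) - dist2 / (2 * sigma1)
    return lower, upper

sigma1, sigma2 = 0.5, 4.0
mu = (0.3, -0.7)
vars_ = (0.9, 2.5)          # eigenvalues inside [sigma1, sigma2]
for x in [(0.0, 0.0), (1.0, -2.0), (3.0, 3.0)]:
    lo, hi = brackets(x, mu, 2, sigma1, sigma2)
    assert lo <= log_gauss_diag(x, mu, vars_) <= hi
```

Both inequalities follow from σ₁ ≤ eigenvalues ≤ σ₂: the log-determinant term is squeezed between −(d/2) ln σ₂ and −(d/2) ln σ₁, and the quadratic form is nonnegative and at most ‖x − μ‖²/σ₁.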
Acknowledgments
The authors thank the NSF for supporting this work under grant IIS-1162581.
References
[1] David Pollard. Strong consistency of k-means clustering. The Annals of Statistics, 9(1):135-140, 1981.
[2] Gábor Lugosi and Kenneth Zeger. Rates of convergence in the source coding theorem, in empirical quantizer design, and in universal lossy source coding. IEEE Trans. Inform. Theory, 40:1728-1740, 1994.
[3] Shai Ben-David. A framework for statistical clustering with constant time approximation algorithms for k-median clustering. In COLT, pages 415-426. Springer, 2004.
[4] Alexander Rakhlin and Andrea Caponnetto. Stability of k-means clustering. In NIPS, pages 1121-1128, 2006.
[5] Richard O. Duda, Peter E. Hart, and David G. Stork. Pattern Classification. Wiley, 2 edition, 2001.
[6] Thomas S. Ferguson. A course in large sample theory. Chapman & Hall, 1996.
[7] David Pollard. A central limit theorem for k-means clustering. The Annals of Probability, 10(4):919-926, 1982.
[8] Shai Ben-David, Ulrike von Luxburg, and Dávid Pál. A sober look at clustering stability. In COLT, pages 5-19. Springer, 2006.
[9] Ohad Shamir and Naftali Tishby. Cluster stability for finite samples. In NIPS, 2007.
[10] Ohad Shamir and Naftali Tishby. Model selection and stability in k-means clustering. In COLT, 2008.
[11] Stéphane Boucheron, Olivier Bousquet, and Gábor Lugosi. Theory of classification: a survey of recent advances. ESAIM: Probability and Statistics, 9:323-375, 2005.
[12] Stéphane Boucheron, Gábor Lugosi, and Pascal Massart. Concentration Inequalities: A Nonasymptotic Theory of Independence. Oxford, 2013.
[13] Jon Wellner. Consistency and rates of convergence for maximum likelihood estimators via empirical process theory. 2005.
[14] Aad van der Vaart and Jon Wellner. Weak Convergence and Empirical Processes. Springer, 1996.
[15] FuChang Gao and Jon A. Wellner. On the rate of convergence of the maximum likelihood estimator of a k-monotone density. Science in China Series A: Mathematics, 52(7):1525-1538, 2009.
[16] Yair Censor and Stavros A. Zenios. Parallel Optimization: Theory, Algorithms and Applications. Oxford University Press, 1997.
[17] Arindam Banerjee, Srujana Merugu, Inderjit S. Dhillon, and Joydeep Ghosh. Clustering with Bregman divergences. Journal of Machine Learning Research, 6:1705-1749, 2005.
[18] Terence Tao. 254A notes 1: Concentration of measure, January 2010. URL http://terrytao.wordpress.com/2010/01/03/254a-notes-1-concentration-of-measure/.
[19] I. F. Pinelis and S. A. Utev. Estimates of the moments of sums of independent random variables. Teor. Veroyatnost. i Primenen., 29(3):554-557, 1984. Translation to English by Bernard Seckler.
[20] Shai Shalev-Shwartz. Online Learning: Theory, Algorithms, and Applications. PhD thesis, The Hebrew University of Jerusalem, July 2007.
[21] Jean-Baptiste Hiriart-Urruty and Claude Lemaréchal. Fundamentals of Convex Analysis. Springer Publishing Company, Incorporated, 2001.
Statistical Active Learning Algorithms
Vitaly Feldman
IBM Research - Almaden
[email protected]
Maria Florina Balcan
Georgia Institute of Technology
[email protected]
Abstract
We describe a framework for designing efficient active learning algorithms that are
tolerant to random classification noise and differentially-private. The framework
is based on active learning algorithms that are statistical in the sense that they rely
on estimates of expectations of functions of filtered random examples. It builds
on the powerful statistical query framework of Kearns [30].
We show that any efficient active statistical learning algorithm can be automatically converted to an efficient active learning algorithm which is tolerant to random classification noise as well as other forms of ?uncorrelated? noise. We show
that commonly studied concept classes including thresholds, rectangles, and linear separators can be efficiently actively learned in our framework. These results
combined with our generic conversion lead to the first computationally-efficient
algorithms for actively learning some of these concept classes in the presence of
random classification noise that provide exponential improvement in the dependence on the error over their passive counterparts. In addition, we show that our
algorithms can be automatically converted to efficient active differentially-private
algorithms. This leads to the first differentially-private active learning algorithms
with exponential label savings over the passive case.
1 Introduction
Most classic machine learning methods depend on the assumption that humans can annotate all the
data available for training. However, many modern machine learning applications have massive
amounts of unannotated or unlabeled data. As a consequence, there has been tremendous interest
both in machine learning and its application areas in designing algorithms that most efficiently utilize the available data, while minimizing the need for human intervention. An extensively used and
studied technique is active learning, where the algorithm is presented with a large pool of unlabeled
examples and can interactively ask for the labels of examples of its own choosing from the pool,
with the goal to drastically reduce labeling effort. This has been a major area of machine learning
research in the past decade [19], with several exciting developments on understanding its underlying
statistical principles [27, 18, 4, 3, 29, 21, 15, 7, 31, 10, 34, 6]. In particular, several general characterizations have been developed for describing when active learning can in principle have an advantage
over the classic passive supervised learning paradigm, and by how much. However, these efforts
were primarily focused on sample size bounds rather than computation, and as a result many of the
proposed algorithms are not computationally efficient. The situation is even worse in the presence
of noise where active learning appears to be particularly hard. In particular, prior to this work, there
were no known efficient active algorithms for concept classes of super-constant VC-dimension that
are provably robust to random and independent noise while giving improvements over the passive
case.
Our Results: We propose a framework for designing efficient (polynomial time) active learning
algorithms which is based on restricting the way in which examples (both labeled and unlabeled) are
accessed by the algorithm. These restricted algorithms can be easily simulated using active sampling
and, in addition, possess a number of other useful properties. The main property we will consider is
tolerance to random classification noise of rate η (each label is flipped randomly and independently
with probability η [1]). Further, as we will show, the algorithms are tolerant to other forms of noise
and can be simulated in a differentially-private way.
and can be simulated in a differentially-private way.
In our restriction, instead of access to random examples from some distribution P over X × Y, the
learning algorithm only gets "active" estimates of the statistical properties of P in the following
sense. The algorithm can choose any filter function χ(x) : X → [0, 1] and a query function φ :
X × Y → [−1, 1]. For simplicity we can think of χ as an indicator function of
some set S ⊆ X of "informative" points and of φ as some useful property of the target function.
For this pair of functions the learning algorithm can get an estimate of E_(x,y)∼P[φ(x, y) | x ∈ S].
For τ and τ₀ chosen by the algorithm the estimate is provided to within tolerance τ as long as
E_(x,y)∼P[x ∈ S] ≥ τ₀ (nothing is guaranteed otherwise). Here the inverse of τ corresponds to
the label complexity of the algorithm and the inverse of τ₀ corresponds to its unlabeled sample
complexity. Such a query is referred to as an active statistical query (SQ) and algorithms using active
SQs are referred to as active statistical algorithms.
Our framework builds on the classic statistical query (SQ) learning framework of Kearns [30] defined in the context of PAC learning model [35]. The SQ model is based on estimates of expectations
of functions of examples (but without the additional filter function) and was defined in order to design efficient noise tolerant algorithms in the PAC model. Despite the restrictive form, most of the
learning algorithms in the PAC model and other standard techniques in machine learning and statistics used for problems over distributions have SQ analogues [30, 12, 11, ?]1 . Further, statistical
algorithms enjoy additional properties: they can be simulated in a differentially-private way [11],
automatically parallelized on multi-core architectures [17] and have known information-theoretic
characterizations of query complexity [13, 26]. As we show, our framework inherits the strengths of
the SQ model while, as we will argue, capturing the power of active learning.
At a first glance being active and statistical appear to be incompatible requirements on the algorithm.
Active algorithms typically make label query decisions on the basis of examining individual samples
(for example as in binary search for learning a threshold or the algorithms in [27, 21, 22]). At the
same time statistical algorithms can only examine properties of the underlying distribution. But
there also exist a number of active learning algorithms that can be seen as applying passive learning
techniques to batches of examples that are obtained from querying labels of samples that satisfy the
same filter. These include the general A2 algorithm [4] and, for example, algorithms in [3, 20, 9, 8].
As we show, we can build on these techniques to provide algorithms that fit our framework.
We start by presenting a general reduction showing that any efficient active statistical learning algorithm can be automatically converted to an efficient active learning algorithm which is tolerant to
random classification noise as well as other forms of ?uncorrelated? noise. We then demonstrate the
generality of our framework by showing that the most commonly studied concept classes including thresholds, balanced rectangles, and homogenous linear separators can be efficiently actively
learned via active statistical algorithms. For these concept classes, we design efficient active learning algorithms that are statistical and provide the same exponential improvements in the dependence
on the error over passive learning as their non-statistical counterparts.
The primary problem we consider is active learning of homogeneous halfspaces, a problem that has
attracted a lot of interest in the theory of active learning [27, 18, 3, 9, 22, 16, 23, 8, 28]. We describe two algorithms for the problem. First, building on insights from margin based analysis [3, 8],
we give an active statistical learning algorithm for homogeneous halfspaces over all isotropic logconcave distributions, a wide class of distributions that includes many well-studied density functions
and has played an important role in several areas including sampling, optimization, and learning
[32]. Our algorithm for this setting proceeds in rounds; in round t we build a better approximation
wt to the target function by using a passive SQ learning algorithm (e.g., the one of [24]) over a
distribution Dt that is a mixture of distributions in which each component is the original distribution
conditioned on being within a certain distance from the hyperplane defined by previous approximations wi . To perform passive statistical queries relative to Dt we use active SQs with a corresponding
real valued filter. This algorithm is computationally efficient and uses only poly(d, log(1/ε)) active
statistical queries of tolerance inverse-polynomial in the dimension d and log(1/ε).
¹The sample complexity of the SQ analogues might increase sometimes though.
For the special case of the uniform distribution over the unit ball we give a new, simpler and substantially more efficient active statistical learning algorithm. Our algorithm is based on measuring the
error of a halfspace conditioned on being within some margin of that halfspace. We show that such
measurements performed on the perturbations of the current hypothesis along the d basis vectors
can be combined to derive a better hypothesis. This approach differs substantially from the previous
algorithms for this problem [3, 22]. The algorithm is computationally efficient and uses d log(1/ε)
active SQs with tolerance of Ω(1/√d) and filter tolerance of Ω(ε).
These results, combined with our generic simulation of active statistical algorithms in the presence
of random classification noise (RCN) lead to the first known computationally efficient algorithms
for actively learning halfspaces which are RCN tolerant and give provable label savings over the
passive case. For the uniform distribution case this leads to an algorithm with sample complexity of
O((1 − 2η)⁻² · d² log(1/ε) log(d log(1/ε))) and for the general isotropic log-concave case we get
sample complexity of poly(d, log(1/ε), 1/(1 − 2η)). This is worse than the sample complexity in
the noiseless case which is just O((d + log log(1/ε)) log(1/ε)) [8]. However, compared to passive
learning in the presence of RCN, our algorithms have exponentially better dependence on ε and essentially the same dependence on d and 1/(1 − 2η). One issue with the generic simulation is that
it requires knowledge of η (or an almost precise estimate). The standard approach to dealing with this
issue does not always work in the active setting, and for our log-concave and the uniform distribution algorithms we give a specialized argument that preserves the exponential improvement in the
dependence on ε.
Differentially-private active learning: In many application of machine learning such as medical
and financial record analysis, data is both sensitive and expensive to label. However, to the best
of our knowledge, there are no formal results addressing both of these constraints. We address
the problem by defining a natural model of differentially-private active learning. In our model we
assume that a learner has full access to unlabeled portion of some database of n examples S ?
X ? Y which correspond to records of individual participants in the database. In addition, for
every element of the database S the learner can request the label of that element. As usual, the
goal is to minimize the number of label requests (such setup is referred to as pool-based active
learning [33]). In addition, we would like to preserve the differential privacy of the participants in
the database, a now-standard notion of privacy introduced in [25]. Informally speaking, an algorithm
is differentially private if adding any record to S (or removing a record from S) does not affect the
probability that any specific hypothesis will be output by the algorithm significantly.
As first shown by [11], SQ algorithms can be automatically translated into differentially-private
algorithms. Using a similar approach, we show that active SQ learning algorithms can be automatically transformed into differentially-private active learning algorithms. Using our active statistical
algorithms for halfspaces we obtain the first algorithms that are both differentially-private and give
exponential improvements in the dependence of label complexity on the accuracy parameter .
Additional related work: As we have mentioned, most prior theoretical work on active learning
focuses on either sample complexity bounds (without regard for efficiency) or the noiseless case.
For random classification noise in particular, [6] provides a sample complexity analysis based on
the splitting index that is optimal up to polylog factors and works for general concept classes and
distributions, but it is not computationally efficient. In addition, several works give active learning
algorithms with empirical evidence of robustness to certain types of noise [9, 28].
In [16, 23] online learning algorithms in the selective sampling framework are presented, where
labels must be actively queried before they are revealed. Under the assumption that the label conditional distribution is a linear function determined by a fixed target vector, they provide bounds
on the regret of the algorithm and on the number of labels it queries when faced with an adaptive
adversarial strategy of generating the instances. As pointed out in [23], these results can also be
converted to a distributional PAC setting where instances xt are drawn i.i.d. In this setting they
obtain exponential improvement in label complexity over passive learning. These interesting results
and techniques are not directly comparable to ours. Our framework is not restricted to halfspaces.
Another important difference is that (as pointed out in [28]) the exponential improvement they give
is not possible in the noiseless version of their setting. In other words, the addition of linear noise
defined by the target makes the problem easier for active sampling. By contrast RCN can only make
the classification task harder than in the realizable case.
Due to space constraints, details of most proofs and further discussion appear in the full version of
this paper [5].
2 Active Statistical Algorithms
Let X be a domain and P be a distribution over labeled examples on X. We represent such a
distribution by a pair (D, ψ) where D is the marginal distribution of P on X and ψ : X → [−1, 1]
is a function defined as ψ(z) = E_(x,ℓ)∼P[ℓ | x = z]. We will be primarily considering learning in
the PAC model (realizable case) where ψ is a boolean function, possibly corrupted by random noise.
When learning with respect to a distribution P = (D, ψ), an active statistical learner has access to
active statistical queries. A query of this type is a pair of functions (χ, φ), where χ : X → [0, 1]
is the filter function which for a point x, specifies the probability with which the label of x should
be queried. The function φ : X × {−1, 1} → [−1, 1] is the query function and depends on both
the point and the label. The filter function χ defines the distribution D conditioned on χ as follows:
for each x the density function D|χ(x) is defined as D|χ(x) = D(x)χ(x)/E_D[χ(x)]. Note that if
χ is an indicator function of some set S then D|χ is exactly D conditioned on x being in S. Let
P|χ denote the conditioned distribution (D|χ, ψ). In addition, a query has two tolerance parameters:
filter tolerance τ₀ and query tolerance τ. In response to such a query the algorithm obtains a value
μ such that if E_D[χ(x)] ≥ τ₀ then

    |μ − E_{P|χ}[φ(x, ℓ)]| ≤ τ

(and nothing is guaranteed when E_D[χ(x)] < τ₀).
An active statistical learning algorithm can also ask target-independent queries with tolerance τ
which are just queries over unlabeled samples. That is, for a query φ : X → [−1, 1] the algorithm
obtains a value μ such that |μ − E_D[φ(x)]| ≤ τ. Such queries are not necessary when D is
known to the learner. Also, for the purposes of obtaining noise tolerant algorithms one can relax the
requirements of the model and give the learning algorithm access to unlabelled samples.
Our definition generalizes the statistical query framework of Kearns [30], which does not include
a filtering function; in other words, a query is just a function φ : X × {−1, 1} → [−1, 1] and it has a
single tolerance parameter τ. By definition, an active SQ (χ, φ) with tolerance τ relative to P is the
same as a passive statistical query φ with tolerance τ relative to the distribution P|χ. In particular, a
(passive) SQ is equivalent to an active SQ with filter χ ≡ 1 and filter tolerance 1.
We note that from the definition of active SQ we can see that

    E_{P|χ}[φ(x, ℓ)] = E_P[φ(x, ℓ) · χ(x)] / E_P[χ(x)].
This implies that an active statistical query can be estimated using two passive statistical queries.
However, to estimate E_{P|χ}[φ(x, ℓ)] with tolerance τ one needs to estimate E_P[φ(x, ℓ) · χ(x)] with
tolerance τ · E_P[χ(x)], which can be much lower than τ. Tolerance of a SQ directly corresponds to
the number of examples needed to evaluate it and therefore simulating active SQs passively might
require many more labeled examples.
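On a finite domain both the identity and this tolerance blow-up can be checked directly; the toy distribution, filter, and query below are ours, chosen only for illustration:

```python
# Toy finite distribution P over (x, label) pairs; all numbers are illustrative.
P = {(0, 1): 0.2, (0, -1): 0.1, (1, 1): 0.05, (1, -1): 0.25,
     (2, 1): 0.3, (2, -1): 0.1}
chi = {0: 1.0, 1: 0.5, 2: 0.0}          # filter: probability of querying each point

def phi(x, label):                       # query function with values in [-1, 1]
    return label if x <= 1 else -label

e_chi = sum(p * chi[x] for (x, l), p in P.items())                 # E_P[chi(x)]
e_phi_chi = sum(p * chi[x] * phi(x, l) for (x, l), p in P.items())

# E_{P|chi}[phi], computed from the conditioned density D(x)chi(x)/E_P[chi]:
cond = sum((p * chi[x] / e_chi) * phi(x, l) for (x, l), p in P.items())
assert abs(cond - e_phi_chi / e_chi) < 1e-12

# Tolerance blow-up: an additive error tau' on the passive estimate of
# E_P[phi * chi] turns into tau' / E_P[chi] on the conditional estimate.
tau_prime = 0.01
est = (e_phi_chi + tau_prime) / e_chi
assert abs(abs(est - cond) - tau_prime / e_chi) < 1e-12
```

The rarer the filter (the smaller E_P[χ]), the finer the passive tolerance must be, which is exactly why the passive simulation can need many more labeled examples.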
2.1 Simulating Active Statistical Queries
We first note that a valid response to a target-independent query with tolerance τ can be obtained,
with probability at least 1 − δ, using O(τ⁻² log(1/δ)) unlabeled samples.
A natural way of simulating an active SQ is by filtering points drawn randomly from D: draw a
random point x, let B be drawn from a Bernoulli distribution with probability of success χ(x); ask
for the label of x when B = 1. The points for which we ask for a label are distributed according to
D|χ. This implies that the empirical average of φ(x, ℓ) on O(τ⁻² log(1/δ)) labeled examples will
then give μ. Formally, we get the following theorem.
Theorem 2.1. Let P = (D, ψ) be a distribution over X × {−1, 1}. There exists an active sampling
algorithm that given functions χ : X → [0, 1], φ : X × {−1, 1} → [−1, 1], values τ₀ > 0,
τ > 0, δ > 0, and access to samples from P, with probability at least 1 − δ, outputs a valid
response to active statistical query (χ, φ) with tolerance parameters (τ₀, τ). The algorithm uses
O(τ⁻² log(1/δ)) labeled examples from P and O(τ₀⁻¹ τ⁻² log(1/δ)) unlabeled samples from D.
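A minimal simulation of the sampling procedure behind Theorem 2.1 (the distribution, filter, and query below are our own toy instance; labels are queried with probability χ(x) and the query function is averaged over the queried points):

```python
import random

random.seed(0)

def sample_example():
    """Toy distribution P: x uniform on [-1, 1], label = sign(x - 0.1)."""
    x = random.uniform(-1.0, 1.0)
    return x, (1 if x >= 0.1 else -1)

def chi(x):
    """Filter: query labels only near the suspected decision boundary."""
    return 1.0 if abs(x) <= 0.25 else 0.0

def phi(x, label):
    """Query function: agreement of the label with a candidate threshold at 0."""
    return label * (1 if x >= 0 else -1)

def active_sq(n):
    """Simulate one active SQ (chi, phi) by Bernoulli-filtered sampling."""
    vals = []
    for _ in range(n):
        x, label = sample_example()
        if random.random() < chi(x):      # query the label with probability chi(x)
            vals.append(phi(x, label))
    return sum(vals) / len(vals)

# True conditional expectation E_{P|chi}[phi] is 0.6 here (direct integration
# over the band [-0.25, 0.25]); the Monte-Carlo estimate should land nearby.
est = active_sq(20000)
assert abs(est - 0.6) < 0.1
```

With this band filter roughly a quarter of the unlabeled draws trigger a label request, matching the τ₀⁻¹ overhead in the unlabeled sample bound.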
A direct way to simulate all the queries of an active SQ algorithm is to estimate the response to each
query using fresh samples and use the union bound to ensure that, with probability at least 1 − δ, all
queries are answered correctly. Such direct simulation of an algorithm that uses at most q queries can
be done using O(qτ⁻² log(q/δ)) labeled examples and O(qτ₀⁻¹ τ⁻² log(q/δ)) unlabeled samples.
However, in many cases a more careful analysis can be used to reduce the sample complexity of
simulation.
Labeled examples can be shared to simulate queries that use the same filter χ and do not depend on
each other. This implies that the sample size sufficient for simulating q non-adaptive queries with the
same filter scales logarithmically with q. More generally, given a set of q query functions (possibly
chosen adaptively) which belong to some set Q of low complexity (such as VC dimension) one can
reduce the sample complexity of estimating the answers to all q queries (with the same filter) by
invoking the standard bounds based on uniform convergence (e.g. [14]).
2.2 Noise tolerance
An important property of the simulation described in Theorem 2.1 is that it can be easily adapted to the case when the labels are corrupted by random classification noise [1]. For a distribution P = (D, ψ), let P^η denote the distribution P with the label flipped with probability η randomly and independently of an example. It is easy to see that P^η = (D, (1 − 2η)ψ). We now show that, as in the SQ model [30], active statistical queries can be simulated given examples from P^η.
Theorem 2.2. Let P = (D, ψ) be a distribution over examples and let η ∈ [0, 1/2) be a noise rate. There exists an active sampling algorithm that given functions χ : X → [0, 1], φ : X × {−1, 1} → [−1, 1], values η, t₀ > 0, τ > 0, δ > 0, and access to samples from P^η, with probability at least 1 − δ, outputs a valid response to active statistical query (χ, φ) with tolerance parameters (t₀, τ). The algorithm uses O(τ⁻²(1 − 2η)⁻² log(1/δ)) labeled examples from P^η and O(t₀⁻¹ τ⁻²(1 − 2η)⁻² log(1/δ)) unlabeled samples from D.
Note that the sample complexity of the resulting active sampling algorithm has information-theoretically optimal quadratic dependence on 1/(1 − 2η), where η is the noise rate.
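The (1 − 2η)⁻¹ correction behind Theorem 2.2 can be made concrete as follows. The sketch splits the query into a label-independent part and a label-correlated part and rescales only the latter; the toy distribution and noise rate are illustrative assumptions, not the paper's construction.

```python
import random

def noisy_corrected_estimate(samples, phi, eta):
    """Estimate E[phi(x, ell)] from examples whose labels were flipped
    independently with probability eta, via the decomposition
        phi(x, ell) = phi0(x) + ell * phi1(x),
        phi0(x) = (phi(x, 1) + phi(x, -1)) / 2,
        phi1(x) = (phi(x, 1) - phi(x, -1)) / 2.
    Noise shrinks the label-correlated part by (1 - 2*eta); we undo only that.
    """
    n = len(samples)
    part0 = sum((phi(x, 1) + phi(x, -1)) / 2 for x, _ in samples) / n
    part1 = sum(ell * (phi(x, 1) - phi(x, -1)) / 2 for x, ell in samples) / n
    return part0 + part1 / (1 - 2 * eta)

# Toy check: x ~ Uniform[0, 1], clean label sign(x - 0.5), noise rate 0.3.
eta = 0.3
def noisy_example():
    x = random.random()
    ell = 1 if x >= 0.5 else -1
    if random.random() < eta:
        ell = -ell
    return x, ell

phi = lambda x, ell: (ell + 1) / 2   # fraction of +1 labels; clean value is 0.5
random.seed(1)
samples = [noisy_example() for _ in range(20000)]
est = noisy_corrected_estimate(samples, phi, eta)
```

The division by (1 − 2η) is also the source of the quadratic blow-up in sample complexity: the variance of the corrected part grows as (1 − 2η)⁻².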
Remark 2.3. This simulation assumes that η is given to the algorithm exactly. It is easy to see from the proof that any value η′ such that (1 − 2η)/(1 − 2η′) ∈ [1 − τ/4, 1 + τ/4] can be used in place of η (with the tolerance of estimating E_{P|χ}[½(φ(x, 1) − φ(x, −1)) · ℓ] set to (1 − 2η)τ/4). In some learning scenarios even an approximate value of η is not known, but it is known that η ≤ η₀ < 1/2. To address this issue one can construct a sequence η₁, . . . , η_k of guesses of η, run the learning algorithm with each of those guesses in place of the true η, and let h₁, . . . , h_k be the resulting hypotheses [30]. One can then return the hypothesis h_i among those that has the best agreement with a suitably large sample. It is not hard to see that k = O(τ⁻¹ · log(1/(1 − 2η₀))) guesses will suffice for this strategy to work [2].
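The guess-and-validate strategy of Remark 2.3 amounts to two small routines: a multiplicatively spaced grid of noise-rate guesses, and selection by empirical agreement. The sketch below is illustrative; the candidate hypotheses and all parameters are stand-ins.

```python
import random

def eta_grid(tau, eta0):
    """Guesses eta_1, ..., eta_k spaced multiplicatively in (1 - 2*eta'),
    so that for any eta <= eta0 some guess eta' has (1-2*eta)/(1-2*eta')
    close to 1; this uses k = O(tau^{-1} log(1/(1 - 2*eta0))) guesses."""
    guesses, m = [], 1.0
    while m >= 1 - 2 * eta0:
        guesses.append((1 - m) / 2)      # the eta' with 1 - 2*eta' = m
        m *= 1 - tau / 4
    return guesses

def best_by_agreement(hypotheses, validation):
    """Selection step: return the hypothesis with the highest empirical
    agreement on a labeled validation sample."""
    def agreement(h):
        return sum(1 for x, ell in validation if h(x) == ell) / len(validation)
    return max(hypotheses, key=agreement)

# Toy run: noisy threshold data, candidate thresholds as the "hypotheses".
random.seed(1)
eta = 0.2
validation = []
for _ in range(5000):
    x = random.random()
    ell = 1 if x >= 0.5 else -1
    if random.random() < eta:
        ell = -ell
    validation.append((x, ell))

candidates = [(lambda x, t=t: 1 if x >= t else -1)
              for t in (0.1, 0.3, 0.5, 0.7, 0.9)]
best = best_by_agreement(candidates, validation)   # picks the t = 0.5 threshold
```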
Passive hypothesis testing requires Ω(1/ε) labeled examples and might be too expensive to be used with active learning algorithms. It is unclear if there exists a general approach for dealing with unknown η in the active learning setting that does not increase substantially the labeled example complexity. However, as we will demonstrate, in the context of specific active learning algorithms variants of this approach can be used to solve the problem.
We now show that more general types of noise can be tolerated as long as they are "uncorrelated" with the queries and the target function. Namely, we represent label noise using a function η : X → [0, 1], where η(x) gives the probability that the label of x is flipped. The rate of η when learning with respect to marginal distribution D over X is E_D[η(x)]. For a distribution P = (D, ψ) over examples, we denote by P^η the distribution P corrupted by label noise η. It is easy to see that P^η = (D, ψ · (1 − 2η)). Intuitively, η is "uncorrelated" with a query if the way that η deviates from its rate is almost orthogonal to the query on the target distribution.
Definition 2.4. Let P = (D, ψ) be a distribution over examples and τ′ > 0. For functions χ : X → [0, 1], φ : X × {−1, 1} → [−1, 1], we say that a noise function η : X → [0, 1] is (η̄, τ′)-uncorrelated with φ and χ over P if

   | E_{D|χ}[ ((φ(x, 1) − φ(x, −1))/2) · ψ(x) · (1 − 2(η(x) − η̄)) ] − E_{D|χ}[ ((φ(x, 1) − φ(x, −1))/2) · ψ(x) ] | ≤ τ′.
In this definition, (1 − 2(η(x) − η̄)) is the expectation of a {−1, 1} coin that is flipped with probability η(x) − η̄, whereas (φ(x, 1) − φ(x, −1))ψ(x) is the part of the query which measures the correlation with the label. We now give an analogue of Theorem 2.2 for this more general setting.
Theorem 2.5. Let P = (D, ψ) be a distribution over examples, let χ : X → [0, 1] and φ : X × {−1, 1} → [−1, 1] be a filter and a query function, let η̄ ∈ [0, 1/2), τ > 0, and let η be a noise function that is (η̄, (1 − 2η̄)τ/4)-uncorrelated with φ and χ over P. There exists an active sampling algorithm that given functions χ and φ, values η̄, t₀ > 0, τ > 0, δ > 0, and access to samples from P^η, with probability at least 1 − δ, outputs a valid response to active statistical query (χ, φ) with tolerance parameters (t₀, τ). The algorithm uses O(τ⁻²(1 − 2η̄)⁻² log(1/δ)) labeled examples from P^η and O(t₀⁻¹ τ⁻²(1 − 2η̄)⁻² log(1/δ)) unlabeled samples from D.
An immediate implication of Theorem 2.5 is that one can simulate an active SQ algorithm A using examples corrupted by noise η as long as η is (η̄, (1 − 2η̄)τ/4)-uncorrelated with all of A's queries of tolerance τ, for some fixed η̄.
2.3 Simple examples
Thresholds: We show that the classic example of actively learning a threshold function on an interval can be easily expressed using active SQs. For simplicity and without loss of generality we can assume that the interval is [0, 1] and the distribution is uniform over it (as usual, we can bring the distribution to be close enough to this form using unlabeled samples or target-independent queries). Assume that we know that the threshold θ belongs to the interval [a, b] ⊆ [0, 1]. We ask a query φ(x, ℓ) = (ℓ + 1)/2 with filter χ(x) which is the indicator function of the interval [a, b], with tolerance 1/4 and filter tolerance b − a. Let v be the response to the query. By definition, E[χ(x)] = b − a, and therefore we have that |v − E[φ(x, ℓ) | x ∈ [a, b]]| ≤ 1/4. Note that

   E[φ(x, ℓ) | x ∈ [a, b]] = (b − θ)/(b − a).

We can therefore conclude that (b − θ)/(b − a) ∈ [v − 1/4, v + 1/4], which means that θ ∈ [b − (v + 1/4)(b − a), b − (v − 1/4)(b − a)] ∩ [a, b]. Note that the length of this interval is at most (b − a)/2. This means that after at most log₂(1/ε) + 1 iterations we will reach an interval [a, b] of length at most ε. In each iteration only constant 1/4 tolerance is necessary, and the filter tolerance is never below ε. A direct simulation of this algorithm can be done using log(1/ε) · log(log(1/ε)/δ) labeled examples and Õ(1/ε) · log(1/δ) unlabeled samples.
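The interval-halving argument translates almost line-for-line into code. In the sketch below each active SQ is answered by a noisy empirical average, so responses are only accurate to roughly the 1/4 tolerance used in the analysis; all names and parameters are illustrative.

```python
import random

def active_learn_threshold(label_of, eps, labels_per_round=200):
    """Locate a threshold theta in [0, 1] (uniform marginal) to precision eps
    by querying the fraction of +1 labels inside the current interval [a, b]
    and shrinking the interval exactly as in the analysis above."""
    a, b = 0.0, 1.0
    while b - a > eps:
        # Empirical response to the query phi(x, l) = (l+1)/2 with filter [a, b].
        v = sum((label_of(random.uniform(a, b)) + 1) / 2
                for _ in range(labels_per_round)) / labels_per_round
        # (b - theta)/(b - a) lies in [v - 1/4, v + 1/4], hence:
        a, b = (max(a, b - (v + 0.25) * (b - a)),
                min(b, b - (v - 0.25) * (b - a)))
    return (a + b) / 2

random.seed(2)
theta = 0.37
estimate = active_learn_threshold(lambda x: 1 if x >= theta else -1, eps=0.01)
# Only labels_per_round * ceil(log2(1/eps)) labels are requested in total,
# versus Omega(1/eps) for passive learning to the same precision.
```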
Learning of thresholds can also be easily used to obtain a simple algorithm for learning axis-aligned
rectangles whose weight under the target distribution is not too small.
A²: We now note that the general and well-studied A² algorithm of [4] falls naturally into our framework. At a high level, the A² algorithm is an iterative, disagreement-based active learning algorithm. It maintains a set of surviving classifiers Cᵢ ⊆ C, and in each round the algorithm asks for the labels of a few random points that fall in the current region of disagreement of the surviving classifiers. Formally, the region of disagreement DIS(Cᵢ) of a set of classifiers Cᵢ is the set of instances x such that for each x ∈ DIS(Cᵢ) there exist two classifiers f, g ∈ Cᵢ that disagree about the label of x. Based on the queried labels, the algorithm then eliminates hypotheses that were still under consideration, but only if it is statistically confident (given the labels queried in the last round) that they are suboptimal. In essence, in each round A² only needs to estimate the error rates (of hypotheses still under consideration) under the conditional distribution of being in the region of disagreement. This can be easily done via active statistical queries. Note that while the number of active statistical queries needed to do this could be large, the number of labeled examples needed to simulate these queries is essentially the same as the number of labeled examples needed by the known A² analyses [29]. While in general the required computation of the disagreement region and manipulations of the hypothesis space cannot be done efficiently, efficient implementation is possible in a number of simple cases, such as when the VC dimension of the concept class is a constant. It is not hard to see that in these cases the implementation can also be done using a statistical algorithm.
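For a finite pool of unlabeled points and a finite set of surviving classifiers, the region of disagreement is straightforward to compute; a minimal sketch (illustrative only, with toy threshold classifiers):

```python
def disagreement_region(classifiers, pool):
    """Region of disagreement DIS(C) restricted to a finite unlabeled pool:
    the points on which at least two surviving classifiers disagree."""
    return [x for x in pool if len({h(x) for h in classifiers}) > 1]

# Two threshold classifiers disagree exactly on [0.3, 0.5).
h1 = lambda x: 1 if x >= 0.3 else -1
h2 = lambda x: 1 if x >= 0.5 else -1
pool = [round(i / 10, 1) for i in range(1, 10)]
dis = disagreement_region([h1, h2], pool)
```

Labels would then be requested only for points in `dis`, which is exactly where they can distinguish the surviving hypotheses.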
3 Learning of halfspaces
In this section we outline our reduction from active learning to passive learning of homogeneous
linear separators based on the analysis of Balcan and Long [8]. Combining it with the SQ learning
algorithm for halfspaces by Dunagan and Vempala [24], we obtain the first efficient noise-tolerant
active learning algorithm for homogeneous halfspaces for any isotropic log-concave distribution. One of the key points of this result is that it is relatively easy to harness the involved results developed for the SQ framework to obtain new active statistical algorithms.
Let H_d denote the concept class of all homogeneous halfspaces. Recall that a distribution over R^d is log-concave if log f(·) is concave, where f is its associated density function. It is isotropic if its mean is the origin and its covariance matrix is the identity. Log-concave distributions form a broad class of distributions: for example, the Gaussian, Logistic, Exponential, and uniform distribution over any convex set are log-concave distributions. Using results in [24] and properties of log-concave distributions, we can show:
Theorem 3.1. There exists a SQ algorithm LearnHS that learns H_d to accuracy 1 − ε over any distribution D|χ, where D is an isotropic log-concave distribution and χ : R^d → [0, 1] is a filter function. Further, LearnHS outputs a homogeneous halfspace, runs in time polynomial in d, 1/ε and log(1/λ), and uses SQs of tolerance ≥ 1/poly(d, 1/ε, log(1/λ)), where λ = E_D[χ(x)].
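As an aside, log-concavity of a specific density is easy to sanity-check numerically via second differences of log f; the helper below is a rough illustration, not a proof technique used in the analysis.

```python
import math

def is_log_concave(pdf, lo, hi, steps=200):
    """Numerically check concavity of log(pdf) on [lo, hi] via second
    differences on a grid. A sanity check, not a proof."""
    h = (hi - lo) / steps
    xs = [lo + i * h for i in range(steps + 1)]
    logs = [math.log(pdf(x)) for x in xs]
    return all(logs[i - 1] + logs[i + 1] - 2 * logs[i] <= 1e-9
               for i in range(1, steps))

gaussian = lambda x: math.exp(-x * x / 2) / math.sqrt(2 * math.pi)
logistic = lambda x: math.exp(-x) / (1 + math.exp(-x)) ** 2
cauchy = lambda x: 1 / (math.pi * (1 + x * x))   # heavy-tailed, NOT log-concave
```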
We now state the properties of our new algorithm formally.
Theorem 3.2. There exists an active SQ algorithm ActiveLearnHS-LogC (Algorithm 1) that for any isotropic log-concave distribution D on R^d learns H_d over D to accuracy 1 − ε in time poly(d, log(1/ε)), using active SQs of tolerance ≥ 1/poly(d, log(1/ε)) and filter tolerance Ω(ε).
Algorithm 1 ActiveLearnHS-LogC: Active SQ learning of homogeneous halfspaces over isotropic log-concave densities
1: %% Constants c, C₁, C₂ and C₃ are determined by the analysis.
2: Run LearnHS with error C₂ to obtain w₀.
3: for k = 1 to s = ⌈log₂(1/(cε))⌉ do
4:   Let b_{k−1} = C₁/2^{k−1}
5:   Let χ_k equal the indicator function of being within margin b_{k−1} of w_{k−1}
6:   Let χ̄_k = (Σ_{i≤k} χ_i)/k
7:   Run LearnHS over D_k = D|χ̄_k with error C₂/k by using active queries with filter χ̄_k and filter tolerance C₃ε to obtain w_k
8: end for
9: return w_s
We remark that, as usual, we can first bring the distribution to an isotropic position by using target-independent queries to estimate the mean and the covariance matrix of the distribution. Therefore our algorithm can be used to learn halfspaces over general log-concave densities as long as the target halfspace passes through the mean of the density.
We can now apply Theorem 2.2 (or more generally Theorem 2.5) to obtain an efficient active learning algorithm for homogeneous halfspaces over log-concave densities in the presence of random
classification noise of known rate. Further since our algorithm relies on LearnHS which can also
be simulated when the noise rate is unknown (see Remark 2.3) we obtain an active algorithm which
does not require the knowledge of the noise rate.
Corollary 3.3. There exists a polynomial-time active learning algorithm that for any η ∈ [0, 1/2) learns H_d over any log-concave distribution with random classification noise of rate η to error ε using poly(d, log(1/ε), 1/(1 − 2η)) labeled examples and a polynomial number of unlabeled samples.
For the special case of the uniform distribution on the unit sphere (or, equivalently for our purposes,
unit ball) we give a substantially simpler and more efficient algorithm in terms of both sample and
computational complexity. This setting was previously studied in [3, 22]. The detailed presentation
of the technical ideas appears in the full version of the paper [5].
Theorem 3.4. There exists an active SQ algorithm ActiveLearnHS-U that learns H_d over the uniform distribution on the (d − 1)-dimensional unit sphere to accuracy 1 − ε, uses (d + 1) log(1/ε) active SQs with tolerance of Ω(1/√d) and filter tolerance of Ω(ε), and runs in time d · poly(log(d/ε)).
4 Differentially-private active learning
In this section we show that active SQ learning algorithms can also be used to obtain differentially
private active learning algorithms. Formally, for some domain X × Y, we will call S ⊆ X × Y a database. Databases S, S′ ⊆ X × Y are adjacent if one can be obtained from the other by modifying a single element. Here we will always have Y = {−1, 1}. In the following, A is an algorithm that takes as input a database S and outputs an element of some finite set R.
Definition 4.1 (Differential privacy [25]). A (randomized) algorithm A : 2^{X×Y} → R is α-differentially private if for all r ∈ R and every pair of adjacent databases S, S′, we have Pr[A(S) = r] ≤ e^α Pr[A(S′) = r].
Here we consider algorithms that operate on S in an active way. That is, the learning algorithm receives the unlabeled part of each point in S as an input and can only obtain the label of a point upon request. The total number of requests is the label complexity of the algorithm.
Theorem 4.2. Let A be an algorithm that learns a class of functions H to accuracy 1 − ε over distribution D using M₁ active SQs of tolerance τ and filter tolerance t₀, and M₂ target-independent queries of tolerance τᵤ. There exists a learning algorithm A′ that given α > 0, δ > 0 and active access to a database S ⊆ X × {−1, 1} is α-differentially private and uses at most O([M₁/(τα) + M₁/τ²] log(M₁/δ)) labels. Further, for some n = O([M₁/(αt₀τ) + M₁/(t₀τ²) + M₂/(ατᵤ) + M₂/τᵤ²] log((M₁ + M₂)/δ)), if S consists of at least n examples drawn randomly from D, then with probability at least 1 − δ, A′ outputs a hypothesis with accuracy ≥ 1 − ε (relative to distribution D). The running time of A′ is the same as the running time of A plus O(n).
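The standard way to privatize a single statistical-query response is to perturb the empirical average with Laplace noise calibrated to its sensitivity; the generic sketch below illustrates this mechanism (it is not the paper's exact construction, and all parameters are stand-ins).

```python
import random

def dp_sq_response(values, alpha):
    """alpha-differentially private empirical average of query values in [-1, 1].

    Changing one database element moves the average by at most 2/n (its
    sensitivity), so Laplace noise of scale 2/(n * alpha) suffices for this
    single response; per-query budgets compose over a whole algorithm.
    """
    n = len(values)
    scale = 2.0 / (n * alpha)
    # Laplace(scale) sampled as the difference of two Exp(mean = scale) draws.
    noise = random.expovariate(1.0 / scale) - random.expovariate(1.0 / scale)
    return sum(values) / n + noise

random.seed(5)
values = [1.0] * 600 + [-1.0] * 400        # true average 0.2
private_v = dp_sq_response(values, alpha=1.0)
# With n = 1000 the noise scale is 0.002, so private_v stays close to 0.2.
```

Because an SQ only needs accuracy τ anyway, the added 1/(nα) noise is absorbed into the tolerance once n is large enough, which is how label complexity is essentially preserved.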
An immediate consequence of Theorem 4.2 is that for learning of homogeneous halfspaces over uniform or log-concave distributions we can obtain differential privacy while essentially preserving the label complexity. For example, by combining Theorems 4.2 and 3.4, we can efficiently and differentially-privately learn homogeneous halfspaces under the uniform distribution with privacy parameter α and error parameter ε by using only O(d√d log(1/ε)/α + d² log(1/ε)) labels. However, it is known that any
passive learning algorithm, even ignoring privacy considerations and noise, requires Ω(√(d + 1)/ε · log(1/δ)) labeled examples. So for α ≥ 1/√d and ε small enough we get better label complexity.
5 Discussion
Our work suggests that, as in passive learning, active statistical algorithms might be essentially
as powerful as example-based efficient active learning algorithms. It would be interesting to find
more general evidence supporting this claim or, alternatively, a counterexample. A nice aspect of
(passive) statistical learning algorithms is that it is possible to prove unconditional lower bounds on
such algorithms using SQ dimension [13] and its extensions. It would be interesting to develop an
active analogue of these techniques and give meaningful lower bounds based on them.
Acknowledgments We thank Avrim Blum and Santosh Vempala for useful discussions. This work was supported in part by NSF grants CCF-0953192 and CCF-1101215, AFOSR grant FA9550-09-1-0538, ONR grant N00014-09-1-0751, and a Microsoft Research Faculty Fellowship.
References
[1] D. Angluin and P. Laird. Learning from noisy examples. Machine Learning, 2:343–370, 1988.
[2] J. Aslam and S. Decatur. Specification and simulation of statistical query algorithms for efficiency and noise tolerance. JCSS, 56:191–208, 1998.
[3] M. Balcan, A. Broder, and T. Zhang. Margin based active learning. In COLT, pages 35–50, 2007.
[4] M. F. Balcan, A. Beygelzimer, and J. Langford. Agnostic active learning. In ICML, 2006.
[5] M. F. Balcan and V. Feldman. Statistical active learning algorithms, 2013. ArXiv:1307.3102.
[6] M.-F. Balcan and S. Hanneke. Robust interactive learning. In COLT, 2012.
[7] M.-F. Balcan, S. Hanneke, and J. Wortman. The true sample complexity of active learning. In COLT, 2008.
[8] M.-F. Balcan and P. M. Long. Active and passive learning of linear separators under log-concave distributions. JMLR - COLT proceedings (to appear), 2013.
[9] A. Beygelzimer, S. Dasgupta, and J. Langford. Importance weighted active learning. In ICML, pages 49–56, 2009.
[10] A. Beygelzimer, D. Hsu, J. Langford, and T. Zhang. Agnostic active learning without constraints. In NIPS, 2010.
[11] A. Blum, C. Dwork, F. McSherry, and K. Nissim. Practical privacy: the SuLQ framework. In Proceedings of PODS, pages 128–138, 2005.
[12] A. Blum, A. Frieze, R. Kannan, and S. Vempala. A polynomial time algorithm for learning noisy linear threshold functions. Algorithmica, 22(1/2):35–52, 1997.
[13] A. Blum, M. Furst, J. Jackson, M. Kearns, Y. Mansour, and S. Rudich. Weakly learning DNF and characterizing statistical query learning using Fourier analysis. In STOC, pages 253–262, 1994.
[14] A. Blumer, A. Ehrenfeucht, D. Haussler, and M. Warmuth. Learnability and the Vapnik-Chervonenkis dimension. Journal of the ACM, 36(4):929–965, 1989.
[15] R. Castro and R. Nowak. Minimax bounds for active learning. In COLT, 2007.
[16] N. Cesa-Bianchi, C. Gentile, and L. Zaniboni. Learning noisy linear classifiers via adaptive and selective sampling. Machine Learning, 2010.
[17] C. Chu, S. Kim, Y. Lin, Y. Yu, G. Bradski, A. Ng, and K. Olukotun. Map-reduce for machine learning on multicore. In Proceedings of NIPS, pages 281–288, 2006.
[18] S. Dasgupta. Coarse sample complexity bounds for active learning. In NIPS, volume 18, 2005.
[19] S. Dasgupta. Active learning. Encyclopedia of Machine Learning, 2011.
[20] S. Dasgupta and D. Hsu. Hierarchical sampling for active learning. In ICML, pages 208–215, 2008.
[21] S. Dasgupta, D. J. Hsu, and C. Monteleoni. A general agnostic active learning algorithm. NIPS, 20, 2007.
[22] S. Dasgupta, A. Tauman Kalai, and C. Monteleoni. Analysis of perceptron-based active learning. Journal of Machine Learning Research, 10:281–299, 2009.
[23] O. Dekel, C. Gentile, and K. Sridharan. Selective sampling and active learning from single and multiple teachers. JMLR, 2012.
[24] J. Dunagan and S. Vempala. A simple polynomial-time rescaling algorithm for solving linear programs. In STOC, pages 315–320, 2004.
[25] C. Dwork, F. McSherry, K. Nissim, and A. Smith. Calibrating noise to sensitivity in private data analysis. In TCC, pages 265–284, 2006.
[26] V. Feldman. A complete characterization of statistical query learning with applications to evolvability. Journal of Computer System Sciences, 78(5):1444–1459, 2012.
[27] Y. Freund, H. S. Seung, E. Shamir, and N. Tishby. Selective sampling using the query by committee algorithm. Machine Learning, 28(2-3):133–168, 1997.
[28] A. Gonen, S. Sabato, and S. Shalev-Shwartz. Efficient pool-based active learning of halfspaces. In ICML, 2013.
[29] S. Hanneke. A bound on the label complexity of agnostic active learning. In ICML, 2007.
[30] M. Kearns. Efficient noise-tolerant learning from statistical queries. JACM, 45(6):983–1006, 1998.
[31] V. Koltchinskii. Rademacher complexities and bounding the excess risk in active learning. JMLR, 11:2457–2485, 2010.
[32] L. Lovász and S. Vempala. The geometry of logconcave functions and sampling algorithms. Random Struct. Algorithms, 30(3):307–358, 2007.
[33] A. McCallum and K. Nigam. Employing EM in pool-based active learning for text classification. In ICML, pages 350–358, 1998.
[34] M. Raginsky and A. Rakhlin. Lower bounds for passive and active learning. In NIPS, pages 1026–1034, 2011.
[35] L. G. Valiant. A theory of the learnable. Communications of the ACM, 27(11):1134–1142, 1984.
Predictive PAC Learning and Process Decompositions
Aryeh Kontorovich
Computer Science Department
Ben Gurion University
Beer Sheva 84105 Israel
[email protected]
Cosma Rohilla Shalizi
Statistics Department
Carnegie Mellon University
Pittsburgh, PA 15213 USA
[email protected]
Abstract
We informally call a stochastic process learnable if it admits a generalization error
approaching zero in probability for any concept class with finite VC-dimension
(IID processes are the simplest example). A mixture of learnable processes need
not be learnable itself, and certainly its generalization error need not decay at the
same rate. In this paper, we argue that it is natural in predictive PAC to condition
not on the past observations but on the mixture component of the sample path.
This definition not only matches what a realistic learner might demand, but also
allows us to sidestep several otherwise grave problems in learning from dependent
data. In particular, we give a novel PAC generalization bound for mixtures of
learnable processes with a generalization error that is not worse than that of each
mixture component. We also provide a characterization of mixtures of absolutely
regular (β-mixing) processes, of independent probability-theoretic interest.
1 Introduction
Statistical learning theory, especially the theory of "probably approximately correct" (PAC) learning, has mostly developed under the assumption
(IID) samples from a fixed, though perhaps adversarially-chosen, distribution. As recently as 1997,
Vidyasagar [1] named extending learning theory to stochastic processes of dependent variables as a
major open problem. Since then, considerable progress has been made for specific classes of processes, particularly strongly-mixing sequences and exchangeable sequences. (Especially relevant
contributions, for our purposes, came from [2, 3] on exchangeability, from [4, 5] on absolute regularity¹, and from [6, 7] on ergodicity; others include [8, 9, 10, 11, 12, 13, 14, 15, 16, 17].) Our goals
in this paper are to point out that many practically-important classes of stochastic processes possess
a special sort of structure, namely they are convex combinations of simpler, extremal processes.
This both demands something of a re-orientation of the goals of learning, and makes the new goals
vastly easier to attain than they might seem.
Main results Our main contribution is threefold: a conceptual definition of learning from non-IID
data (Definition 1) and a technical result establishing tight generalization bounds for mixtures of
learnable processes (Theorem 2), with a direct corollary about exchangeable sequences (Corollary
1), and an application to mixtures of absolutely regular sequences, for which we provide a new
characterization.
Notation X1 , X2 , . . . will be a sequence of dependent random variables taking values in a common
measurable space X , which we assume to be ?standard? [18, ch. 3] to avoid technicalities, implying
1
Absolutely regular processes are ones where the joint distribution of past and future events approaches
independence, in L1 , as the separation between events goes to infinity; see ?4 below for a precise statement and
extensive discussion. Absolutely regular sequences are now more commonly called ??-mixing?, but we use the
older name to avoid confusion with the other sort of ?mixing?.
1
that their ?-field has a countable generating basis; the reader will lose little if they think of X as
Rd . (We believe our ideas apply to stochastic processes with multidimensional index sets as well,
but use sequences here.) Xij will stand for the block (Xi , Xi+1 , . . . Xj?1 , Xj ). Generic infinitedimensional distributions, measures on X ? , will be ? or ?; these are probability laws for X1? . Any
such stochastic process can be represented through the shift map ? : X ? 7? X ? (which just drops
the first coordinate, (? x)t = xt+1 ), and a suitable distribution of initial conditions. When we speak
of a set being invariant, we mean invariant under the shift. The collection of all such probability
measures is itself a measurable space, and a generic measure there will be ?.
2 Process Decompositions
Since the time of de Finetti and von Neumann, an important theme of the theory of stochastic processes has been finding ways of representing complicated but structured processes, obeying certain
symmetries, as mixtures of simpler processes with the same symmetries, as well as (typically) some
sort of 0-1 law. (See, for instance, the beautiful paper by Dynkin [19], and the statistically-motivated
[20].) In von Neumann's original ergodic decomposition [18, §7.9], stationary processes, whose distributions are invariant over time, proved to be convex combinations of stationary ergodic measures, ones where all invariant sets have either probability 0 or probability 1. In de Finetti's theorem [21, ch. 1], exchangeable sequences, which are invariant under permutation, are expressed as mixtures of IID sequences². Similar results are now also known for asymptotically mean stationary sequences [18, §8.4], for partially-exchangeable sequences [22], for stationary random fields, and even for
infinite exchangeable arrays (including networks and graphs) [21, ch. 7].
The common structure shared by these decompositions is as follows.
1. The probability law ? of the composite but symmetric process is a convex combination of
the simpler, extremal processes ? ? M with the same symmetry. The infinite-dimensional
distribution of these extremal processes are, naturally, mutually singular.
2. Sample paths from the composite process are generated hierarchically, first by picking an
extremal process ? from M according to a some measure ? supported on M, and then
generating a sample path from ?. Symbolically,
X1?
? ? ?
|? ? ?
3. Each realization of the composite process therefore gives information about only a single
extremal process, as this is an invariant along each sample path.
3 Predictive PAC
These points raise subtle but key issues for PAC learning theory. Recall the IID case: random variables X_1, X_2, ... are all generated from a common distribution μ^(1), leading to an infinite-dimensional process distribution μ. Against this, we have a class F of functions f. The goal in PAC theory is to find a sample complexity function s(ε, δ, F, μ) (see footnote 3) such that, whenever n ≥ s,

  P_μ( sup_{f∈F} | (1/n) Σ_{t=1}^n f(X_t) − E_μ[f] | ≥ ε ) ≤ δ    (1)

That is, PAC theory seeks finite-sample uniform laws of large numbers for F.
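To make the uniform deviation in Eq. 1 concrete, here is a small Monte Carlo sketch (our own illustration, not part of the paper): for a finite class of threshold functions on IID Uniform(0,1) data, the estimated probability that the worst-case deviation exceeds ε shrinks as n grows. All names and parameter values here are ours.

```python
import random

def gamma_n(sample, thresholds):
    """Uniform deviation sup_f |empirical mean - true mean| over the class
    f_c(x) = 1{x <= c}; under X ~ Uniform(0, 1) we have E[f_c] = c."""
    n = len(sample)
    return max(abs(sum(x <= c for x in sample) / n - c) for c in thresholds)

def deviation_prob(n, eps, thresholds, trials=2000, seed=0):
    """Monte Carlo estimate of P(Gamma_n >= eps) for IID Uniform(0,1) data."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        sample = [rng.random() for _ in range(n)]
        if gamma_n(sample, thresholds) >= eps:
            hits += 1
    return hits / trials

thresholds = [0.1 * i for i in range(1, 10)]  # a small finite class F
p_small = deviation_prob(50, 0.15, thresholds)   # short samples: deviations common
p_large = deviation_prob(800, 0.15, thresholds)  # long samples: deviations rare
print(p_small, p_large)
```

Here the class F is finite, so the uniform law of large numbers holds trivially; the point is only to visualize the quantity bounded in Eq. 1.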
Because of its importance, it will be convenient to abbreviate the supremum,

  sup_{f∈F} | (1/n) Σ_{t=1}^n f(X_t) − E_μ[f] | ≡ Γ_n

using the letter "Γ" as a reminder that when this goes to zero, F is a Glivenko-Cantelli class (for μ). Γ_n is also a function of F and of μ, but we suppress this in the notation for brevity. We will also pass over the important and intricate, but fundamentally technical, issue of establishing that Γ_n is measurable (see [23] for a thorough treatment of this topic).

Footnote 2: This is actually a special case of the ergodic decomposition [21, pp. 25–26].

Footnote 3: Standard PAC is defined as distribution-free, but here we maintain the dependence on μ for consistency with future notation.
What one has in mind, of course, is that there is a space H of predictive models (classifiers, regressions, ...) h, and that F is the image of H through an appropriate loss function ℓ, i.e., each f ∈ F can be written as

  f(x) = ℓ(x, h(x))

for some h ∈ H. If Γ_n → 0 in probability for this "loss function" class, then empirical risk minimization is reliable. That is, the function f̂_n which minimizes the empirical risk n^{−1} Σ_t f(X_t) has an expected risk in the future which is close to the best attainable risk over all of F, R(F, μ) = inf_{f∈F} E_μ[f]. Indeed, since when n ≥ s, with high (≥ 1 − δ) probability all functions have empirical risks within ε of their true risks, with high probability the true risk E_μ[f̂_n] is within 2ε of R(F, μ). Although empirical risk minimization is not the only conceivable learning strategy, it is, in a sense, a canonical one (computational considerations aside). The latter is an immediate consequence of the VC-dimension characterization of PAC learnability:

Theorem 1 Suppose that the concept class H is PAC learnable from IID samples. Then H is learnable via empirical risk minimization.

Proof: Since H is PAC-learnable, it must necessarily have a finite VC-dimension [24]. But for finite-dimensional H and IID samples, Γ_n → 0 in probability (see [25] for a simple proof). This implies that the empirical risk minimizer is a PAC learner for H.
In extending these ideas to non-IID processes, a subtle issue arises, concerning which expectation value we would like empirical means to converge towards. In the IID case, because μ is simply the infinite product of μ^(1) and f is a function on X, we can without trouble identify expectations under the two measures with each other, and with expectations conditional on the first n observations:

  E_μ[f(X)] = E_{μ^(1)}[f(X)] = E_μ[f(X_{n+1}) | X_1^n]

Things are not so tidy when μ is the law of a dependent stochastic process.

In introducing a notion of "predictive PAC learning", Pestov [3], like Berti and Rigo [2] earlier, proposes that the target should be the conditional expectation, in our notation E_μ[f(X_{n+1}) | X_1^n]. This however presents two significant problems. First, in general there is no single value for this; it truly is a function of the past X_1^n, or at least some part of it. (Consider even the case of a binary Markov chain.) The other, and related, problem with this idea of predictive PAC is that it presents learning with a perpetually moving target. Whether the function which minimizes the empirical risk is going to do well by this criterion involves rather arbitrary details of the process. To truly pursue this approach, one would have to actually learn the predictive dependence structure of the process, a quite formidable undertaking, though perhaps not hopeless [26].

Both of these complications are exacerbated when the process producing the data is actually a mixture over simpler processes, as we have seen is very common in interesting applied settings. This is because, in addition to whatever dependence may be present within each extremal process, X_1^n gives (partial) information about what that process is. Finding E_μ[X_{n+1} | X_1^n] amounts to first finding all the individual conditional expectations, E_ν[X_{n+1} | X_1^n], and then averaging them with respect to the posterior distribution ρ(ν | X_1^n). This averaging over the posterior produces additional dependence between past and future. (See [27] on quantifying how much extra apparent Shannon information this creates.) As we show in §4 below, continuous mixtures of absolutely regular processes are far from being absolutely regular themselves. This makes it exceedingly hard, if not impossible, to use a single sample path, no matter how long, to learn anything about global expectations.
These difficulties all simply dissolve if we change the target distribution. What a learner should care about is not averaging over some hypothetical prior distribution of inaccessible alternative dynamical systems, but rather what will happen in the future of the particular realization which provided the training data, and must continue to provide the testing data. To get a sense of what this means, notice that for an ergodic ν,

  E_ν[f] = lim_{m→∞} (1/m) Σ_{i=1}^m E[f(X_{n+i}) | X_1^n]

(from [28, Cor. 4.4.1]). That is, matching expectations under the process measure ν means controlling the long-run average behavior, and not just the one-step-ahead expectation suggested in [3, 2]. Empirical risk minimization now makes sense: it is attempting to find a model which will work well not just at the next step (which may be inherently unstable), but will continue to work well, on average, indefinitely far into the future.
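The long-run-average target above can be checked numerically. The following sketch (ours, not from the paper; the chain and its parameters are invented for illustration) simulates a single realization of an ergodic two-state Markov chain and confirms that the time average of f along that one path approaches the stationary expectation E_ν[f].

```python
import random

# Two-state ergodic Markov chain: flip 0 -> 1 with prob. 0.3, 1 -> 0 with
# prob. 0.2. Its stationary distribution is pi = (0.4, 0.6).
FLIP_PROB = {0: 0.3, 1: 0.2}

def sample_path(length, start=0, seed=1):
    """One realization X_1, ..., X_length of the chain."""
    rng = random.Random(seed)
    x, path = start, []
    for _ in range(length):
        path.append(x)
        if rng.random() < FLIP_PROB[x]:
            x = 1 - x
    return path

f = lambda x: x  # f(X_t) = X_t, so E_pi[f] = pi(1) = 0.3 / (0.3 + 0.2) = 0.6
path = sample_path(200_000)
time_avg = sum(f(x) for x in path) / len(path)
print(time_avg)  # close to 0.6
```

Because the chain is ergodic, the single-path time average already recovers the process expectation; no averaging over alternative realizations is needed, which is exactly the point of the target E_ν[f | I] below.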
We are thus led to the following definition.

Definition 1 Let X_1^∞ be a stochastic process with law μ, and let I be the σ-field generated by all the invariant events. We say that μ admits predictive PAC learning of a function class F when there exists a sample-complexity function s(ε, δ, F, μ) such that, if n ≥ s, then

  P_μ( sup_{f∈F} | (1/n) Σ_{t=1}^n f(X_t) − E_μ[f | I] | ≥ ε ) ≤ δ

A class of processes P admits of distribution-free predictive PAC learning if there exists a common sample-complexity function for all μ ∈ P, in which case we write s(ε, δ, F, μ) = s(ε, δ, F, P).

As is well-known, distribution-free predictive PAC learning, in this sense, is possible for IID processes (F must have finite VC dimension). For an ergodic μ, [6] shows that s(ε, δ, F, μ) exists and is finite if and only if, once again, F has a finite VC dimension; this implies predictive PAC learning, but not distribution-free predictive PAC. Since ergodic processes can converge arbitrarily slowly, some extra condition must be imposed on P to ensure that dependence decays fast enough for each μ. A sufficient restriction is that all processes in P be stationary and absolutely regular (β-mixing), with a common upper bound on the β dependence coefficients, as [5, 14] show how to turn algorithms which are PAC on IID data into ones which are PAC on such sequences, with a penalty in extra sample complexity depending on μ only through the rate of decay of correlations (see footnote 4). We may apply these familiar results straightforwardly, because, when μ is ergodic, all invariant sets have either measure 0 or measure 1, conditioning on I has no effect, and E_μ[f | I] = E_μ[f].
Our central result is now almost obvious.

Theorem 2 Suppose that distribution-free predictive PAC learning is possible for a class of functions F and a class of processes M, with sample-complexity function s(ε, δ, F, M). Then the class of processes P formed by taking convex mixtures from M admits distribution-free PAC learning with the same sample complexity function.

Proof: Suppose that n ≥ s(ε, δ, F). Then by the law of total expectation,

  P_μ(Γ_n ≥ ε) = E_ρ[ P_μ(Γ_n ≥ ε | ν) ]    (2)
             = E_ρ[ P_ν(Γ_n ≥ ε) ]       (3)
             ≤ E_ρ[δ] = δ               (4)

In words, if the same bound holds for each component of the mixture, then it still holds after averaging over the mixture. It is important here that we are only attempting to predict the long-run average risk along the continuation of the same sample path as that which provided the training data; with this as our goal, almost all sample paths look like (indeed, are) realizations of single components of the mixture, and so the bound for extremal processes applies directly to them (see footnote 5). By contrast, there may be no distribution-free bounds at all if one does not condition on I.
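The law-of-total-expectation step in Eqs. 2-4 can be illustrated numerically. In this sketch (ours; the mixture of two Bernoulli components and all parameter values are invented for illustration), the target is E_μ[f | I], i.e. the mean of the component actually drawn, and the mixture's deviation probability is essentially the ρ-average of the component probabilities, hence no worse than their maximum.

```python
import random

def deviation_prob_component(p, n, eps, trials, rng):
    """Estimate P_nu(|empirical mean - p| >= eps) for IID Bernoulli(p):
    the component-level bound that Theorem 2 averages over."""
    hits = 0
    for _ in range(trials):
        m = sum(rng.random() < p for _ in range(n)) / n
        hits += (abs(m - p) >= eps)
    return hits / trials

rng = random.Random(2)
n, eps, trials = 400, 0.04, 3000
components = [0.3, 0.7]  # two extremal (IID) processes

per_comp = [deviation_prob_component(p, n, eps, trials, rng) for p in components]

# Mixture: first draw a component, then a path from it. Crucially the target
# is the drawn component's mean (E_mu[f | I]), not the mixture mean of 0.5.
hits = 0
for _ in range(trials):
    p = rng.choice(components)
    m = sum(rng.random() < p for _ in range(n)) / n
    hits += (abs(m - p) >= eps)
mix_prob = hits / trials
print(per_comp, mix_prob)
```

Had we instead targeted the mixture mean 0.5, the deviation probability would stay near 1 no matter how large n grows, which previews the failure of β-mixing for mixtures shown in §4.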
Footnote 4: We suspect that similar results could be derived for many of the weak dependence conditions of [29].

Footnote 5: After formulating this idea, we came across a remarkable paper by Wiener [30], where he presents a qualitative version of highly similar considerations, using the ergodic decomposition to argue that a full dynamical model of the weather is neither necessary nor even helpful for meteorological forecasting. The same paper also lays out the idea of sensitive dependence on initial conditions, and the kernel trick of turning nonlinear problems into linear ones by projecting into infinite-dimensional feature spaces.
A useful consequence of this innocent-looking result is that it lets us immediately apply PAC learning results for extremal processes to composite processes, without any penalty in the sample complexity. For instance, we have the following corollary:

Corollary 1 Let F have finite VC dimension, and have distribution-free sample complexity s(ε, δ, F, P) for all IID measures ν ∈ P. Then the class M of exchangeable processes composed from P admits distribution-free PAC learning with the same sample complexity,

  s(ε, δ, F, P) = s(ε, δ, F, M)

This is in contrast with, for instance, the results obtained by [2, 3], which both go from IID sequences (laws in P) to exchangeable ones (laws in M) at the cost of considerably increased sample complexity. The easiest point of comparison is with Theorem 4.2 of [3], which, in our notation, shows that

  s(ε, δ, M) ≤ s(ε/2, δ, P)

We pay no such penalty in sample complexity because our target of learning is E_μ[f | I], not E_μ[f | X_1^n]. This means we do not have to use data points to narrow the posterior distribution ρ(ν | X_1^n), and that we can give a much more direct argument.

In [3], Pestov asks whether "one [can] maintain the initial sample complexity" when going from IID to exchangeable sequences; the answer is yes, if one picks the right target. This holds whenever the data-generating process is a mixture of extremal processes for which learning is possible. A particularly important special case of this is that of the absolutely regular processes.
4 Mixtures of Absolutely Regular Processes
Roughly speaking, an absolutely regular process is one which is asymptotically independent in a particular sense, where the joint distribution between past and future events approaches, in L^1, the product of the marginal distributions as the time-lag between past and future grows. These are particularly important for PAC learning, since much of the existing IID learning theory translates directly to this setting.

To be precise, let X_{−∞}^∞ be a two-sided (see footnote 6) stationary process. The β-dependence coefficient at lag k is

  β(k) ≡ || P_{−∞}^0 ⊗ P_k^∞ − P_{¬(1:k)} ||_TV    (5)

where P_{¬(1:k)} is the joint distribution of X_{−∞}^0 and X_k^∞: the total variation distance between the actual joint distribution of the past and future, and the product of their marginals. Equivalently [31, 32]

  β(k) = E[ sup_{B∈σ(X_k^∞)} | P(B | X_{−∞}^0) − P(B) | ]    (6)

which, roughly, is the expected total variation distance between the marginal distribution of the future and its distribution conditional on the past.

As is well known, β(k) is non-increasing in k, and of course ≥ 0, so β(k) must have a limit as k → ∞; it will be convenient to abbreviate this as β(∞). When β(∞) = 0, the process is said to be beta mixing, or absolutely regular. All absolutely regular processes are also ergodic [32].

The importance of absolutely regular processes for learning comes essentially from a result due to Yu [4]. Let X_1^n be a part of a sample path from an absolutely regular process ν, whose dependence coefficients are β(k). Fix integers m and a such that 2ma = n, so that the sequence is divided into 2m contiguous blocks of length a, and define ν_{(m,a)} to be the distribution of m length-a blocks. (That is, ν_{(m,a)} approximates ν by IID blocks.) Then |ν(C) − ν_{(m,a)}(C)| ≤ (m − 1)β(a) [4, Lemma 4.1]. Since in particular the event C could be taken to be {Γ_n > ε}, this approximation result allows distribution-free learning bounds for IID processes to translate directly into distribution-free learning bounds for absolutely regular processes with bounded β coefficients.
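The blocking construction behind Yu's lemma is mechanical enough to sketch in code. The helper below (our own illustration; the function name and the example chain are invented) splits X_1^n into 2m contiguous length-a blocks and keeps every other one, so that the m retained blocks are separated by length-a buffers and can be treated as approximately IID at a total-variation cost of at most (m − 1)β(a).

```python
import random

def active_blocks(sequence, a):
    """Split X_1^n into 2m contiguous blocks of length a (requires n = 2ma)
    and return the m odd-indexed blocks; the even-indexed blocks serve as
    buffers of length a between them."""
    n = len(sequence)
    assert n % (2 * a) == 0, "need n = 2ma for integers m, a"
    all_blocks = [sequence[i:i + a] for i in range(0, n, a)]
    return all_blocks[0::2]

# Example path: a fast-mixing two-state Markov chain (flip prob. 0.4).
rng = random.Random(3)
x, path = 0, []
for _ in range(1200):
    path.append(x)
    if rng.random() < 0.4:
        x = 1 - x

active = active_blocks(path, a=20)  # m = 30 blocks of length 20
print(len(active), len(active[0]))
```

For a chain this fast-mixing, β(20) is tiny, so the retained blocks are very nearly independent; an IID concentration bound applied to them then transfers to the original dependent path.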
Footnote 6: We have worked with one-sided processes so far, but the devices for moving between the two representations are standard, and this definition is more easily stated in its two-sided version.
If M contains only absolutely regular processes, then a measure ρ on M creates a μ which is a mixture of absolutely regular processes, or a MAR process. It is easy to see that absolute regularity of the component processes (β_ν(k) → 0) does not imply absolute regularity of the mixture process (β_μ(k) does not go to 0). To see this, suppose that M consists of two processes: ν_0, which puts unit probability mass on the sequence of all zeros, and ν_1, which puts unit probability on the sequence of all ones; and that ρ gives these two equal probability. Then β_{ν_i}(k) = 0 for both i, but the past and the future of μ are not independent of each other.
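The zeros/ones example is easy to simulate. In this sketch (ours, not from the paper), each realization is a constant path, so within a path the past carries no extra information; across the ensemble, however, the first symbol determines the entire future, which is exactly the dependence that keeps β_μ from vanishing.

```python
import random

def draw_mar_path(length, rng):
    """One realization of the mixture mu: pick the all-zeros or the all-ones
    component with equal probability, then emit its (deterministic) path."""
    bit = rng.randrange(2)
    return [bit] * length

rng = random.Random(4)
paths = [draw_mar_path(20, rng) for _ in range(1000)]

# Under the mixture law, the first symbol predicts the last one perfectly:
# past and future are maximally dependent at every lag.
agreement = sum(p[0] == p[-1] for p in paths) / len(paths)
print(agreement)  # 1.0
```

Each component has β(k) = 0 at every lag (a constant sequence is trivially mixing), yet for the mixture the conditional law of the future given the past is degenerate rather than close to the marginal.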
More generally, suppose ρ is supported on just two absolutely regular processes, ν and ν′. For each such ν, there exists a set of typical sequences T_ν ⊂ X^∞, in which the infinite sample path of ν lies almost surely (see footnote 7), and these sets do not overlap (see footnote 8), T_ν ∩ T_{ν′} = ∅. This implies that μ(T_ν) = ρ(ν), but

  μ(T_ν | X_{−∞}^0) = 1 if X_{−∞}^0 ∈ T_ν, and 0 otherwise.

Writing ν_1, ν_2 for the two components and ρ_i = ρ(ν_i), the change in probability of T_{ν_1} due to conditioning on the past is ρ(ν_1) if the selected component was ν_2, and 1 − ρ(ν_1) = ρ(ν_2) if the selected component is ν_1. We can reason in parallel for T_{ν_2}, and so the average change in probability is

  ρ_1(1 − ρ_1) + ρ_2(1 − ρ_2) = 2ρ_1(1 − ρ_1)

and this must be β_μ(∞). Similar reasoning when ρ is supported on q extremal processes shows

  β_μ(∞) = Σ_{i=1}^q ρ_i(1 − ρ_i)

so that the general case is

  β_μ(∞) = ∫ [1 − ρ({ν})] dρ(ν)

This implies that if ρ has no atoms, β_μ(∞) = 1 always. Since β_μ(k) is non-increasing, β_μ(k) = 1 for all k, for a continuous mixture of absolutely regular processes. Conceptually, this arises because of the use of infinite-dimensional distributions for both past and future in the definition of the β-dependence coefficient. Having seen an infinite past is sufficient, for an ergodic process, to identify the process, and of course the future must be a continuation of this past.
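The formula β_μ(∞) = ∫ [1 − ρ({ν})] dρ(ν) is simple to evaluate when ρ is supported on finitely many components. A short helper (ours; the function name is invented for illustration):

```python
def beta_infinity(weights):
    """beta_mu(infinity) = sum_i rho_i * (1 - rho_i) for a mixing measure rho
    with finitely many atoms of the given weights (which must sum to 1)."""
    assert abs(sum(weights) - 1.0) < 1e-9, "rho must be a probability measure"
    return sum(w * (1 - w) for w in weights)

print(beta_infinity([1.0]))        # a single component: 0, no extra dependence
print(beta_infinity([0.5, 0.5]))   # the zeros/ones example: 2 * 0.5 * 0.5 = 0.5
print(beta_infinity([0.1] * 10))   # many small atoms: 0.9, approaching 1
```

As the atoms of ρ shrink, the value climbs toward 1, matching the atomless limit β_μ(∞) = 1 stated above.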
MARs thus display a rather odd separation between the properties of individual sample paths, which
approach independence asymptotically in time, and the ensemble-level behavior, where there is
ineradicable dependence, and indeed maximal dependence for continuous mixtures. Any one realization of a MAR, no matter how long, is indistinguishable from a realization of an absolutely
regular process which is a component of the mixture. The distinction between a MAR and a single
absolutely regular process only becomes apparent at the level of ensembles of paths.
It is desirable to characterize MARs. They are stationary, but non-ergodic, and have non-vanishing β(∞). However, this is not sufficient to characterize them. Bernoulli shifts are stationary and ergodic, but not absolutely regular (see footnote 9). It follows that a mixture of Bernoulli shifts will be stationary, and by the preceding argument will have positive β(∞), but will not be a MAR.

Roughly speaking, given the infinite past of a MAR, events in the future become asymptotically independent as the separation between them increases (see footnote 10). A more precise statement needs to control the approach to independence of the component processes in a MAR. We say that μ is a β̄-uniform MAR when μ is a MAR, and, for ρ-almost-all ν, β_ν(k) ≤ β̄(k), with β̄(k) → 0. Then if we condition on finite histories X_{−n}^0 and let n grow, widely separated future events become asymptotically independent.
Footnote 7: Since X is "standard", so is X^∞, and the latter's σ-field σ(X_{−∞}^∞) has a countable generating basis, say B. For each B ∈ B, the set T_{ν,B} = { x ∈ X^∞ : lim_{n→∞} n^{−1} Σ_{t=0}^{n−1} 1_B(τ^t x) = ν(B) } exists, is measurable, is dynamically invariant, and, by Birkhoff's ergodic theorem, ν(T_{ν,B}) = 1 [18, §7.9]. Then T_ν ≡ ∩_{B∈B} T_{ν,B} also has ν-measure 1, because B is countable.

Footnote 8: Since ν ≠ ν′, there exists at least one set C with ν(C) ≠ ν′(C). The set T_{ν,C} then cannot overlap at all with the set T_{ν′,C}, and so T_ν cannot intersect T_{ν′}.

Footnote 9: They are, however, mixing in the sense of ergodic theory [28].

Footnote 10: ρ-almost-surely, X_{−∞}^0 belongs to the typical set of a unique absolutely regular process, say ν, and so the posterior concentrates on that ν, ρ(· | X_{−∞}^0) = δ_ν. Hence μ(X_1^l, X_{l+k}^∞ | X_{−∞}^0) = ν(X_{−∞}^l, X_{l+k}^∞), which tends to ν(X_{−∞}^l) ν(X_{l+k}^∞) as k → ∞ because ν is absolutely regular.
Theorem 3 A stationary process μ is a β̄-uniform MAR if and only if

  lim_{k→∞} lim_{n→∞} E[ sup_l sup_{B∈σ(X_{k+l}^∞)} | μ(B | X_1^l, X_{−n}^0) − μ(B | X_{−n}^0) | ] = 0    (7)
Before proceeding to the proof, it is worth remarking on the order of the limits: for finite n, conditioning on X_{−n}^0 still gives a MAR, not a single (albeit random) absolutely-regular process. Hence the k → ∞ limit for fixed n would always be positive, and indeed 1 for a continuous ρ.

Proof "Only if": Re-write Eq. 7, expressing μ in terms of ρ and the extremal processes:

  lim_{k→∞} lim_{n→∞} E[ sup_l sup_{B∈σ(X_{k+l}^∞)} | ∫ [ ν(B | X_1^l, X_{−n}^0) − ν(B | X_{−n}^0) ] dρ(ν | X_{−n}^0) | ]

As n → ∞, ν(B | X_{−n}^0) → ν(B | X_{−∞}^0), and ν(B | X_1^l, X_{−n}^0) → ν(B | X_1^l, X_{−∞}^0). But, in expectation, both of these are within β̄(k) of ν(B), and so within 2β̄(k) of each other. "If": Consider the contrapositive. If μ is not a uniform MAR, either it is a non-uniform MAR, or it is not a MAR at all. If it is not a uniform MAR, then no matter what function β̄(k) tending to zero we propose, the set of ν whose dependence coefficients β_ν are not dominated by β̄ must have positive ρ measure, i.e., a positive-measure set of processes must converge arbitrarily slowly. Therefore there must exist a set B (or a sequence of such sets) witnessing this arbitrarily slow convergence, and hence the limit in Eq. 7 will be strictly positive. If μ is not a MAR at all, we know from the ergodic decomposition of stationary processes that it must be a mixture of ergodic processes, and so it must give positive ρ weight to processes which are not absolutely regular at all, i.e., ν for which β_ν(∞) > 0. The witnessing events B for these processes a fortiori drive the limit in Eq. 7 above zero.
5 Discussion and future work
We have shown that with the right conditioning, a natural and useful notion of predictive PAC emerges. This notion is natural in the sense that for a learner sampling from a mixture of ergodic processes, the only thing that matters is the future behavior of the component he is "stuck" in, and certainly not the process average over all the components. This insight enables us to adapt the recent PAC bounds for mixing processes to mixtures of such processes. An interesting question then is to characterize those processes that are convex mixtures of a given kind of ergodic process (de Finetti's theorem was the first such characterization). In this paper, we have addressed this question for mixtures of uniformly absolutely regular processes. Another fascinating question is how to extend predictive PAC to real-valued functions [33, 34].
References

[1] Mathukumalli Vidyasagar. A Theory of Learning and Generalization: With Applications to Neural Networks and Control Systems. Springer-Verlag, Berlin, 1997.

[2] Patrizia Berti and Pietro Rigo. A Glivenko-Cantelli theorem for exchangeable random variables. Statistics and Probability Letters, 32:385–391, 1997.

[3] Vladimir Pestov. Predictive PAC learnability: A paradigm for learning from exchangeable input data. In Proceedings of the 2010 IEEE International Conference on Granular Computing (GrC 2010), pages 387–391, Los Alamitos, California, 2010. IEEE Computer Society. URL http://arxiv.org/abs/1006.1129.

[4] Bin Yu. Rates of convergence for empirical processes of stationary mixing sequences. Annals of Probability, 22:94–116, 1994. URL http://projecteuclid.org/euclid.aop/1176988849.

[5] M. Vidyasagar. Learning and Generalization: With Applications to Neural Networks. Springer-Verlag, Berlin, second edition, 2003.

[6] Terrence M. Adams and Andrew B. Nobel. Uniform convergence of Vapnik-Chervonenkis classes under ergodic sampling. Annals of Probability, 38:1345–1367, 2010. URL http://arxiv.org/abs/1010.3162.

[7] Ramon van Handel. The universal Glivenko-Cantelli property. Probability Theory and Related Fields, 155:911–934, 2013. doi: 10.1007/s00440-012-0416-5. URL http://arxiv.org/abs/1009.4434.

[8] Dharmendra S. Modha and Elias Masry. Memory-universal prediction of stationary random processes. IEEE Transactions on Information Theory, 44:117–133, 1998. doi: 10.1109/18.650998.

[9] Ron Meir. Nonparametric time series prediction through adaptive model selection. Machine Learning, 39:5–34, 2000. URL http://www.ee.technion.ac.il/~rmeir/Publications/MeirTimeSeries00.pdf.

[10] Rajeeva L. Karandikar and Mathukumalli Vidyasagar. Rates of uniform convergence of empirical means with mixing processes. Statistics and Probability Letters, 58:297–307, 2002. doi: 10.1016/S0167-7152(02)00124-4.

[11] David Gamarnik. Extension of the PAC framework to finite and countable Markov chains. IEEE Transactions on Information Theory, 49:338–345, 2003. doi: 10.1145/307400.307478.

[12] Ingo Steinwart and Andreas Christmann. Fast learning from non-i.i.d. observations. In Y. Bengio, D. Schuurmans, John Lafferty, C. K. I. Williams, and A. Culotta, editors, Advances in Neural Information Processing Systems 22 [NIPS 2009], pages 1768–1776. MIT Press, Cambridge, Massachusetts, 2009. URL http://books.nips.cc/papers/files/nips22/NIPS2009_1061.pdf.

[13] Mehryar Mohri and Afshin Rostamizadeh. Stability bounds for stationary φ-mixing and β-mixing processes. Journal of Machine Learning Research, 11, 2010. URL http://www.jmlr.org/papers/v11/mohri10a.html.

[14] Mehryar Mohri and Afshin Rostamizadeh. Rademacher complexity bounds for non-I.I.D. processes. In Daphne Koller, D. Schuurmans, Y. Bengio, and Léon Bottou, editors, Advances in Neural Information Processing Systems 21 [NIPS 2008], pages 1097–1104, 2009. URL http://books.nips.cc/papers/files/nips21/NIPS2008_0419.pdf.

[15] Pierre Alquier and Olivier Wintenberger. Model selection for weakly dependent time series forecasting. Bernoulli, 18:883–913, 2012. doi: 10.3150/11-BEJ359. URL http://arxiv.org/abs/0902.2924.

[16] Ben London, Bert Huang, and Lise Getoor. Improved generalization bounds for large-scale structured prediction. In NIPS Workshop on Algorithmic and Statistical Approaches for Large Social Networks, 2012. URL http://linqs.cs.umd.edu/basilic/web/Publications/2012/london:nips12asalsn/.

[17] Ben London, Bert Huang, Benjamin Taskar, and Lise Getoor. Collective stability in structured prediction: Generalization from one example. In Sanjoy Dasgupta and David McAllester, editors, Proceedings of the 30th International Conference on Machine Learning [ICML-13], volume 28, pages 828–836, 2013. URL http://jmlr.org/proceedings/papers/v28/london13.html.

[18] Robert M. Gray. Probability, Random Processes, and Ergodic Properties. Springer-Verlag, New York, second edition, 2009. URL http://ee.stanford.edu/~gray/arp.html.

[19] E. B. Dynkin. Sufficient statistics and extreme points. Annals of Probability, 6:705–730, 1978. URL http://projecteuclid.org/euclid.aop/1176995424.

[20] Steffen L. Lauritzen. Extreme point models in statistics. Scandinavian Journal of Statistics, 11:65–91, 1984. URL http://www.jstor.org/pss/4615945. With discussion and response.

[21] Olav Kallenberg. Probabilistic Symmetries and Invariance Principles. Springer-Verlag, New York, 2005.

[22] Persi Diaconis and David Freedman. De Finetti's theorem for Markov chains. Annals of Probability, 8:115–130, 1980. doi: 10.1214/aop/1176994828. URL http://projecteuclid.org/euclid.aop/1176994828.

[23] R. M. Dudley. A course on empirical processes. In École d'été de probabilités de Saint-Flour, XII–1982, volume 1097 of Lecture Notes in Mathematics, pages 1–142. Springer, Berlin, 1984.

[24] Anselm Blumer, Andrzej Ehrenfeucht, David Haussler, and Manfred K. Warmuth. Learnability and the Vapnik-Chervonenkis dimension. Journal of the Association for Computing Machinery, 36:929–965, 1989. doi: 10.1145/76359.76371. URL http://users.soe.ucsc.edu/~manfred/pubs/J14.pdf.

[25] Stéphane Boucheron, Olivier Bousquet, and Gábor Lugosi. Theory of classification: A survey of recent advances. ESAIM: Probability and Statistics, 9:323–375, 2005. URL http://www.numdam.org/item?id=PS_2005__9__323_0.

[26] Frank B. Knight. A predictive view of continuous time processes. Annals of Probability, 3:573–596, 1975. URL http://projecteuclid.org/euclid.aop/1176996302.

[27] William Bialek, Ilya Nemenman, and Naftali Tishby. Predictability, complexity and learning. Neural Computation, 13:2409–2463, 2001. URL http://arxiv.org/abs/physics/0007070.

[28] Andrzej Lasota and Michael C. Mackey. Chaos, Fractals, and Noise: Stochastic Aspects of Dynamics. Springer-Verlag, Berlin, 1994. First edition, Probabilistic Properties of Deterministic Systems, Cambridge University Press, 1985.

[29] Jérôme Dedecker, Paul Doukhan, Gabriel Lang, José Rafael León R., Sana Louhichi, and Clémentine Prieur. Weak Dependence: With Examples and Applications. Springer, New York, 2007.

[30] Norbert Wiener. Nonlinear prediction and dynamics. In Jerzy Neyman, editor, Proceedings of the Third Berkeley Symposium on Mathematical Statistics and Probability, volume 3, pages 247–252, Berkeley, 1956. University of California Press. URL http://projecteuclid.org/euclid.bsmsp/1200502197.

[31] Paul Doukhan. Mixing: Properties and Examples. Springer-Verlag, New York, 1995.

[32] Richard C. Bradley. Basic properties of strong mixing conditions. A survey and some open questions. Probability Surveys, 2:107–144, 2005. URL http://arxiv.org/abs/math/0511078.

[33] Noga Alon, Shai Ben-David, Nicolò Cesa-Bianchi, and David Haussler. Scale-sensitive dimensions, uniform convergence, and learnability. Journal of the ACM, 44:615–631, 1997. doi: 10.1145/263867.263927. URL http://tau.ac.il/~nogaa/PDFS/learn3.pdf.

[34] Peter L. Bartlett and Philip M. Long. Prediction, learning, uniform convergence, and scale-sensitive dimensions. Journal of Computer and Systems Science, 56:174–190, 1998. doi: 10.1006/jcss.1997.1557. URL http://www.phillong.info/publications/more_theorems.pdf.
expressed:1 norbert:1 partially:1 applies:1 springer:8 ch:3 minimizer:1 acm:1 ma:1 conditional:4 goal:5 blumer:1 quantifying:1 towards:1 shared:1 considerable:1 hard:1 change:3 soe:1 infinite:8 typical:2 uniformly:1 averaging:4 lemma:1 wintenberger:1 called:1 total:3 pas:1 sanjoy:1 arp:1 invariance:1 e:1 shannon:1 latter:2 arises:2 witnessing:2 brevity:1 absolutely:27 |
4,535 | 5,103 | Adaptivity to Local Smoothness and Dimension in
Kernel Regression
Samory Kpotufe
Toyota Technological Institute-Chicago*
[email protected]
Vikas K Garg
Toyota Technological Institute-Chicago
[email protected]
Abstract
We present the first result for kernel regression where the procedure adapts locally
at a point x to both the unknown local dimension of the metric space X and the
unknown Hölder-continuity of the regression function at x. The result holds with
high probability simultaneously at all points x in a general metric space X of
unknown structure.
1 Introduction
Contemporary statistical procedures are making inroads into a diverse range of applications in the
natural sciences and engineering. However it is difficult to use those procedures "off-the-shelf"
because they have to be properly tuned to the particular application. Without proper tuning their
prediction performance can suffer greatly. This is true in nonparametric regression (e.g. tree-based,
k-NN and kernel regression) where regression performance is particularly sensitive to how well the
method is tuned to the unknown problem parameters.
In this work, we present an adaptive kernel regression procedure, i.e. a procedure which self-tunes,
optimally, to the unknown parameters of the problem at hand.
We consider regression on a general metric space X of unknown metric dimension, where the output
Y is given as f(x) + noise. We are interested in adaptivity at any input point x ∈ X: the algorithm
must self-tune to the unknown local parameters of the problem at x. The most important such
parameters (see e.g. [1, 2]), are (1) the unknown smoothness of f , and (2) the unknown intrinsic
dimension, both defined over a neighborhood of x. Existing results on adaptivity have typically
treated these two problem parameters separately, resulting in methods that solve only part of the
self-tuning problem.
In kernel regression, the main algorithmic parameter to tune is the bandwidth h of the kernel. The
problem of (local) bandwidth selection at a point x ∈ X has received considerable attention in both
the theoretical and applied literature (see e.g. [3, 4, 5]). In this paper we present the first method
which provably adapts to both the unknown local intrinsic dimension and the unknown Hölder-continuity of the regression function f at any point x in a metric space of unknown structure. The
intrinsic dimension and Hölder-continuity are allowed to vary with x in the space, and the algorithm
must thus choose the bandwidth h as a function of the query x, for all possible x ∈ X.
It is unclear how to extend global bandwidth selection methods such as cross-validation to the local
bandwidth selection problem at x. The main difficulty is that of evaluating the regression error at x
since the output Y at x is unobserved. We do have the labeled training sample to guide us in selecting
h(x), and we will show an approach that guarantees a regression rate optimal in terms of the local
problem complexity at x.
* Other affiliation: Max Planck Institute for Intelligent Systems, Germany
The result combines various insights from previous work on regression. In particular, to adapt to
Hölder-continuity, we build on acclaimed results of Lepski et al. [6, 7, 8]. In particular some such
Lepski adaptive methods consist of monitoring the change in regression estimates fn,h(x) as the
bandwidth h is varied. The selected estimate has to meet some stability criteria. The stability criteria
is designed to ensure that the selected fn,h(x) is sufficiently close to a target estimate fn,h̃(x) for
a bandwidth h̃ known to yield an optimal regression rate. These methods however are generally
instantiated for regression in R, but extend to high-dimensional regression if the dimension of the
input space X is known. In this work however the dimension of X is unknown, and in fact X is
allowed to be a general metric space with significantly less regularity than usual Euclidean spaces.
To adapt to local dimension we build on recent insights of [9] where a k-NN procedure is shown to
adapt locally to intrinsic dimension. The general idea for selecting k = k(x) is to balance surrogates
of the unknown bias and variance of the estimate. As a surrogate for the bias, nearest neighbor
distances are used, assuming f is globally Lipschitz. Since Lipschitz-continuity is a special case of
Hölder-continuity, the work of [9] corresponds in the present context to knowing the smoothness of
f everywhere. In this work we do not assume knowledge of the smoothness of f, but simply that f
is locally Hölder-continuous with unknown Hölder parameters.
Suppose we knew the smoothness of f at x; then we can derive an approach for selecting h(x),
similar to that of [9], by balancing the proper surrogates for the bias and variance of a kernel estimate.
Let h̃ be the hypothetical bandwidth so obtained. Since we don't actually know the local smoothness
of f, our approach, similar to Lepski's, is to monitor the change in estimates fn,h(x) as h varies, and
pick the estimate fn,ĥ(x) which is deemed close to the hypothetical estimate fn,h̃(x) under some
stability condition.
We prove nearly optimal local rates Õ( λ^{2d/(2α+d)} n^{−2α/(2α+d)} ) in terms of the local dimension d
at any point x and Hölder parameters λ, α depending also on x. Furthermore, the result holds with
high probability, simultaneously at all x ∈ X, for n sufficiently large. Note that we cannot union-bound
over all x ∈ X, so the uniform result relies on proper conditioning on particular events in our
variance bounds on estimates fn,h(·).
We start with definitions and theoretical setup in Section 2. The procedure is given in Section 3,
followed by a technical overview of the result in Section 4. The analysis follows in Section 5.
2 Setup and Notation
2.1 Distribution and sample
We assume the input X belongs to a metric space (X, ρ) of bounded diameter ΔX ≤ 1. The output
Y belongs to a space Y of bounded diameter ΔY. We let μ denote the marginal measure on X and
μn denote the corresponding empirical distribution on an i.i.d. sample of size n. We assume for
simplicity that ΔX and ΔY are known.
The algorithm runs on an i.i.d. training sample {(Xi, Yi)}_{i=1}^n of size n. We use the notation
X = {Xi}_1^n and Y = {Yi}_1^n.
Regression function
We assume the regression function f(x) ≜ E[Y | x] satisfies local Hölder assumptions: for every
x ∈ X and r > 0, there exist λ, α > 0 depending on x and r, such that f is (λ, α)-Hölder at x on
B(x, r):
∀x′ ∈ B(x, r),  |f(x) − f(x′)| ≤ λ ρ(x, x′)^α.
We note that the α parameter is usually assumed to be in the interval (0, 1] for global definitions
of Hölder continuity, since a global α > 1 implies that f is constant (for differentiable f). Here
however, the definition being given relative to x, we can simply assume α > 0. For instance the
function f(x) = x^α is clearly locally α-Hölder at x = 0 with constant λ = 1 for any α > 0. With
higher α = α(x), f gets flatter locally at x, and regression gets easier.
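As a quick numerical illustration of this local Hölder condition (a sketch with arbitrary grid and tolerance choices, not taken from the paper), one can check that f(x) = x^α satisfies the bound at x = 0 with constant λ = 1 for several values of α, including α > 1:

```python
# Check the local Holder condition |f(x) - f(x')| <= lam * |x - x'|**alpha
# at x = 0 for f(x) = x**alpha with lam = 1. Any alpha > 0 works locally,
# even alpha > 1, because the condition is stated relative to the point x.
def holder_ok(f, x, lam, alpha, radius, num=1000):
    for i in range(1, num + 1):
        xp = x + radius * i / num              # points x' in B(x, radius)
        if abs(f(x) - f(xp)) > lam * abs(x - xp) ** alpha + 1e-12:
            return False
    return True

for alpha in (0.5, 1.0, 2.0):
    f = lambda t, a=alpha: t ** a
    assert holder_ok(f, 0.0, 1.0, alpha, radius=1.0)
```

In contrast, a global Hölder condition with α = 2 would force f to be constant; the local version does not.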
Notion of dimension
We use the following notion of metric-dimension, also employed in [9]. This notion extends some
global notions of metric dimension to local regions of space . Thus it allows for the intrinsic dimension of the data to vary over space. As argued in [9] (see also [10] for a more general theory) it often
coincides with other natural measures of dimension such as manifold dimension.
Definition 1. Fix x ∈ X and r > 0. Let C ≥ 1 and d ≥ 1. The marginal μ is (C, d)-homogeneous
on B(x, r) if we have μ(B(x, r′)) ≤ C ε^{−d} μ(B(x, εr′)) for all r′ ≤ r and 0 < ε < 1.
In the above definition, d will be viewed as the local dimension at x. We will require a general
upper-bound d0 on the local dimension d(x) over any x in the space. This is defined below and can
be viewed as the worst-case intrinsic dimension over regions of space.
Assumption 1. The marginal μ is (C0, d0)-maximally-homogeneous for some C0 ≥ 1 and d0 ≥ 1,
i.e. the following holds for all x ∈ X and r > 0: suppose there exist C ≥ 1 and d ≥ 1 such that μ
is (C, d)-homogeneous on B(x, r); then μ is (C0, d0)-homogeneous on B(x, r).
Notice that if μ is (C, d)-homogeneous on some B(x, r), then it is (C0, d0)-homogeneous on
B(x, r) for any C0 ≥ C and d0 ≥ d. Thus, C0, d0 can be viewed as global upper-bounds on
the local homogeneity constants. By the definition, it can be the case that μ is (C0, d0)-maximally-homogeneous without being (C0, d0)-homogeneous on the entire space X.
The algorithm is assumed to know the upper-bound d0. This is a minor assumption: in many situations where X is a subset of a Euclidean space R^D, D can be used in place of d0; more generally, the
global metric entropy (log of covering numbers) of X can be used in the place of d0 (using known
relations between the present notion of dimension and metric entropies [9, 10]). The metric entropy
is relatively easy to estimate since it is a global quantity independent of any particular query x.
Finally we require that the local dimension is tight in small regions. This is captured by the following
assumption.
Assumption 2. There exist r* > 0, C′ > 0 such that if μ is (C, d)-homogeneous on some B(x, r)
where r < r*, then for any r′ ≤ r, μ(B(x, r′)) ≤ C′ r′^d.
This last assumption extends (to local regions of space) the common assumption that μ has an upper-bounded density (relative to Lebesgue). This is however more general in that μ is not required to
have a density.
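Definition 1 says ball masses scale roughly like r^d near x. A simple empirical heuristic consistent with it (a sketch for intuition, not part of the paper's algorithm) estimates the local dimension from the ratio of empirical masses of two nested balls:

```python
import math, random

# Heuristic local-dimension estimate motivated by Definition 1: if
# mu(B(x, r)) ~ r^d near x, then d ~ log2( mu_n(B(x,r)) / mu_n(B(x,r/2)) ).
def ball_mass(sample, x, r):
    return sum(1 for p in sample if math.dist(p, x) <= r) / len(sample)

def local_dim(sample, x, r):
    m1, m2 = ball_mass(sample, x, r), ball_mass(sample, x, r / 2)
    return math.log(m1 / m2, 2) if m2 > 0 else float("nan")

random.seed(0)
# A 1-d segment embedded in R^2: the local dimension at the origin is ~1,
# even though the ambient dimension is 2.
segment = [(random.uniform(-1, 1), 0.0) for _ in range(5000)]
assert 0.7 < local_dim(segment, (0.0, 0.0), 0.2) < 1.3
```

This is exactly the kind of quantity that can differ from region to region, which is why the procedure below tunes h locally at each query x.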
2.2 Kernel Regression
We consider a positive kernel K on [0, 1], highest at 0, decreasing on [0, 1], and 0 outside [0, 1]. The
kernel estimate is defined as follows: if B(x, h) ∩ X ≠ ∅,
fn,h(x) = Σ_i w_i(x) Y_i,  where  w_i(x) = K(ρ(x, X_i)/h) / Σ_j K(ρ(x, X_j)/h).
We set w_i(x) = 1/n, ∀i ∈ [n], if B(x, h) ∩ X = ∅.
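A minimal sketch of the estimator f_{n,h} just defined, in one dimension with ρ(x, x′) = |x − x′| and a box kernel (one admissible choice of K; the data below are toy values):

```python
# Kernel regression estimate f_{n,h}(x) as defined above, using a box
# kernel K(u) = 1 for u <= 1 and 0 otherwise (positive, highest at 0,
# non-increasing on [0,1], zero outside, as the definition requires).
def kernel_estimate(X, Y, x, h):
    K = lambda u: 1.0 if u <= 1.0 else 0.0
    weights = [K(abs(x - Xi) / h) for Xi in X]
    total = sum(weights)
    if total == 0.0:                 # B(x, h) contains no sample point
        return sum(Y) / len(Y)       # fall back to weights w_i = 1/n
    return sum(w * Yi for w, Yi in zip(weights, Y)) / total

X = [0.0, 0.1, 0.2, 0.9, 1.0]
Y = [0.0, 0.0, 0.0, 1.0, 1.0]
assert kernel_estimate(X, Y, 0.05, 0.2) == 0.0   # only left cluster in range
assert kernel_estimate(X, Y, 0.95, 0.2) == 1.0   # only right cluster in range
```

With a smooth decreasing kernel the weights would taper with distance instead of being 0/1, but the structure of the estimator is the same.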
3
Procedure for Bandwidth Selection at x
Definition 2 (Global cover size). Let ε > 0. Let Nρ(ε) denote an upper-bound on the size of the
smallest ε-cover of (X, ρ).
We assume the global quantity Nρ(ε) is known or pre-estimated. Recall that, as discussed in Section
2, d0 can be picked to satisfy ln(Nρ(ε)) = O(d0 log(ΔX/ε)), in other words the procedure requires
only knowledge of upper-bounds Nρ(ε) on global cover sizes.
The procedure is given as follows:
Fix ε = ΔX/n. For any x ∈ X, the set of admissible bandwidths is given as
Ĥx = { h ∈ {ΔX/2^i}_{i=0}^{⌈log(ΔX/ε)⌉} : h ≥ 16ε,  μn(B(x, h/32)) ≥ 32 ln(Nρ(ε/2)/δ) / n }.
Let Cn,δ ≜ (2K(0)/K(1)) · 9C0 4^{d0} + 4 ln(Nρ(ε/2)/δ). For any h ∈ Ĥx, define
σ̂h ≜ ΔY² Cn,δ / ( n · μn(B(x, h/2)) )  and  Dh ≜ [ fn,h(x) − 2√σ̂h ,  fn,h(x) + 2√σ̂h ].
At every x ∈ X select the bandwidth:
ĥ = max{ h ∈ Ĥx :  ⋂_{h′∈Ĥx, h′≤h} D_{h′} ≠ ∅ }.
The main difference with Lepski-type methods is in the parameter σ̂h. In Lepski's method, since
d is assumed known, a better surrogate depending on d will be used.
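The selection rule can be sketched as follows, assuming the candidate bandwidths in Ĥx, the estimates f_{n,h}(x), and the surrogates σ̂h have already been computed. The toy numbers are illustrative only: at the largest bandwidth the estimate has drifted (high bias), so its interval no longer meets the earlier ones and it is rejected.

```python
# Sketch of the interval-intersection bandwidth rule above: pick the largest
# h whose intervals D_{h'}, h' up to and including h, still share a point.
def select_bandwidth(candidates):
    """candidates: list of (h, estimate, sigma_hat), sorted by increasing h."""
    best_h = None
    for k in range(len(candidates)):
        intervals = [(e - 2 * s ** 0.5, e + 2 * s ** 0.5)
                     for _, e, s in candidates[:k + 1]]
        lo = max(a for a, _ in intervals)
        hi = min(b for _, b in intervals)
        if lo <= hi:                  # the intervals D_{h'} intersect
            best_h = candidates[k][0]
    return best_h

cands = [(0.1, 0.50, 0.04), (0.2, 0.48, 0.01),
         (0.4, 0.45, 0.0025), (0.8, 0.10, 0.0006)]
assert select_bandwidth(cands) == 0.4
```

The surrogate σ̂h shrinks as h grows (more points in the ball), so the intervals narrow until the bias pulls the estimate away and the intersection becomes empty.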
4 Discussion of Results
We have the following main theorem.
Theorem 1. Let 0 < δ < 1/e. Fix ε = ΔX/n. Let Cn,δ ≜ (2K(0)/K(1)) · 9C0 4^{d0} + 4 ln(Nρ(ε/2)/δ).
Define C2 = 4^{−d0}/(6C0). There exists N such that, for n > N, the following holds with probability at
least 1 − 2δ over the choice of (X, Y), simultaneously for all x ∈ X and all r satisfying
r* > r > rn ≜ 2 ( (2^{d0} C0² ΔX^{d0}) / (C2 λ²) )^{1/(2α+d0)} ( ΔY² Cn,δ / n )^{1/(2α+d0)}.
Let x ∈ X, and suppose f is (λ, α)-Hölder at x on B(x, r). Suppose μ is (C, d)-homogeneous on
B(x, r). Let Cr ≜ (1/(C C0)) (r/ΔX)^{d0−d}. We have
|fn,ĥ(x) − f(x)|² ≤ 96 C0 2^{d0} λ^{2d/(2α+d)} ( (2^d ΔY² Cn,δ) / (C2 Cr λ² n) )^{2α/(2α+d)}.
The result holds with high probability for all x ∈ X, and for all r* > r > rn, where rn → 0 as n → ∞.
Thus, as n grows, the procedure is eventually adaptive to the Hölder parameters in any neighborhood
of x. Note that the dimension d is the same for all r < r* by definition of r*. As previously
discussed, the definition of r* corresponds to a requirement that the intrinsic dimension is tight in
small enough regions. We believe this is a technical requirement due to our proof technique. We
hope this requirement might be removed in a longer version of the paper.
Notice that r is a factor of n in the upper-bound. Since the result holds simultaneously for all
r* > r > rn, the best tradeoff in terms of smoothness and size of r is achieved. A similar tradeoff
is observed in the result of [9].
?
As previously mentioned, the main idea behind the proof is to introduce hypothetical bandwidths h
2 2?
2
d
2 2?
?
and and h which balance respectively, ?
?h and ? h , and O(?Y /(nh )) and ? h (see Figure 1).
In the figure, d and ? are the unknown parameters in some neighborhood of point x.
The first part of the proof consists in showing that the variance of the estimate using a bandwidth
h is at most ?
?h . With high probability ?
?h is bounded above by O(?2Y /(nhd ). Thus by balancing
2
d
2 2?
?
O(?Y /(nh ) and ? h , using h we would achieve a rate of n?2?/(2?+d) . We then have to show
that the error of fn,h? cannot be too far from that fn,h? .
? being selected by the procedure, will be related to that of f ? .
Finally the error of fn,h? , h
n,h
The argument is a bit more nuanced that just described above and in Figure 1: the respective curves
O(?2Y /(nhd ) and ?2 h2? are changing with h since dimension and smoothness at x depend on the
size of the region considered. Special care has to be taken in the analysis to handle this technicality.
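The balancing step described above can be made concrete numerically (a sketch; the constants λ = ΔY = 1 and the search grid are arbitrary choices, not from the paper): minimizing the surrogate sum λ²h^{2α} + ΔY²/(n h^d) over h yields a minimizer shrinking like n^{−1/(2α+d)}, which plugged back in gives the rate n^{−2α/(2α+d)}.

```python
# Numerically balance the two surrogate curves from the discussion above:
# squared bias ~ lam^2 * h^(2*alpha) and variance ~ dY^2 / (n * h^d).
# The minimizer behaves like h* ~ n^(-1/(2*alpha + d)).
def best_h(n, alpha, d, lam=1.0, dY=1.0):
    hs = [i / 10000 for i in range(1, 10001)]          # grid over (0, 1]
    obj = lambda h: lam**2 * h**(2 * alpha) + dY**2 / (n * h**d)
    return min(hs, key=obj)

alpha, d = 1.0, 2.0
h1, h2 = best_h(10**3, alpha, d), best_h(10**6, alpha, d)
# h* should shrink roughly by (10^6 / 10^3)^(-1/(2*alpha+d)) = 10^(-3/4).
assert 0.5 * 10**-0.75 < h2 / h1 < 2 * 10**-0.75
```

Of course the whole point of the paper is that α and d are unknown and vary with x, so this balance cannot be computed directly; the interval-intersection rule mimics it from data.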
Figure 1: (Left) The proof argues over h̄, h̃ which balance respectively, σ̂h and λ²h^{2α}, and
O(ΔY²/(nh^d)) and λ²h^{2α}. The estimate under the ĥ selected by the procedure is shown to be close to
that of h̄, which in turn is shown to be close to that of h̃, which is of the right adaptive form.
(Right) Simulation results (normalized MSE against training size) comparing the error of the proposed
method to that of a global h selected by cross-validation. The test size is 1000 for all experiments.
X ⊂ R^70 has diameter 1, and is a collection of 3 disjoint flats (clusters) of dimension d1 = 2, d2 = 5,
d3 = 10, and equal mass 1/3. For each x from cluster i we have the output Y = (sin ‖x‖)^{ki} + N(0, 1)
where k1 = 0.8, k2 = 0.6, k3 = 0.4. For the implementation of the proposed method, we set
σ̂h(x) = var̂_Y / (n μn(B(x, h))), where var̂_Y is the variance of Y on the training sample. For both our
method and cross-validation, we use a box-kernel, and we vary h on an equidistant 100-knot grid
on the interval from the smallest to largest interpoint distance on the training sample.
5 Analysis
We will make use of the following bias-variance decomposition throughout the analysis. For any
x ∈ X and bandwidth h, define the expected regression estimate
f̃n,h(x) ≜ E_{Y|X} fn,h(x) = Σ_i w_i(x) f(X_i).
We have
|fn,h(x) − f(x)|² ≤ 2 |fn,h(x) − f̃n,h(x)|² + 2 |f̃n,h(x) − f(x)|².  (1)
The bias term above is easily bounded in a standard way. This is stated in the lemma below.
Lemma 1 (Bias). Let x ∈ X, and suppose f is (λ, α)-Hölder at x on B(x, h). For any h > 0, we
have |f̃n,h(x) − f(x)|² ≤ λ² h^{2α}.
Proof. We have |f̃n,h(x) − f(x)| ≤ Σ_i w_i(x) |f(X_i) − f(x)| ≤ λ h^α.
The rest of this section is dedicated to the analysis of the variance term of (1). We will need various
supporting Lemmas relating the empirical mass of balls to their true mass. This is done in the next
subsection. The variance results follow in the subsequent subsection.
5.1 Supporting Lemmas
We often argue over the following distributional counterpart to Ĥx(ε).
Definition 3. Let x ∈ X and ε > 0. Define
Hx(ε) = { h ∈ {ΔX/2^i}_{i=0}^{⌈log(ΔX/ε)⌉} : h ≥ 8ε,  μ(B(x, h/8)) ≥ 12 ln(Nρ(ε/2)/δ) / n }.
Lemma 2. Fix ε > 0 and let Zε denote an ε/2-cover of X, and let Sε = {ΔX/2^i}_{i=0}^{⌈log(ΔX/ε)⌉}. Define
αn ≜ 4 ln(Nρ(ε/2)/δ) / n. With probability at least 1 − δ, for all z ∈ Zε and h ∈ Sε we have
μn(B(z, h)) ≤ μ(B(z, h)) + √( αn μ(B(z, h)) ) + αn/3,  (2)
μ(B(z, h)) ≤ μn(B(z, h)) + √( αn μn(B(z, h)) ) + αn/3.  (3)
Proof idea. Apply Bernstein's inequality followed by a union bound on Zε and Sε.
The following two lemmas result from the above Lemma 2.
Lemma 3. Fix ε > 0 and 0 < δ < 1. With probability at least 1 − δ, for all x ∈ X and h ∈ Hx(ε),
we have for C1 = 3C0 4^{d0} and C2 = 4^{−d0}/(6C0),
C2 μ(B(x, h/2)) ≤ μn(B(x, h/2)) ≤ C1 μ(B(x, h/2)).
Lemma 4. Let 0 < δ < 1, and ε > 0. With probability at least 1 − δ, for all x ∈ X, Ĥx(ε) ⊂ Hx(ε).
Proof. Again, let Zε be an ε/2-cover and define Sε and αn as in Lemma 2. Assume (2) in the
statement of Lemma 2. Let h ≥ 16ε; we have for any z ∈ Zε and x within ε/2 of z,
μn(B(x, h/32)) ≤ μn(B(z, h/16)) ≤ 2μ(B(z, h/16)) + 2αn ≤ 2μ(B(x, h/8)) + 2αn,
and we therefore have μ(B(x, h/8)) ≥ ½ μn(B(x, h/32)) − αn. Pick h ∈ Ĥx and conclude.
5.2 Bound on the variance
The following two results, Lemma 5 and Lemma 6, serve to bound the variance of the kernel estimate.
These results are standard and included here for completeness. The main result of this section is the
variance bound of Lemma 7. This last lemma bounds the variance term of (1) with high probability
simultaneously for all x ∈ X and for values of h relevant to the algorithm.
Lemma 5. For any x ∈ X and h > 0:
E_{Y|X} |fn,h(x) − f̃n,h(x)|² ≤ Σ_i w_i²(x) ΔY².
Lemma 6. Suppose that for some x ∈ X and h > 0, μn(B(x, h)) ≠ 0. We then have:
Σ_i w_i²(x) ≤ max_i w_i(x) ≤ K(0) / ( K(1) · n μn(B(x, h)) ).
Lemma 7 (Variance bound). Let 0 < δ < 1/2 and ε > 0. Define Cn,δ ≜ (2K(0)/K(1)) · 9C0 4^{d0} +
4 ln(Nρ(ε/2)/δ). With probability at least 1 − 3δ over the choice of (X, Y), for all x ∈ X and all
h ∈ Ĥx(ε),  |fn,h(x) − f̃n,h(x)|² ≤ ΔY² Cn,δ / ( n μn(B(x, h/2)) ).
Proof. We prove the lemma statement for h ∈ Hx(ε). The result then follows for h ∈ Ĥx(ε) with
the same probability since, by Lemma 4, Ĥx(ε) ⊂ Hx(ε) under the same event of Lemma 2.
Consider any ε/2-cover Zε of X. Define αn as in Lemma 2 and assume statement (3). Let x ∈ X
and z ∈ Zε within distance ε/2 of x. Let h ∈ Hx(ε). We have
μ(B(x, h/8)) ≤ μ(B(z, h/4)) ≤ 2μn(B(z, h/4)) + 2αn ≤ 2μn(B(x, h/2)) + 2αn,
and we therefore have μn(B(x, h/2)) ≥ ½ μ(B(x, h/8)) − αn ≥ ½ αn. Thus define Hz as the
union of Hx(ε) for x ∈ B(z, ε/2). With probability at least 1 − δ, for all z ∈ Zε, x ∈ B(z, ε/2),
and h ∈ Hz, the sets B(z, h) ∩ X, B(x, h) ∩ X are all non-empty since they all contain B(x′, h/2) ∩ X
for some x′ such that h ∈ Hx′(ε). The corresponding kernel estimates are therefore well defined.
Assume w.l.o.g. that Zε is a minimal cover, i.e. all B(z, ε/2) contain some x ∈ X.
We first condition on X fixed and argue over the randomness in Y. For any x ∈ X and h > 0,
let Yx,h denote the subset of Y corresponding to points from X falling in B(x, h). We define
ψ(Yx,h) ≜ |fn,h(x) − f̃n,h(x)|.
We note that changing any Yi value changes ψ(Yz,h) by at most ΔY wi(z). Applying McDiarmid's
inequality and taking a union bound over z ∈ Zε and h ∈ Hz, we get
P( ∃z ∈ Zε, ∃h ∈ Sε : ψ(Yz,h) > E ψ(Yz,h) + t ) ≤ N²ρ(ε/2) exp( −2t² / ( ΔY² Σ_i w_i²(z) ) ).
We then have with probability at least 1 − 2δ, for all z ∈ Zε and h ∈ Hz,
|fn,h(z) − f̃n,h(z)|² ≤ 2 E_{Y|X} |fn,h(z) − f̃n,h(z)|² + 2 ln( Nρ(ε/2)/δ ) ΔY² Σ_i w_i²(z)
 ≤ 4 ln( Nρ(ε/2)/δ ) · K(0) ΔY² / ( K(1) · n μn(B(z, h)) ),  (4)
where we apply Lemma 5 and 6 for the last inequality.
Now fix any z ∈ Zε, h ∈ Hz and x ∈ B(z, ε/2). We have |ψ(Yx,h) − ψ(Yz,h)| ≤
max{ψ(Yx,h), ψ(Yz,h)} since both quantities are positive. Thus |ψ(Yx,h) − ψ(Yz,h)| changes by
at most max_{i,j} {wi(z), wj(x)} · ΔY if we change any Yi value out of the contributing Y values. By
Lemma 6, max_{i,j} {wi(z), wj(x)} ≤ ωn,h(x, z) ≜ K(0) / ( nK(1) min(μn(B(x, h)), μn(B(z, h))) ). Thus
define φh(x, z) ≜ (1/ωn,h(x, z)) |ψ(Yx,h) − ψ(Yz,h)|  and  φh(z) ≜ sup_{x : ρ(x,z) ≤ ε/2} φh(x, z). By what we
just argued, changing any Yi makes φh(z) vary by at most ΔY. We can therefore apply McDiarmid's
inequality to have that, with probability at least 1 − 3δ, for all z ∈ Zε and h ∈ Hz,
φh(z) ≤ E_{Y|X} φh(z) + ΔY √( 2 ln(Nρ(ε/2)/δ) / (2n) ).  (5)
To bound the above expectation for any z and h ∈ Hz, consider a sequence {xi}_1^∞, xi ∈ B(z, ε/2),
such that φh(xi, z) → φh(z) as i → ∞. Fix any such xi. Using Hölder's inequality and invoking Lemma
5 and Lemma 6, we have
E_{Y|X} φh(xi, z) = (1/ωn,h(xi, z)) E_{Y|X} |ψ(Yxi,h) − ψ(Yz,h)| ≤ (1/ωn,h(xi, z)) √( E_{Y|X} (ψ(Yxi,h) − ψ(Yz,h))² )
 ≤ (1/ωn,h(xi, z)) √( 2 E_{Y|X} ψ(Yxi,h)² + 2 E_{Y|X} ψ(Yz,h)² ) ≤ (1/ωn,h(xi, z)) √( 4 ΔY² ωn,h(xi, z) )
 = 2ΔY / √( ωn,h(xi, z) ) ≤ 2ΔY √( n K(1) μn(B(z, h)) / K(0) ).
Since φh(xi, z) is bounded for all xi ∈ B(z, ε), the Dominated Convergence Theorem yields
E_{Y|X} φh(z) = lim_{i→∞} E_{Y|X} φh(xi, z) ≤ 2ΔY √( n K(1) μn(B(z, h)) / K(0) ).
Therefore, using (5), we have for any z ∈ Zε, any h ∈ Hz, and any x ∈ B(z, ε/2) that, with
probability at least 1 − 3δ,
|ψ(Yx,h) − ψ(Yz,h)| ≤ ΔY ωn,h(x, z) ( 2 √( n K(1) μn(B(z, h)) / K(0) ) + √( 2 ln(Nρ(ε/2)/δ) / (2n) ) ).  (6)
Figure 2: Illustration of the selection procedure. The intervals Dh are shown containing f(x). We
will argue that fn,ĥ(x) cannot be too far from fn,h̄(x).
Now notice that ωn,h(x, z) ≤ K(0) / ( n K(1) μn(B(x, h/2)) ), so by Lemma 3,
μn(B(z, h)) ≤ μn(B(x, 2h)) ≤ C1 μ(B(x, 2h)) ≤ C1 C0 4^{d0} μ(B(x, h/2)) ≤ C2^{−1} C1 C0 4^{d0} μn(B(x, h/2)).
Hence, (6) becomes |ψ(Yx,h) − ψ(Yz,h)| ≤ 3ΔY √( C0 4^{d0} K(0) / ( n K(1) μn(B(x, h/2)) ) ).
Combine with (4), using again the fact that μn(B(z, h)) ≥ μn(B(x, h/2)), to obtain
|fn,h(x) − f̃n,h(x)|² ≤ 2 |fn,h(z) − f̃n,h(z)|² + 2 |ψ(Yx,h) − ψ(Yz,h)|²
 ≤ ( 2ΔY² / ( n μn(B(x, h/2)) ) ) · ( 9 C0 4^{d0} + 4 ln(Nρ(ε/2)/δ) ).
5.3 Adaptivity
The proof of Theorem 1 is given in the appendix. As previously discussed, the main part of the
argument consists of relating the error of fn,ĥ(x) to that of fn,h̄(x), which is of the right form for
B(x, r) appropriately defined as in the theorem statement.
To relate the error of fn,ĥ(x) to that of fn,h̄(x), we employ a simple argument inspired by Lepski's
adaptivity work. Notice that, by definition of h̄ (see Figure 1 (Left)), for any h ≤ h̄, σ̂h ≥ λ² h^{2α}.
Therefore by Lemmas 1 and 7, for any h ≤ h̄, |fn,h(x) − f(x)| ≤ 2√σ̂h, so the intervals Dh must
all contain f(x) and therefore must intersect. By the same argument ĥ ≥ h̄, and Dĥ and Dh̄ must
intersect. Now since σ̂h is decreasing, we can infer that fn,ĥ(x) cannot be too far from fn,h̄(x), so
their errors must be similar. This is illustrated in Figure 2.
References
[1] C. J. Stone. Optimal rates of convergence for non-parametric estimators. Ann. Statist., 8:1348–1360, 1980.
[2] C. J. Stone. Optimal global rates of convergence for non-parametric estimators. Ann. Statist., 10:1340–1353, 1982.
[3] W. S. Cleveland and C. Loader. Smoothing by local regression: Principles and methods. Statistical theory and computational aspects of smoothing, 1049, 1996.
[4] L. Gyorfi, M. Kohler, A. Krzyzak, and H. Walk. A Distribution Free Theory of Nonparametric
Regression. Springer, New York, NY, 2002.
[5] J. Lafferty and L. Wasserman. Rodeo: Sparse nonparametric regression in high dimensions.
Arxiv preprint math/0506342, 2005.
[6] O. V. Lepski, E. Mammen, and V. G. Spokoiny. Optimal spatial adaptation to inhomogeneous smoothness: an approach based on kernel estimates with variable bandwidth selectors. The Annals of Statistics, pages 929–947, 1997.
[7] O. V. Lepski and V. G. Spokoiny. Optimal pointwise adaptive methods in nonparametric estimation. The Annals of Statistics, 25(6):2512–2546, 1997.
[8] O. V. Lepski and B. Y. Levit. Adaptive minimax estimation of infinitely differentiable functions. Mathematical Methods of Statistics, 7(2):123–156, 1998.
[9] S. Kpotufe. k-NN Regression Adapts to Local Intrinsic Dimension. NIPS, 2011.
[10] K. Clarkson. Nearest-neighbor searching and metric space dimensions. Nearest-Neighbor
Methods for Learning and Vision: Theory and Practice, 2005.
A Comparative Framework for Preconditioned Lasso Algorithms
Fabian L. Wauthier
Statistics and WTCHG
University of Oxford
[email protected]

Nebojsa Jojic
Microsoft Research, Redmond
[email protected]

Michael I. Jordan
Computer Science Division
University of California, Berkeley
[email protected]
Abstract
The Lasso is a cornerstone of modern multivariate data analysis, yet its performance suffers in the common situation in which covariates are correlated. This
limitation has led to a growing number of Preconditioned Lasso algorithms that
pre-multiply X and y by matrices PX , Py prior to running the standard Lasso. A
direct comparison of these and similar Lasso-style algorithms to the original Lasso
is difficult because the performance of all of these methods depends critically on
an auxiliary penalty parameter λ. In this paper we propose an agnostic framework
for comparing Preconditioned Lasso algorithms to the Lasso without having to
choose λ. We apply our framework to three Preconditioned Lasso instances and
highlight cases when they will outperform the Lasso. Additionally, our theory
reveals fragilities of these algorithms to which we provide partial solutions.
1  Introduction
Variable selection is a core inferential problem in a multitude of statistical analyses. Confronted with
a large number of (potentially) predictive variables, the goal is to select a small subset of variables
that can be used to construct a parsimonious model. Variable selection is especially relevant in linear
observation models of the form
y = Xβ* + w  with  w ∼ N(0, σ² I_{n×n}),   (1)
where X is an n × p matrix of features or predictors, β* is an unknown p-dimensional regression
parameter, and w is a noise vector. In high-dimensional settings where n ≪ p, ordinary least squares
is generally inappropriate. Assuming that β* is sparse (i.e., the support set S(β*) ≜ {i | β*_i ≠ 0} has
cardinality k < n), a mainstay algorithm for such settings is the Lasso [10]:
Lasso:  β̂ = argmin_{β ∈ R^p}  (1/(2n)) ||y − Xβ||₂² + λ ||β||₁.   (2)
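Eq. (2) is a convex problem, commonly solved by cyclic coordinate descent with soft-thresholding. The following is a minimal numpy sketch, not the implementation used in the paper; the fixed iteration budget is an assumption made for brevity:

```python
import numpy as np

def lasso_cd(X, y, lam, n_iter=200):
    """Cyclic coordinate descent for (1/(2n))||y - X b||_2^2 + lam*||b||_1.

    Minimal sketch: assumes every column of X is nonzero, and runs a
    fixed number of sweeps instead of testing convergence.
    """
    n, p = X.shape
    b = np.zeros(p)
    curv = (X ** 2).sum(axis=0) / n      # per-coordinate curvature
    r = y.copy()                         # running residual y - X b
    for _ in range(n_iter):
        for j in range(p):
            r += X[:, j] * b[j]          # put coordinate j back into r
            rho = X[:, j] @ r / n
            b[j] = np.sign(rho) * max(abs(rho) - lam, 0.0) / curv[j]
            r -= X[:, j] * b[j]
    return b
```

For λ above max_j |X_j^T y|/n the solution is exactly zero, which is one side of the tuning difficulty discussed below.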
For a particular choice of λ, the variable selection properties of the Lasso can be analyzed by quantifying how well the estimated support S(β̂) approximates the true support S(β*). More careful
analyses focus instead on recovering the signed support S±(β*),

S±(β*_i) ≜ { +1 if β*_i > 0;  −1 if β*_i < 0;  0 otherwise.   (3)
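In code, the entrywise signed support pattern of Eq. (3) is immediate (a sketch):

```python
import numpy as np

def signed_support(beta):
    """S_pm(beta): entrywise +1 / -1 / 0 pattern of Eq. (3)."""
    return np.sign(beta).astype(int)
```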
Theoretical developments during the last decade have shed light onto the support recovery properties of the Lasso and highlighted practical difficulties when the columns of X are correlated. These
developments have led to various conditions on X for support recovery, such as the mutual incoherence or the irrepresentable condition [1, 3, 8, 12, 13].
In recent years, several modifications of the standard Lasso have been proposed to improve its
support recovery properties [2, 7, 14, 15]. In this paper we focus on a class of "Preconditioned
Lasso" algorithms [5, 6, 9] that pre-multiply X and y by suitable matrices P_X and P_y to yield
X̃ = P_X X, ỹ = P_y y, prior to running Lasso. Thus, the general strategy of these methods is

Preconditioned Lasso:  β̃ = argmin_{β ∈ R^p}  (1/(2n)) ||ỹ − X̃β||₂² + λ̃ ||β||₁.   (4)
Although this class of algorithms often compares favorably to the Lasso in practice, our theoretical
understanding of them is at present still fairly poor. Huang and Jojic [5], for example, consider
only empirical evaluations, while both Jia and Rohe [6] and Paul et al. [9] consider asymptotic
consistency under various assumptions. Important and necessary as they are, consistency results do
not provide insight into the relative performance of Preconditioned Lasso variants for finite data sets.
In this paper we provide a new theoretical basis for making such comparisons. Although the focus
of the paper is on problems of the form of Eq. (4), we note that the core ideas can also be applied to
algorithms that right-multiply X and/or y with some matrices (e.g., [4, 11]).
For particular instances of X, β*, we want to discover whether a given Preconditioned Lasso algorithm following Eq. (4) improves or degrades signed support recovery relative to the standard
Lasso of Eq. (2). A major roadblock to a one-to-one comparison are the auxiliary penalty parameters, λ, λ̃, which trade off the ℓ₁ penalty against the quadratic objective in both Eq. (2) and Eq. (4).
A correct choice of penalty parameter is essential for signed support recovery: if it is too small,
the algorithm behaves like ordinary least squares; if it is too large, the estimated support may be
empty. Unfortunately, in all but the simplest cases, pre-multiplying data X, y by matrices P_X, P_y
changes the relative geometry of the ℓ₁ penalty contours to the elliptical objective contours in a
nontrivial way. Suppose we wanted to compare the Lasso to the Preconditioned Lasso by choosing
for each λ in Eq. (2) a suitable, matching λ̃ in Eq. (4). For a fair comparison, the resulting mapping would have to capture the change of relative geometry induced by preconditioning of X, y,
i.e. λ̃ = f(λ, X, y, P_X, P_y). It seems difficult to theoretically characterize such a mapping. Furthermore, it seems unlikely that a comparative framework could be built by independently choosing
"ideal" penalty parameters λ, λ̃: Meinshausen and Bühlmann [8], for example, demonstrate that a
seemingly reasonable oracle estimator of λ will not lead to consistent support recovery in the Lasso.
In the Preconditioned Lasso literature this problem is commonly sidestepped either by resorting
to asymptotic comparisons [6, 9], empirically comparing regularization paths [5], or using model-selection techniques which aim to choose reasonably "good" matching penalty parameters [6]. We
deem these approaches to be unsatisfactory: asymptotic and empirical analyses provide limited insight, and model selection strategies add a layer of complexity that may lead to unfair comparisons.
It is our view that all of these approaches place unnecessary emphasis on particular choices of
penalty parameter. In this paper we propose an alternative strategy that instead compares the Lasso to
the Preconditioned Lasso by comparing data-dependent upper and lower penalty parameter bounds.
Specifically, we give bounds (λ_u, λ_l) on λ so that the Lasso in Eq. (2) is guaranteed to recover the
signed support iff λ_l < λ < λ_u. Consequently, if λ_l > λ_u, signed support recovery is not possible.
The Preconditioned Lasso in Eq. (4) uses data X̃ = P_X X, ỹ = P_y y and will thus induce new
bounds (λ̃_u, λ̃_l) on λ̃. The comparison of Lasso and Preconditioned Lasso on an instance X, β*
then proceeds by suitably comparing the bounds on λ and λ̃. The advantage of this approach is that
the upper and lower bounds are easy to compute, even though a general mapping between specific
penalty parameters cannot be readily derived.
To demonstrate the effectiveness of our framework, we use it to analyze three Preconditioned Lasso
algorithms [5, 6, 9]. Using our framework we make several contributions: (1) We confirm intuitions
about advantages and disadvantages of the algorithms proposed in [5, 9]; (2) We show that for an
SVD-based construction of n × p matrices X, the algorithm in [6] changes the bounds deterministically; (3) We show that in the context of our framework, this SVD-based construction can be
thought of as a limit point of a Gaussian construction.
The paper is organized as follows. In Section 2 we will discuss three recent instances of Eq. (4). We
outline our comparative framework in Section 3 and highlight some immediate consequences for [5]
and [9] on general matrices X in Section 4. More detailed comparisons can be made by considering
a generative model for X. In Section 5 we introduce such a model based on a block-wise SVD of X
and then analyze [6] for specific instances of this generative model. Finally, we show that in terms
of signed support recovery, this generative model can be thought of as a limit point of a Gaussian
construction. Section 6 concludes with some final thoughts. The proofs of all lemmas and theorems
are in the supplementary material.
2  Preconditioned Lasso Algorithms
Our interest lies in the class of Preconditioned Lasso algorithms that is summarized by Eq. (4).
Extensions to related algorithms, such as [4, 11] will follow readily. In this section we focus on three
recent Preconditioned Lasso examples and instantiate the matrices PX , Py appropriately. Detailed
derivations can be found in the supplementary material. For later reference, we will denote each
algorithm by the author initials.
Huang and Jojic [5] (HJ). Huang and Jojic proposed Correlation Sifting [5], which, although
not presented as a preconditioning algorithm, can be rewritten as one. Let the SVD of X be
X = U D V^T. Given an algorithm parameter q, let U_A be the set of q smallest left singular vectors of X.¹
Then HJ amounts to setting

P_X = P_y = U_A U_A^T.   (5)
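A sketch of the HJ transform of Eq. (5), assuming only that numpy's SVD returns singular values in decreasing order:

```python
import numpy as np

def hj_projector(X, q):
    """P_X = P_y = U_A U_A^T, with U_A the q smallest left singular
    vectors of X, as in Eq. (5). Sketch only; q is the user-chosen
    algorithm parameter."""
    U, _, _ = np.linalg.svd(X, full_matrices=False)
    U_A = U[:, -q:]          # numpy orders singular values descending
    return U_A @ U_A.T
```

The result is a rank-q orthogonal projector, applied to both X and y.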
Paul et al. [9] (PBHT). An earlier instance of the preconditioning idea was put forward by Paul
et al. [9]. For some algorithm parameter q, let A be the q column indices of X with largest absolute
correlation to y (i.e., where |X_j^T y| / ||X_j||₂ is largest). Define U_A to be the q largest left singular
vectors of X_A. With this, PBHT can be expressed as setting

P_X = I_{n×n},   P_y = U_A U_A^T.   (6)
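A sketch of the PBHT construction of Eq. (6); the tie-breaking in the top-q selection is an implementation choice, not specified by the source:

```python
import numpy as np

def pbht_projector(X, y, q):
    """P_y = U_A U_A^T, with U_A the q largest left singular vectors of
    X_A, where A indexes the q columns most correlated with y (Eq. (6)).
    P_X stays the identity. Sketch only."""
    corr = np.abs(X.T @ y) / np.linalg.norm(X, axis=0)
    A = np.argsort(corr)[-q:]             # top-q columns by |X_j^T y| / ||X_j||_2
    U, _, _ = np.linalg.svd(X[:, A], full_matrices=False)
    return U[:, :q] @ U[:, :q].T
```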
Jia and Rohe [6] (JR). Jia and Rohe [6] propose a preconditioning method that amounts to
whitening the matrix X. If X = U D V^T is full rank, then JR defines²

P_X = P_y = U (D D^T)^{−1/2} U^T.   (7)

If n < p then X̃ X̃^T = P_X X X^T P_X^T ∝ I_{n×n}, and if n > p then X̃^T X̃ = X^T P_X^T P_X X ∝ I_{p×p}.
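A sketch of the JR whitening of Eq. (7), using the pseudo-inverse of D for rank-deficient X as allowed in footnote 2:

```python
import numpy as np

def jr_preconditioner(X):
    """P_X = P_y = U (D D^T)^{-1/2} U^T of Eq. (7); falls back to the
    pseudo-inverse of D when X is rank deficient. Sketch only."""
    U, d, _ = np.linalg.svd(X, full_matrices=False)
    d_inv = np.where(d > 1e-12, 1.0 / d, 0.0)   # pseudo-inverse of spectrum
    return U @ np.diag(d_inv) @ U.T
```

Applying this P to a full-rank X with n < p makes the rows of X̃ exactly orthonormal, which is the whitening property stated above.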
Both HJ and PBHT estimate a basis UA for a q-dimensional subspace onto which they project y
and/or X. However, since the methods differ substantially in their assumptions, the estimators differ
also. Empirical results in [5] and [9] suggest that the respective assumptions are useful in a variety of
situations. In contrast, JR reweights the column space directions U and requires no extra parameter
q to be estimated.
3  Comparative Framework
In this section we propose a new comparative approach for Preconditioned Lasso algorithms which
avoids choosing particular penalty parameters λ, λ̃. We first derive upper and lower bounds for λ
and λ̃ respectively so that signed support recovery can be guaranteed iff λ and λ̃ satisfy the bounds.
We then compare estimators by comparing the resulting bounds.
3.1  Conditions for signed support recovery
Before proceeding, we make some definitions motivated by Wainwright [12]. Suppose that the
support set of β* is S ≜ S(β*), with |S| = k. To simplify notation, we will assume throughout that
S = {1, …, k}, so that the corresponding off-support set is S^c = {1, …, p} \ S, with |S^c| = p − k.
Denote by X_j column j of X and by X_A the submatrix of X consisting of columns indexed by set
A. Define the following variables: for all j ∈ S^c and i ∈ S, let

μ_j = X_j^T X_S (X_S^T X_S)^{−1} sgn(β*_S),   ζ_j = (1/n) X_j^T (I_{n×n} − X_S (X_S^T X_S)^{−1} X_S^T) w,   (8)

ν_i = e_i^T ((1/n) X_S^T X_S)^{−1} sgn(β*_S),   γ_i = e_i^T ((1/n) X_S^T X_S)^{−1} (1/n) X_S^T w.   (9)
¹ The choice of smallest singular vectors is considered for matrices X with sharply decaying spectrum.
² We note that Jia and Rohe [6] let D be square, so that it can be directly inverted. If X is not full rank, the pseudo-inverse of D can be used.
[Figure 1: two panels plotting the empirical probability P(S±(β̂) = S±(β*)) against the factor f. (a) Signed support recovery around λ_l. (b) Signed support recovery around λ_u.]

Figure 1: Empirical evaluation of the penalty parameter bounds of Lemma 1. For each of 500
synthetic Lasso problems (n = 300, p = 1000, k = 10) we computed λ_l, λ_u as per Lemma 1.
Then we ran Lasso using penalty parameters f·λ_l in Figure (a) and f·λ_u in Figure (b), where the
factor f = 0.5, …, 1.5. The figures show the empirical probability of signed support recovery as a
function of the factor f for both λ_l and λ_u. As expected, the probabilities change sharply at f = 1.
For the traditional Lasso of Eq. (2), results in (for example) Wainwright [12] connect settings of λ
with instances of X, β*, w to certify whether or not Lasso will recover the signed support. We invert
these results and, for particular instances of X, β*, w, derive bounds on λ so that signed support
recovery is guaranteed if and only if the bounds are satisfied. Specifically, we prove the following
Lemma in the supplementary material.
Lemma 1. Suppose that X_S^T X_S is invertible, |μ_j| < 1 ∀j ∈ S^c, and sgn(β*_i) ν_i > 0 ∀i ∈ S. Then
the Lasso has a unique solution β̂ which recovers the signed support (i.e., S±(β̂) = S±(β*)) if and
only if λ_l < λ < λ_u, where

λ_u = min_{i∈S} |(β*_i + γ_i) / ν_i|_+,   λ_l = max_{j∈S^c} ζ_j / ((2⟦μ_j > 0⟧ − 1) − μ_j),   (10)

⟦·⟧ denotes the indicator function and |·|_+ = max(0, ·) denotes the hinge function. On the other
hand, if X_S^T X_S is not invertible, then the signed support cannot in general be recovered.
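Given an instance (X, β*, w), the bounds of Eq. (10) can be computed directly from Eqs. (8)-(9). A simulation sketch (the noise w must be known, so this applies to synthetic experiments such as Figure 1, and the lemma's preconditions are assumed to hold):

```python
import numpy as np

def lemma1_bounds(X, beta_star, w):
    """Bounds (lam_l, lam_u) of Lemma 1: signed-support recovery holds
    iff lam_l < lam < lam_u. Assumes X_S^T X_S invertible, |mu_j| < 1,
    and sgn(beta_i) nu_i > 0, as in the lemma."""
    n, p = X.shape
    S = np.flatnonzero(beta_star)
    Sc = np.setdiff1d(np.arange(p), S)
    XS, s = X[:, S], np.sign(beta_star[S])
    G_inv = np.linalg.inv(XS.T @ XS)
    mu = X[:, Sc].T @ XS @ G_inv @ s                        # Eq. (8)
    zeta = X[:, Sc].T @ (w - XS @ G_inv @ (XS.T @ w)) / n   # Eq. (8)
    nu = n * (G_inv @ s)                                    # Eq. (9)
    gamma = G_inv @ (XS.T @ w)                              # Eq. (9)
    lam_u = np.min(np.maximum((beta_star[S] + gamma) / nu, 0.0))
    lam_l = np.max(zeta / ((2 * (mu > 0) - 1) - mu))
    return lam_l, lam_u
```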
Lemma 1 recapitulates well-worn intuitions about when the Lasso has difficulty recovering the
signed support. For instance, assuming that w has a symmetric distribution with mean 0, if 1 − |μ_j|
is small (i.e., the irrepresentable condition almost fails to hold), then λ_l will tend to be large. In
extreme cases we might have λ_l > λ_u, so that signed support recovery is impossible. Figure 1 empirically validates the bounds of Lemma 1 by estimating probabilities of signed support recovery for
a range of penalty parameters on synthetic Lasso problems.
3.2  Comparisons

In this paper we propose to compare a preconditioning algorithm to the traditional Lasso by comparing the penalty parameter bounds produced by Lemma 1. As highlighted in Eq. (4), the preconditioning framework runs Lasso on modified variables X̃ = P_X X, ỹ = P_y y. For the purpose of applying
Lemma 1, these transformations induce a new noise vector

w̃ = ỹ − X̃β* = P_y (Xβ* + w) − P_X Xβ*.   (11)

Note that if P_X = P_y then w̃ = P_y w. Provided the conditions of Lemma 1 hold for X̃, β*, we can
define updated variables μ̃_j, ν̃_i, ζ̃_j, γ̃_i from which the bounds λ̃_u, λ̃_l on the penalty parameter λ̃ can
be derived. In order for our comparison to be scale-invariant, we will compare algorithms by ratios
of resulting penalty parameter bounds. That is, we deem a Preconditioned Lasso algorithm to be
more effective than the traditional Lasso if λ̃_u/λ̃_l > λ_u/λ_l. Intuitively, the upper bound λ̃_u is then
disproportionately larger than λ̃_l relative to λ_u and λ_l, which in principle allows easier tuning of λ̃.³
We will later encounter the special case λ̃_u ≠ 0, λ̃_l = 0, in which case we define λ̃_u/λ̃_l ≜ ∞ to
indicate that the preconditioned problem is very easy. If λ̃_u/λ̃_l < 1 then signed support recovery is
in general impossible. Finally, to match this intuition, we define λ̃_u/λ̃_l ≜ 0 if λ̃_u = λ̃_l = 0.
³ Other functions of λ_l, λ_u and λ̃_l, λ̃_u could also be considered. However, we find the ratio to be a particularly intuitive measure.
4  General Comparisons
We begin our comparisons with some immediate consequences of Lemma 1 for HJ and PBHT. In
order to highlight the utility of the proposed framework, we focus in this section on special cases of
PX , Py . The framework can of course also be applied to general matrices PX , Py . As we will see,
both HJ and PBHT have the potential to improve signed support recovery relative to the traditional
Lasso, provided the matrices P_X, P_y are suitably estimated. The following notation will be used
during our comparisons: we will write Ã ⪰ A to indicate that random variable Ã stochastically
dominates A, that is, ∀t P(Ã ≥ t) ≥ P(A ≥ t). We also let U_S be a minimal basis for the column
space of the submatrix X_S, and define span(U_S) = {x | ∃c ∈ R^k s.t. x = U_S c} ⊆ R^n. Finally, we
let U_{S^c} be a minimal basis for the orthogonal complement of span(U_S).
Consequences for HJ. Recall from Section 2 that HJ uses P_X = P_y = U_A U_A^T, where U_A is a
column basis estimated from X. We have the following theorem:

Theorem 1. Suppose that the conditions of Lemma 1 are met for a fixed instance of X, β*. If
span(U_S) ⊆ span(U_A), then after preconditioning using HJ the conditions continue to hold, and

λ̃_u / λ̃_l ⪰ λ_u / λ_l,   (12)

where the stochasticity on both sides is due to independent noise vectors w. On the other hand, if
X_S^T P_X^T P_X X_S is not invertible, then HJ cannot in general recover the signed support.
We briefly sketch the proof of Theorem 1. If span(U_S) ⊆ span(U_A), then plugging the definition
of P_X into μ̃_j, ν̃_i, ζ̃_j, γ̃_i, one can derive the following:

μ̃_j = μ_j,   ν̃_i = ν_i,   (13)

ζ̃_j = (1/n) X_j^T (I_{n×n} − U_S U_S^T) U_A U_A^T w,   γ̃_i = γ_i.   (14)

If span(U_A) = span(U_S), then it is easy to see that ζ̃_j = 0. Notice that because μ̃_j and ν̃_i are unchanged, if the conditions of Lemma 1 hold for the original Lasso problem (i.e., X_S^T X_S is invertible,
|μ_j| < 1 ∀j ∈ S^c and sgn(β*_i) ν_i > 0 ∀i ∈ S), they will continue to hold for the preconditioned
problem. Suppose then that the conditions set forth in Lemma 1 are met. With some additional work
one can show that

λ̃_u = min_{i∈S} |(β*_i + γ̃_i) / ν̃_i|_+ = λ_u,   λ̃_l = max_{j∈S^c} ζ̃_j / ((2⟦μ̃_j > 0⟧ − 1) − μ̃_j) ⪯ λ_l.   (15)

The result then follows by showing that λ̃_l, λ_l are both independent of λ̃_u = λ_u. Note that if
span(U_A) = span(U_S), then λ̃_l = 0 and so λ̃_u/λ̃_l ≜ ∞. In the more common case when
span(U_S) ⊄ span(U_A), the performance of the Lasso depends on how misaligned U_A and U_S are. In
extreme cases, X_S^T P_X^T P_X X_S is singular and so signed support recovery is not in general possible.
Consequences for PBHT. Recall from Section 2 that PBHT uses P_X = I_{n×n}, P_y = U_A U_A^T,
where U_A is a column basis estimated from X. We have the following theorem.

Theorem 2. Suppose that the conditions of Lemma 1 are met for a fixed instance of X, β*. If
span(U_S) ⊆ span(U_A), after preconditioning using PBHT the conditions continue to hold, and

λ̃_u / λ̃_l ⪰ λ_u / λ_l,   (16)

where the stochasticity on both sides is due to independent noise vectors w. On the other hand, if
span(U_{S^c}) = span(U_A), then PBHT cannot recover the signed support.
As before, we sketch the proof to build some intuition. Because PBHT does not set P_X = P_y as HJ
does, there is no danger of X_S^T P_X^T P_X X_S becoming singular. On the other hand, this complicates
the form of the induced noise vector w̃. Plugging P_X and P_y into Eq. (11), we find
w̃ = (U_A U_A^T − I_{n×n}) Xβ* + U_A U_A^T w. However, even though the noise has a more complicated form, derivations
in the supplementary material show that if span(U_S) ⊆ span(U_A), then

μ̃_j = μ_j,   ν̃_i = ν_i,   (17)

ζ̃_j = (1/n) X_j^T (I_{n×n} − U_S U_S^T) U_A U_A^T w,   γ̃_i = γ_i.   (18)
[Figure 2: (a) empirical c.d.f.s P(λ_u/λ_l < t) for the Lasso and for preconditioned problems with dim(U_A) ∈ {10, 15, 25, 35, 55}; (b) the bounds-ratio scale factor λ̃_u/λ̃_l ÷ (λ_u/λ_l) against f for orthogonal and Gaussian data.]

Figure 2: Experimental evaluations. Figure (a) shows empirical c.d.f.s of penalty parameter bounds
ratios estimated from 1000 variable selection problems. Each problem consists of Gaussian X and
w, and β*, with n = 100, p = 300, k = 5. The blue curve shows the c.d.f. for λ_u/λ_l estimated on
the original data (Lasso). Then we projected the data using P_X = P_y = U_A U_A^T, where span(U_S) ⊆
span(U_A) but dim(U_A) = dim(span(U_A)) is variable (see legend), and estimated the resulting
c.d.f. for the updated bounds ratio λ̃_u/λ̃_l. As predicted by Theorems 1 and 2, λ_u/λ_l ⪯ λ̃_u/λ̃_l. In
Figure (b) the blue curve shows the scale factor (p − k)/(n + pη² − k) predicted by Theorem 3 for
problems constructed from Eq. (19) for η = f·√(1 − (n/p)). The red curve plots the corresponding
factor estimated from the Gaussian construction in Eq. (25) (n = 100, m = 2000, p = 200, k = 5)
using the same Σ_S, Σ_{S^c} as in Theorem 3, averaged over 50 problem instances and with error bars
for one standard deviation. As in Theorem 3, the factor is approximately 1 if f = 1.
As with HJ, if span(U_A) = span(U_S), then ζ̃_j = 0. Because μ̃_j and ν̃_i are again unchanged,
the conditions of Lemma 1 will continue to hold for the preconditioned problem if they hold for
the original Lasso problem. With the previous equalities established, the remainder of the proof
is identical to that of Theorem 1. The fact that the above μ̃_j, ζ̃_j, ν̃_i, γ̃_i are identical to those of HJ
depends crucially on the fact that span(U_S) ⊆ span(U_A). In general the values will differ because
PBHT sets P_X = I_{n×n}, but HJ does not.

On the other hand, if span(U_S) ⊄ span(U_A) then the distribution of γ̃_i depends on how misaligned
U_A and U_S are. In the extreme case when span(U_{S^c}) = span(U_A), one can show that γ̃_i = −β*_i,
which results in λ̃_u = 0, λ̃_l ⪯ λ_l. Because P(λ̃_l ≥ 0) = 1, signed support recovery is not possible.
Remarks. Our theoretical analyses show that both HJ and PBHT can indeed lead to improved
signed support recovery relative to the Lasso on finite datasets. To underline our findings, we empirically validate Theorems 1 and 2 in Figure 2(a), where we plot estimated c.d.f.s for penalty
parameter bounds ratios of Lasso and Preconditioned Lasso for various subspaces U_A. Our theorems focused on specific settings of P_X, P_y and ignored others. In general, the gains of HJ and
PBHT over Lasso depend on how much the decoy signals in XS c are suppressed and how much of
the true signal due to XS is preserved. Further comparison of HJ and PBHT must thus analyze how
the subspaces span(UA ) are estimated in the context of the assumptions made in [5] and [9]. A final
note concerns the dimension of the subspace span(UA ). Both HJ and PBHT were proposed with the
implicit goal of finding a basis UA that has the same span as US . This of course requires estimating
|S| = k by q, which adds another layer of complexity to these algorithms. Theorems 1 and 2 suggest that underestimating k can be more detrimental to signed support recovery than overestimating
it. By overestimating q > k, we can trade off milder improvement when span(U_S) ⊆ span(U_A)
against poor behavior should we have span(U_S) ⊄ span(U_A).
5  Model-Based Comparisons

In the previous section we used Lemma 1 in conjunction with assumptions on U_A to make statements
about HJ and PBHT. Of course, the quality of the estimated U_A depends on the specific instances
X, β*, w, which hinders a general analysis. Similarly, a direct application of Lemma 1 to JR yields
bounds that exhibit strong X dependence. It is possible to crystallize prototypical examples by
specializing X and w to come from a generative model. In this section we briefly present this model
and will show the resulting penalty parameter bounds for JR.
5.1  Generative model for X

As discussed in Section 2, many preconditioning algorithms can be phrased as truncating or
reweighting column subspaces associated with X [5, 6, 9]. This suggests that a natural generative
model for X can be formulated in terms of the SVD of submatrices of X.

Assume p − k > n and let Σ_S, Σ_{S^c} be fixed-spectrum matrices of dimension n × k and
n × (p − k) respectively. We will assume throughout this paper that the top left "diagonal" entries of
Σ_S, Σ_{S^c} are positive and the remainder is zero. Furthermore, we let U, V_S, V_{S^c} be orthonormal
bases of dimension n × n, k × k and (p − k) × (p − k) respectively. We assume that these bases are
chosen uniformly at random from the corresponding Stiefel manifold. As before and without loss of
generality, suppose S = {1, …, k}. Then we let the Lasso problem be

y = Xβ* + w  with  X = U [Σ_S V_S^T, Σ_{S^c} V_{S^c}^T],  w ∼ N(0, σ² I_{n×n}).   (19)

To ensure that the column norms of X are controlled, we compute the spectra Σ_S, Σ_{S^c} by normalizing spectra Σ̄_S and Σ̄_{S^c} with arbitrary positive elements on the diagonal. Specifically, we let

Σ_S = Σ̄_S √(kn) / ||Σ̄_S||_F,   Σ_{S^c} = Σ̄_{S^c} √((p − k)n) / ||Σ̄_{S^c}||_F.   (20)

We verify in the supplementary material that with these assumptions the squared column norms of
X are in expectation n (provided the orthonormal bases are chosen uniformly at random).
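The model of Eqs. (19)-(20) can be sampled as follows; a sketch, using the QR decomposition of a Gaussian matrix as a stand-in for a uniform Stiefel draw:

```python
import numpy as np

def block_svd_design(n, p, k, sig_S, sig_Sc, rng):
    """Sample X = U [Sigma_S V_S^T, Sigma_Sc V_Sc^T] as in Eq. (19),
    with spectra normalized as in Eq. (20). sig_S has length k and
    sig_Sc length n (requires p - k >= n). Sketch only."""
    def haar(d):                              # orthonormal d x d basis
        q, r = np.linalg.qr(rng.standard_normal((d, d)))
        return q * np.sign(np.diag(r))        # sign fix for uniformity
    U, V_S, V_Sc = haar(n), haar(k), haar(p - k)
    Sig_S = np.zeros((n, k));      Sig_S[:k, :k] = np.diag(sig_S)
    Sig_Sc = np.zeros((n, p - k)); Sig_Sc[:n, :n] = np.diag(sig_Sc)
    Sig_S *= np.sqrt(k * n) / np.linalg.norm(Sig_S)            # Eq. (20)
    Sig_Sc *= np.sqrt((p - k) * n) / np.linalg.norm(Sig_Sc)    # Eq. (20)
    return U @ np.hstack([Sig_S @ V_S.T, Sig_Sc @ V_Sc.T])
```

Since the orthonormal factors preserve Frobenius norm, the normalization makes the total squared norm of X exactly np, i.e., squared column norms average to n.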
Intuition. Note that any matrix X can be decomposed using a block-wise SVD as

X = [X_S, X_{S^c}] = U [Σ_S V_S^T, T Σ_{S^c} V_{S^c}^T],   (21)

with orthonormal bases U, T, V_S, V_{S^c}. Our model in Eq. (19) is only a minor restriction of this
model, where we set T = I_{n×n}. To develop more intuition, let us temporarily set V_S = I_{k×k},
V_{S^c} = I_{(p−k)×(p−k)}. Then X = [X_S, X_{S^c}] = U [Σ_S, Σ_{S^c}] and we see that up to scaling X_S equals
the first k columns of X_{S^c}. The difficulty for Lasso thus lies in correctly selecting the columns in
X_S, which are highly correlated with the first few columns in X_{S^c}.
5.2  Piecewise constant spectra

For notational clarity we will now focus on a special case of the above model. To begin, we develop
some notation. In previous sections we used U_S to denote a basis for the column space of X_S. We
will continue to use this notation, and let U_S contain the first k columns of U. Accordingly, we
denote the last n − k columns of U by U_{S^c}. We let the diagonal elements of Σ_S, Σ̄_S, Σ_{S^c}, Σ̄_{S^c}
be identified by their column indices. That is, the diagonal entries σ_{S,c} of Σ_S and σ̄_{S,c} of Σ̄_S
are indexed by c ∈ {1, …, k}; the diagonal entries σ_{S^c,c} of Σ_{S^c} and σ̄_{S^c,c} of Σ̄_{S^c} are indexed
by c ∈ {1, …, n}. Each of the diagonal entries in Σ_S, Σ_{S^c} is associated with a column of U.
The set of diagonal entries of Σ_S and Σ_{S^c} associated with U_S is π(S) = {1, …, k} and the set
of diagonal entries in Σ_{S^c} associated with U_{S^c} is π(S^c) = {1, …, n} \ π(S). We will construct
spectrum matrices Σ_S, Σ_{S^c} that are piecewise constant on their diagonals. For some η ≥ 0, we let
σ̄_{S,i} = 1, σ̄_{S^c,i} = η ∀i ∈ π(S) and σ̄_{S^c,j} = 1 ∀j ∈ π(S^c).
Consequences for JR. Recall that for JR, if X = U D V^T, then P_X = P_y = U (D D^T)^{−1/2} U^T.
We have the following theorem.

Theorem 3. Assume the Lasso problem was generated according to the generative model of
Eq. (19) with σ̄_{S,i} = 1, σ̄_{S^c,i} = η ∀i ∈ π(S), σ̄_{S^c,j} = 1 ∀j ∈ π(S^c), and that
η < √((n − k) / (k(p − k − 1))). Then the conditions of Lemma 1 hold before and after preconditioning using JR. Moreover,

λ̃_u / λ̃_l = ((p − k) / (n + pη² − k)) · (λ_u / λ_l).   (22)
In other words, JR deterministically scales the ratio of penalty parameter bounds. The proof idea
is as follows. It is easy to see that X_S^T X_S is always invertible. Furthermore, one can show that if
η < √((n − k) / (k(p − k − 1))), we have |μ_j| < 1 ∀j ∈ S^c and sgn(β*_i) ν_i > 0 ∀i ∈ S. Thus, by our
assumptions, the preconditions of Lemma 1 are satisfied for the original Lasso problem. Plugging
the definitions of Σ_S, Σ_{S^c} into Eq. (19) we find that the SVD becomes X = U D V^T, where U is the
same column basis as in Eq. (19), and the diagonal elements of D are determined by η. Substituting
this into the definitions of μ̃_j, ν̃_i, ζ̃_j, γ̃_i, we have that after preconditioning using JR

μ̃_j = μ_j,   ν̃_i = (n + n(p − k)η² / (kη² + n − k)) ν_i,   (23)

ζ̃_j = ((kη² + n − k) / (n(p − k))) ζ_j,   γ̃_i = γ_i.   (24)

Thus, if the conditions of Lemma 1 hold for X, β*, they will continue to hold after preconditioning using JR. Furthermore, notice that (2⟦μ̃_j > 0⟧ − 1) − μ̃_j = (2⟦μ_j > 0⟧ − 1) − μ_j. Applying
Lemma 1 then gives the new ratio λ̃_u/λ̃_l as claimed. According to Theorem 3 the ratio λ̃_u/λ̃_l will
be larger than λ_u/λ_l iff η < √(1 − (n/p)). Indeed, if η = √(1 − (n/p)) then P_X = P_y ∝ I_{n×n} and
so JR coincides with standard Lasso.
5.3  Extension to Gaussian ensembles

The construction in Eq. (19) uses an orthonormal matrix U as the column basis of X. At first
sight this may appear to be restrictive. However, as we show in the supplementary material, one
can construct Lasso problems using a Gaussian basis W^m which lead to penalty parameter bounds
ratios that converge in distribution to those of the Lasso problem in Eq. (19). For some fixed β*, V_S,
V_{S^c}, Σ_S and Σ_{S^c}, generate two independent problems: one using Eq. (19), and one according to

y^m = X^m β* + w^m  with  X^m = (1/√n) W^m [Σ_S V_S^T, Σ_{S^c} V_{S^c}^T],  w^m ∼ N(0, σ² (m/n) I_{m×m}),   (25)

where W^m is an m × n standard Gaussian ensemble. Note that an X so constructed is low rank if
n < p. The latter generative model bears some resemblance to Gaussian models considered in Paul
et al. [9] (Eq. (7)) and Jia and Rohe [6] (Proposition 2). Note that while the problem in Eq. (19) uses
n observations with noise variance σ², Eq. (25) has m observations with noise variance σ²m/n.
The increased variance is necessary because the matrix W^m has expected squared column length m,
while columns in U are of length 1. We will think of n as fixed and will let m → ∞. Let the penalty
parameter bounds ratio induced by the problem in Eq. (19) be λ_u/λ_l and that induced by Eq. (25)
be λ^m_u/λ^m_l. Then we have the following result.

Theorem 4. Let V_S, V_{S^c}, Σ_S, Σ_{S^c} and β* be fixed. If the conditions of Lemma 1 hold for X, β*,
then for m large enough they will hold for X^m, β*. Furthermore, as m → ∞,

λ^m_u / λ^m_l →_d λ_u / λ_l,   (26)

where the stochasticity on the left is due to W^m, w^m and on the right is due to w.

Thus, with respect to the bounds ratio λ_u/λ_l, the construction of Eq. (19) can be thought of as the
limiting construction of Gaussian Lasso problems in Eq. (25) for large m. As such, we believe
that Eq. (19) is a useful proxy for less restrictive generative models. Indeed, as the experiment in
Figure 2(b) shows, Theorem 3 can be used to predict the scaling factor for penalty parameter bounds
ratios (i.e., (λ̃_u/λ̃_l) / (λ_u/λ_l)) with good accuracy even for Gaussian ensembles.
6  Conclusions

This paper proposes a new framework for comparing Preconditioned Lasso algorithms to the standard Lasso which skirts the difficulty of choosing penalty parameters. By eliminating this parameter
from consideration, finite data comparisons can be greatly simplified, avoiding the use of model
selection strategies. To demonstrate the framework's usefulness, we applied it to a number of Preconditioned Lasso algorithms and in the process confirmed intuitions and revealed fragilities and
mitigation strategies. Additionally, we presented an SVD-based generative model for Lasso problems that can be thought of as the limit point of a less restrictive Gaussian model. We believe this
work to be a first step towards a comprehensive theory for evaluating and comparing Lasso-style
algorithms and believe that the strategy can be extended to comparing other penalized likelihood
methods on finite datasets.
References

[1] D.L. Donoho, M. Elad, and V.N. Temlyakov. Stable recovery of sparse overcomplete representations in the presence of noise. IEEE Transactions on Information Theory, 52(1):6-18, 2006.
[2] J. Fan and R. Li. Variable selection via nonconcave penalized likelihood and its oracle properties. Journal of the American Statistical Association, 96:1348-1360, 2001.
[3] J.J. Fuchs. Recovery of exact sparse representations in the presence of bounded noise. IEEE Transactions on Information Theory, 51(10):3601-3608, 2005.
[4] H.-C. Huang, N.-J. Hsu, D.M. Theobald, and F.J. Breidt. Spatial Lasso with applications to GIS model selection. Journal of Computational and Graphical Statistics, 19(4):963-983, 2010.
[5] J.C. Huang and N. Jojic. Variable selection through Correlation Sifting. In V. Bafna and S.C. Sahinalp, editors, RECOMB, volume 6577 of Lecture Notes in Computer Science, pages 106-123. Springer, 2011.
[6] J. Jia and K. Rohe. "Preconditioning" to comply with the irrepresentable condition. 2012.
[7] N. Meinshausen. Lasso with relaxation. Technical Report 129, Eidgenössische Technische Hochschule, Zürich, 2005.
[8] N. Meinshausen and P. Bühlmann. High-dimensional graphs and variable selection with the Lasso. Annals of Statistics, 34(3):1436-1462, 2006.
[9] D. Paul, E. Bair, T. Hastie, and R. Tibshirani. "Preconditioning" for feature selection and regression in high-dimensional problems. Annals of Statistics, 36(4):1595-1618, 2008.
[10] R. Tibshirani. Regression shrinkage and selection via the Lasso. Journal of the Royal Statistical Society, Series B, 58(1):267-288, 1994.
[11] R.J. Tibshirani. The solution path of the Generalized Lasso. Stanford University, 2011.
[12] M.J. Wainwright. Sharp thresholds for high-dimensional and noisy sparsity recovery using ℓ1-constrained quadratic programming (Lasso). IEEE Transactions on Information Theory, 55(5):2183-2202, 2009.
[13] P. Zhao and B. Yu. On model selection consistency of Lasso. Journal of Machine Learning Research, 7:2541-2563, 2006.
[14] H. Zou. The Adaptive Lasso and its oracle properties. Journal of the American Statistical Association, 101:1418-1429, 2006.
[15] H. Zou and T. Hastie. Regularization and variable selection via the Elastic Net. Journal of the Royal Statistical Society, Series B, 67:301-320, 2005.
9
| 5104 |@word briefly:2 eliminating:1 seems:2 norm:2 underline:1 suitably:2 crucially:1 initial:1 series:2 selecting:1 elliptical:1 com:1 comparing:9 recovered:1 yet:1 must:1 readily:2 wanted:1 plot:2 v:16 nebojsa:1 generative:10 instantiate:1 accordingly:1 core:2 underestimating:1 mitigation:1 constructed:2 direct:2 ik:1 prove:1 consists:1 introduce:1 theoretically:1 indeed:3 expected:2 behavior:1 growing:1 decomposed:1 inappropriate:1 cardinality:1 param:1 deem:2 considering:1 discover:1 ua:48 xx:1 project:1 agnostic:1 notation:4 estimating:2 provided:3 moreover:1 argmin:2 bounded:1 substantially:1 finding:2 transformation:1 pseudo:1 berkeley:2 shed:1 uk:1 appear:1 before:4 positive:2 limit:3 consequence:5 mainstay:1 oxford:1 incoherence:1 path:2 becoming:1 approximately:1 signed:26 might:1 emphasis:1 meinshausen:3 suggests:1 misaligned:2 limited:1 range:1 averaged:1 practical:1 unique:1 practice:1 block:2 danger:1 empirical:7 submatrices:1 thought:5 inferential:1 matching:2 pre:3 induce:2 word:1 suggest:2 onto:2 irrepresentable:3 selection:14 cannot:4 put:1 context:2 impossible:2 applying:2 py:25 restriction:1 worn:1 urich:1 independently:1 truncating:1 recovery:25 stats:1 insight:2 estimator:3 orthonormal:4 updated:2 limiting:1 construction:8 suppose:7 annals:2 exact:1 programming:1 us:5 larly:1 element:3 modelselection:1 capture:1 precondition:2 hinders:1 trade:2 thermore:1 sifting:2 ran:1 intuition:7 complexity:2 covariates:1 depend:1 predictive:1 division:1 basis:11 preconditioning:13 various:3 derivation:2 effective:1 sc:2 choosing:4 ossische:1 supplementary:6 larger:2 elad:1 stanford:1 statistic:4 gi:1 think:1 highlighted:2 noisy:1 validates:1 final:2 seemingly:1 ip:2 confronted:1 advantage:2 net:1 propose:5 remainder:2 relevant:1 iff:3 forth:1 intuitive:1 validate:1 empty:1 comparative:5 derive:3 develop:2 ac:1 minor:1 eq:31 strong:1 auxiliary:2 c:1 recovering:2 indicate:2 predicted:2 met:3 differ:3 direction:1 come:1 correct:1 sgn:5 material:6 
disproportionately:1 proposition:1 im:1 extension:2 hold:13 around:2 considered:3 normal:1 mapping:2 predict:1 substituting:1 major:1 smallest:2 purpose:1 uhlmann:2 largest:3 sidestepped:1 gaussian:12 always:1 aim:1 modified:1 sight:1 hj:18 shrinkage:1 conjunction:1 derived:2 focus:6 flw:1 improvement:1 unsatisfactory:1 rank:3 notational:1 likelihood:2 greatly:1 contrast:1 dim:2 milder:1 dependent:1 unlikely:1 development:2 proposes:1 spatial:1 special:3 fairly:1 mutual:1 constrained:1 equal:1 construct:3 having:1 identical:2 yu:1 others:1 report:1 simplify:1 overestimating:2 few:1 piecewise:2 modern:1 comprehensive:1 geometry:2 consisting:1 microsoft:2 interest:1 highly:1 multiply:3 evaluation:4 analyzed:1 extreme:3 light:1 partial:1 necessary:2 respective:1 orthogonal:2 indexed:3 overcomplete:1 skirt:1 theoretical:4 minimal:2 complicates:1 increased:1 column:23 earlier:1 instance:13 disadvantage:1 ordinary:2 deviation:1 subset:1 entry:6 technische:1 predictor:1 usefulness:1 too:2 characterize:1 hochschule:1 connect:1 kn:1 synthetic:2 off:3 invertible:5 michael:1 again:1 squared:1 satisfied:2 choose:2 huang:5 reweights:1 stochastically:1 american:2 zhao:1 style:2 li:1 potential:1 summarized:1 satisfy:1 depends:5 later:2 view:1 analyze:3 red:1 wm:3 recover:4 decaying:1 complicated:1 jia:6 contribution:1 square:3 accuracy:1 variance:3 ensemble:4 yield:2 eters:1 critically:1 produced:1 multiplying:1 confirmed:1 begin:2 ping:1 maxc:1 suffers:1 definition:4 against:1 associated:4 proof:5 recovers:1 gain:1 hsu:1 recall:3 improves:1 organized:1 follow:1 improved:1 ox:1 though:2 generality:1 furthermore:4 xa:2 implicit:1 correlation:3 hand:5 sketch:2 reweighting:1 quality:1 resemblance:1 believe:3 verify:1 true:2 contain:1 regularization:2 equality:1 jojic:6 symmetric:1 during:2 coincides:1 generalized:1 outline:1 demonstrate:3 stiefel:1 wise:2 consideration:1 common:2 behaves:1 empirically:3 volume:1 discussed:1 association:2 approximates:1 crystallize:1 tuning:1 
consistency:3 resorting:1 similarly:1 stochasticity:3 stable:1 whitening:1 add:2 base:4 multivariate:1 recent:3 claimed:1 continue:6 inverted:1 additional:1 converge:1 signal:2 full:2 ing:1 technical:1 match:1 plugging:3 specializing:1 controlled:1 variant:1 regression:3 expectation:1 invert:1 preserved:1 want:1 singular:5 appropriately:1 extra:1 induced:4 tend:1 quan:1 legend:1 nonconcave:1 effectiveness:1 jordan:2 presence:2 ideal:1 revealed:1 easy:4 enough:1 variety:1 xj:7 hastie:2 lasso:79 identified:1 idea:3 whether:2 motivated:1 bair:1 utility:1 fuchs:1 penalty:25 remark:1 cornerstone:1 generally:1 useful:2 detailed:2 ignored:1 amount:2 recapitulates:1 simplest:1 generate:1 outperform:1 notice:2 estimated:13 certify:1 per:1 correctly:1 tibshirani:3 blue:2 write:1 threshold:1 clarity:1 graph:1 relaxation:1 year:1 run:1 inverse:1 place:1 throughout:2 reasonable:1 almost:1 parsimonious:1 scaling:2 submatrix:2 layer:2 bound:27 guaranteed:3 fan:1 quadratic:2 oracle:3 nontrivial:1 sharply:2 phrased:1 min:2 span:36 px:34 according:3 poor:2 jr:13 suppressed:1 modification:1 making:1 dv:4 invariant:1 intuitively:1 discus:1 gaussians:1 rewritten:1 apply:1 alternative:1 encounter:1 rp:2 original:5 denotes:2 running:2 top:1 ensure:1 graphical:1 hinge:1 restrictive:3 especially:1 build:1 society:2 unchanged:2 objective:2 strategy:6 degrades:1 dependence:1 traditional:4 diagonal:10 exhibit:1 detrimental:1 subspace:5 wauthier:1 manifold:1 preconditioned:25 assuming:2 length:2 index:2 ratio:12 decoy:1 bafna:1 difficult:2 unfortunately:1 potentially:1 statement:1 favorably:1 unknown:1 upper:4 observation:3 datasets:2 fabian:1 finite:4 immediate:2 situation:2 extended:1 rn:1 arbitrary:1 sharp:1 complement:1 california:1 established:1 redmond:1 proceeds:1 bar:1 sparsity:1 built:1 max:2 royal:2 wainwright:3 suitable:2 difficulty:4 natural:1 indicator:1 improve:2 concludes:1 prior:2 understanding:1 literature:1 comply:1 asymptotic:3 relative:7 loss:1 lecture:1 highlight:3 bear:1 
prototypical:1 limitation:1 recomb:1 validation:1 eidgen:1 consistent:1 proxy:1 dd:2 principle:1 becomes:1 editor:1 course:3 penalized:2 last:2 side:2 focussed:1 absolute:1 sparse:3 curve:3 dimension:3 evaluating:1 avoids:1 contour:2 author:1 commonly:1 made:2 forward:1 projected:1 simplified:1 adaptive:1 transaction:3 temlyakov:1 confirm:1 reveals:1 unnecessary:1 spectrum:6 decade:1 additionally:2 reasonably:1 fragility:2 elastic:1 zou:2 noise:10 paul:5 fair:1 fails:1 deterministically:2 lie:2 unfair:1 theorem:20 rk:1 rohe:6 specific:4 showing:1 x:36 multitude:1 dominates:1 concern:1 essential:1 izing:1 easier:1 led:2 expressed:1 temporarily:1 springer:1 goal:2 formulated:1 consequently:1 careful:1 towards:1 donoho:1 change:4 specifically:3 determined:1 uniformly:2 lemma:23 svd:8 experimental:1 select:1 support:36 latter:1 roadblock:1 avoiding:1 correlated:3 |
New Subsampling Algorithms for Fast Least Squares Regression
Paramveer S. Dhillon¹, Yichao Lu², Dean Foster², Lyle Ungar¹
¹Computer & Information Science, ²Statistics (Wharton School)
University of Pennsylvania, Philadelphia, PA, U.S.A.
{dhillon|ungar}@cis.upenn.edu, [email protected], [email protected]
Abstract
We address the problem of fast estimation of ordinary least squares (OLS) from large amounts of data (n ≫ p). We propose three methods which solve the big data problem by subsampling the covariance matrix using either a single or two stage estimation. All three run in time of the order of the size of the input, i.e. O(np), and our best method, Uluru, gives an error bound of O(√(p/n)) which is independent of the amount of subsampling as long as it is above a threshold. We provide theoretical bounds for our algorithms in the fixed design (with Randomized Hadamard preconditioning) as well as sub-Gaussian random design setting. We also compare the performance of our methods on synthetic and real-world datasets and show that if observations are i.i.d. sub-Gaussian, then one can directly subsample without the expensive Randomized Hadamard preconditioning and without loss of accuracy.
1 Introduction
Ordinary Least Squares (OLS) is one of the oldest and most widely studied statistical estimation methods, with its origins tracing back over two centuries. It is the workhorse of fields as diverse as Machine Learning, Statistics, Econometrics, Computational Biology and Physics. To keep pace with the growing amounts of data, ever faster ways of estimating OLS are sought. This paper focuses on the setting (n ≫ p), where n is the number of observations and p is the number of covariates or features, a common one for web scale data.
Numerous approaches to this problem have been proposed [1, 2, 3, 4, 5]. The predominant approach
to solving big data OLS estimation involves using some kind of random projections, for instance,
transforming the data with a randomized Hadamard transform [6] or Fourier transform and then
uniformly sampling observations from the resulting transformed matrix and estimating OLS on this
smaller data set. The intuition behind this approach is that these frequency domain transformations
uniformize the data and smear the signal across all the observations so that there are no longer
any high leverage points whose omission could unduly influence the parameter estimates. Hence,
a uniform sampling in this transformed space suffices. Another way of looking at this approach is
as preconditioning the design matrix with a carefully constructed data-independent random matrix
before subsampling. This approach has been used by a variety of papers proposing methods such as
the Subsampled Randomized Hadamard Transform (SRHT) [1, 4] and the Subsampled Randomized
Fourier Transform (SRFT) [2, 3]. There is also publicly available software which implements these
ideas [7]. It is worth noting that these approaches assume a fixed design setting.
Following this line of work, in this paper we provide two main contributions:
1. Novel Subsampling Algorithms for OLS: We propose three novel¹ algorithms for fast estimation of OLS which work by subsampling the covariance matrix. Some recent results in [8] allow us to bound the difference between the parameter vector (ŵ) we estimate from the subsampled data and the true underlying parameter (w0) which generates the data. We provide theoretical analysis of our algorithms in the fixed design (with Randomized Hadamard preconditioning) as well as sub-Gaussian random design setting. The error bound of our best algorithm, Uluru, is independent of the fraction of data subsampled (above a minimum threshold of subsampling) and depends only on the characteristics of the data/design matrix X.
2. Randomized Hadamard preconditioning not always needed: We show that the error
bounds for all the three algorithms are similar for both the fixed design and the subGaussian random design setting. In other words, one can either transform the data/design
matrix via Randomized Hadamard transform (fixed design setting) and then use any of our
three algorithms or, if the observations are i.i.d. and sub-Gaussian, one can directly use
any of our three algorithms. Thus, another contribution of this paper is to show that if
the observations are i.i.d. and sub-Gaussian then one does not need the slow Randomized
Hadamard preconditioning step and one can get similar accuracies much faster.
The remainder of the paper is organized as follows: In the next section, we formally define notation for the regression problem, then in Sections 3 and 4, we describe our algorithms and provide
theorems characterizing their performance. Finally, we compare the empirical performance of our
methods on synthetic and real world data.
2 Notation and Preliminaries
Let X be the n × p design matrix. For the random design case we assume the rows of X are n i.i.d. samples from the 1 × p independent variable (a.k.a. "covariates" or "predictors") X. Y is the real valued n × 1 response vector which contains n corresponding values of the dependent variable Y (in general we use bold letters for samples and normal letters for random variables or vectors). ε is the n × 1 homoskedastic noise vector with common variance σ². We want to infer w0, i.e. the p × 1 population parameter vector that generated the data.
More formally, we can write the true model as:

    Y = X w0 + ε,    ε ~iid N(0, σ²)
The sample solution to the above equation (in matrix notation) is given by ŵ_sample = (X^T X)^{-1} X^T Y, and by consistency of the OLS estimator we know that ŵ_sample →_d w0 as n → ∞. Classical algorithms to estimate ŵ_sample use QR decomposition or bidiagonalization [9] and they require O(np²) floating point operations.
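As a concrete illustration of this setup, here is a minimal NumPy sketch (our own, not from the paper's code) that generates data from the model above and computes the sample OLS solution in closed form:

```python
import numpy as np

# Synthetic instance of the model Y = X w0 + eps with homoskedastic
# Gaussian noise; sizes and sigma are illustrative choices.
rng = np.random.default_rng(0)
n, p, sigma = 2000, 5, 0.1

X = rng.standard_normal((n, p))           # random design, rows i.i.d.
w0 = rng.standard_normal(p)               # true population parameter
Y = X @ w0 + sigma * rng.standard_normal(n)

# Sample OLS solution: w_hat = (X^T X)^{-1} X^T Y
w_hat = np.linalg.solve(X.T @ X, X.T @ Y)

# Consistency: w_hat approaches w0 as n grows.
print(np.linalg.norm(w_hat - w0))
```

Computing `X.T @ X` explicitly costs the O(np²) mentioned above; the subsampling algorithms of Section 3 attack exactly this term.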
Since our algorithms are based on subsampling the covariance matrix, we need some extra notation. Let r = n_subs/n (< 1) be the subsampling ratio, giving the ratio of the number of observations (n_subs) in the subsampled matrix X_subs to the number of observations (n) in the original X matrix. I.e., r is the fraction of the observations sampled. Let X_rem, Y_rem denote the data and response vector for the remaining n − n_subs observations. In other words, X^T = [X_subs^T ; X_rem^T] and Y^T = [Y_subs^T ; Y_rem^T].
Also, let Σ_XX be the covariance of X and Σ_XY be the covariance between X and Y. Then, for the fixed design setting Σ_XX = X^T X/n and Σ_XY = X^T Y/n, and for the random design setting Σ_XX = E(X^T X) and Σ_XY = E(X^T Y).
The bounds presented in this paper are expressed in terms of the Mean Squared Error (or Risk) for the ℓ2 loss. For the fixed design setting,

    MSE = (w0 − ŵ)^T X^T X (w0 − ŵ)/n = (w0 − ŵ)^T Σ_XX (w0 − ŵ)

For the random design setting,

    MSE = E_X ||X w0 − X ŵ||² = (w0 − ŵ)^T Σ_XX (w0 − ŵ)
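The fixed-design risk above can be computed directly; the helper below is an illustrative sketch (the function name is ours):

```python
import numpy as np

def mse_fixed_design(X, w0, w_hat):
    """Fixed-design risk: (w0 - w_hat)^T (X^T X / n) (w0 - w_hat)."""
    n = X.shape[0]
    d = w0 - w_hat
    sigma_xx = X.T @ X / n          # empirical Sigma_XX
    return float(d @ sigma_xx @ d)

# With a whitened design (Sigma_XX = I_p) the risk reduces to ||w0 - w_hat||^2.
rng = np.random.default_rng(1)
X = rng.standard_normal((1000, 4))
print(mse_fixed_design(X, np.ones(4), np.zeros(4)))
```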
¹One of our algorithms (FS) is similar to [4], as we describe in Related Work. However, even for that algorithm, our theoretical analysis is novel.
2.1 Design Matrix and Preconditioning
Thus far, we have not made any assumptions about the design matrix X. In fact, our algorithms and
analysis work for both fixed design and random design settings.
As mentioned earlier, our algorithms involve subsampling the observations, so we have to ensure
that we do not leave behind any observations which are outliers/high leverage points; this is done
differently for fixed and random designs. For the fixed design setting the design matrix X is arbitrary
and may contain high leverage points. Therefore before subsampling we precondition the matrix by
a Randomized Hadamard/Fourier Transform [1, 4] and after conditioning, the probability of having
high leverage points in the new design matrix becomes very small. On the other hand, if we assume
X to be random design and its rows are i.i.d. draws from some nice distribution like sub-Gaussian,
then the probability of having high leverage points is very small and we can happily subsample X
without preconditioning.
In this paper we analyze both the fixed as well as sub-Gaussian random design settings. Since the
fixed design analysis would involve transforming the design matrix with a preconditioner before
subsampling, some background on SRHT is warranted.
Subsampled Randomized Hadamard Transform (SRHT): In the fixed design setting we precondition and subsample the data with an n_subs × n randomized Hadamard transform matrix Θ (= √(n/n_subs) RHD), applied as Θ·X.
The matrices R, H, and D are defined as:
? R ∈ R^{n_subs × n} is a set of n_subs rows from the n × n identity matrix, where the rows are chosen uniformly at random without replacement.
? D ∈ R^{n × n} is a random diagonal matrix whose entries are independent random signs, i.e. random variables uniformly distributed on {±1}.
? H ∈ R^{n × n} is a normalized Walsh-Hadamard matrix, defined recursively as:

    H_n = [ H_{n/2}   H_{n/2} ;  H_{n/2}  −H_{n/2} ],    with  H_2 = [ +1  +1 ;  +1  −1 ].

  H = (1/√n) H_n is a rescaled version of H_n.
It is worth noting that HD is the preconditioning matrix and R is the subsampling matrix.
The running time of SRHT is O(n p log(p)) floating point operations (FLOPS) [4]. [4] mention fixing n_subs = O(p). However, in our experiments we vary the amount of subsampling, which is not something recommended by their theory. With varying subsampling, the run time becomes O(n p log(n_subs)).
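For concreteness, a naive sketch of the SRHT is given below. It materializes the full Walsh-Hadamard matrix for clarity (and therefore assumes the number of rows n is a power of two); practical implementations use an O(n log n) fast transform instead, and the function name here is ours:

```python
import numpy as np

def srht(X, n_subs, rng):
    """Apply Theta = sqrt(n/n_subs) R H D to X (naive, illustrative sketch)."""
    n = X.shape[0]
    # D: random diagonal matrix of signs.
    d = rng.choice([-1.0, 1.0], size=n)
    # H: normalized Walsh-Hadamard matrix via the recursion
    # H_n = [[H_{n/2}, H_{n/2}], [H_{n/2}, -H_{n/2}]], starting from [1].
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    H /= np.sqrt(n)                      # now H is orthonormal
    # R: sample n_subs rows uniformly at random without replacement.
    rows = rng.choice(n, size=n_subs, replace=False)
    HDX = H @ (d[:, None] * X)           # precondition: smear leverage
    return np.sqrt(n / n_subs) * HDX[rows]

rng = np.random.default_rng(0)
X = rng.standard_normal((256, 8))
Xs = srht(X, 64, rng)
print(Xs.shape)  # (64, 8)
```

Because H is orthonormal and D² = I, the preconditioning step HD leaves X^T X unchanged exactly; only the subsequent row sampling introduces error.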
3 Three subsampling algorithms for fast linear regression
All our algorithms subsample the X matrix followed by a single or two stage fitting, and are described below. The algorithms given below are for the random design setting. The algorithms for the fixed design are exactly the same as below, except that X_subs, Y_subs are replaced by Θ·X, Θ·Y and X_rem, Y_rem by Θ_rem·X, Θ_rem·Y, where Θ is the SRHT matrix defined in the previous section and Θ_rem is the same as Θ, except that R is of size n_rem × n. Still, for the sake of completeness, the algorithms are described in detail in the Supplementary material.
Full Subsampling (FS): Full subsampling provides a baseline for comparison. In it we simply r-subsample (X, Y) as (X_subs, Y_subs) and use the subsampled data to estimate both the Σ_XX and Σ_XY covariance matrices.
Covariance Subsampling (CovS): In Covariance Subsampling we r-subsample X as X_subs only to estimate the Σ_XX covariance matrix; we use all the n observations to compute the Σ_XY covariance matrix.
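A minimal sketch of these two estimators (our own naming, random-design setting with no preconditioning):

```python
import numpy as np

def fs_estimate(X, Y, r, rng):
    """Full Subsampling: estimate both covariances from the same r-subsample."""
    n = X.shape[0]
    idx = rng.choice(n, size=int(r * n), replace=False)
    Xs, Ys = X[idx], Y[idx]
    return np.linalg.solve(Xs.T @ Xs, Xs.T @ Ys)

def covs_estimate(X, Y, r, rng):
    """Covariance Subsampling: subsample only for Sigma_XX; all data for Sigma_XY."""
    n = X.shape[0]
    idx = rng.choice(n, size=int(r * n), replace=False)
    Xs = X[idx]
    # (Xs^T Xs / n_subs)^{-1} (X^T Y / n): subsampled Gram, full cross-covariance.
    return np.linalg.solve(Xs.T @ Xs / (r * n), X.T @ Y / n)
```

Note CovS still touches every row once (for X^T Y), which is why its run time below keeps an O(np) term.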
Uluru: Uluru² is a two stage fitting algorithm. In the first stage it uses the r-subsampled (X, Y) to get an initial estimate of ŵ (i.e., ŵ_FS) via the Full Subsampling (FS) algorithm. In the second stage it uses the remaining data (X_rem, Y_rem) to estimate the bias of the first stage estimator, w_correct = w0 − ŵ_FS. The final estimate (ŵ_Uluru) is taken to be a weighted combination (generally just the sum) of the FS estimator and the second stage estimator (ŵ_correct). Uluru is described in Algorithm 1.
In the second stage, since ŵ_FS is known, on the remaining data we have Y_rem = X_rem w0 + ε_rem, hence

    R_rem = Y_rem − X_rem · ŵ_FS = X_rem (w0 − ŵ_FS) + ε_rem

The above formula shows we can estimate w_correct = w0 − ŵ_FS with another regression, i.e. ŵ_correct = (X_rem^T X_rem)^{-1} X_rem^T R_rem. Since computing X_rem^T X_rem takes too many FLOPS, we use X_subs^T X_subs instead (which has already been computed). Finally, we combine ŵ_FS and ŵ_correct to get ŵ_Uluru. The estimate ŵ_correct can be seen as an almost unbiased estimate of the error w0 − ŵ_FS, so we correct almost all the error, hence reducing the bias.
Input: X, Y, r
Output: ŵ
    ŵ_FS = (X_subs^T X_subs)^{-1} X_subs^T Y_subs ;
    R_rem = Y_rem − X_rem · ŵ_FS ;
    ŵ_correct = (n_subs / n_rem) · (X_subs^T X_subs)^{-1} X_rem^T R_rem ;
    ŵ_Uluru = ŵ_FS + ŵ_correct ;
    return ŵ = ŵ_Uluru ;
Algorithm 1: Uluru Algorithm
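A direct NumPy translation of Algorithm 1 might look as follows (the data splitting and variable names are ours):

```python
import numpy as np

def uluru(X, Y, r, rng):
    """Two-stage Uluru estimator: FS first stage + bias correction on the rest."""
    n = X.shape[0]
    n_subs = int(r * n)
    perm = rng.permutation(n)
    sub, rem = perm[:n_subs], perm[n_subs:]
    Xs, Ys = X[sub], Y[sub]
    Xr, Yr = X[rem], Y[rem]

    gram_inv = np.linalg.inv(Xs.T @ Xs)       # (X_subs^T X_subs)^{-1}, reused below
    w_fs = gram_inv @ (Xs.T @ Ys)             # first-stage FS estimate
    R_rem = Yr - Xr @ w_fs                    # residuals on the held-out data
    # Reuse the subsampled Gram matrix (rescaled by n_subs/n_rem) in place of
    # (X_rem^T X_rem)^{-1}, as in the algorithm above.
    w_correct = (n_subs / (n - n_subs)) * gram_inv @ (Xr.T @ R_rem)
    return w_fs + w_correct
```

Reusing `gram_inv` is what keeps the second stage down to an O(n p) pass over the remaining rows.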
4 Theory
In this section we provide the theoretical guarantees of the three algorithms we discussed in the previous sections, in the fixed as well as random design setting. All the theorems assume the OLS setting described in Section 2. Without loss of generality we assume that X is whitened, i.e. Σ_XX = I_p (see Supplementary Material for justification). For both cases we bound the square root of the Mean Squared Error, which becomes ||w0 − ŵ||, as described in Section 2.
4.1 Fixed Design Setting
Here we assume preconditioning and subsampling with SRHT as described in previous sections. (Please see the Supplementary Material for all the Proofs.)

Theorem 1 Assume X is n × p and X^T X = n · I_p. Let Y = X w0 + ε, where ε is an n × 1 i.i.d. Gaussian noise vector with standard deviation σ. If we use algorithm FS, then with failure probability at most 2 n e^{−p} + 2δ,

    ||w0 − ŵ_FS|| ≤ C σ √( p ln(nr + 1/δ) / (nr) )    (1)

Theorem 2 Assuming our data comes from the same model as in Theorem 1 and we use CovS, then with failure probability at most 3δ + 3 n e^{−p},

    ||w0 − ŵ_CovS|| ≤ (1 − r) [ C1 √(ln(2p/δ)) √( p/(nr) ) + C2 √(ln(2p/δ)) √( p/(n(1 − r)) ) ] ||w0|| + C3 σ √( p ln(n + 1/δ) / n )    (2)

Theorem 3 Assuming our data comes from the same model as in Theorem 1 and we use Uluru, then with failure probability at most 5δ + 5 n e^{−p},

    ||w0 − ŵ_Uluru|| ≤ σ √( p ln(nr + 1/δ) / (nr) ) [ C1 √(ln(2p/δ)) √( p/(nr) ) + C2 √(ln(2p/δ)) √( p/(n(1 − r)) ) ] + C3 σ √( p ln(n(1 − r) + 1/δ) / (n(1 − r)) )
Remark 1 The probability n e^{−p} becomes really small for large p, hence it can be ignored, and the ln terms can be viewed as constants. Let us consider the case n_subs ≪ n_rem, since only in this situation does subsampling reduce computational cost significantly. Then, keeping only the dominating terms, the results of the above three theorems can be summarized as: with some failure probability less than some fixed number, the error of the FS algorithm is O(σ √(p/(nr))), the error of the CovS algorithm is O(√(p/(nr)) ||w|| + σ √(p/n)), and the error of the Uluru algorithm is O(σ p/(nr) + σ √(p/n)).
4.2 Sub-Gaussian Random Design Setting
4.2.1 Definitions
The following two definitions from [10] characterize what it means to be sub-gaussian.

Definition 1 A random variable X is sub-gaussian with sub-gaussian norm ||X||_ψ2 if and only if

    (E|X|^p)^{1/p} ≤ ||X||_ψ2 √p    for all p ≥ 1    (3)

Here ||X||_ψ2 is the minimal constant for which the above condition holds.

Definition 2 A random vector X ∈ R^n is sub-gaussian if the one dimensional marginals x^T X are sub-gaussian for all x ∈ R^n. The sub-gaussian norm of the random vector X is defined as

    ||X||_ψ2 = sup_{||x||_2 = 1} ||x^T X||_ψ2    (4)
Remark 2 Since the sum of two sub-gaussian variables is sub-gaussian, it is easy to conclude that a random vector X = (X1, ..., Xp)^T is a sub-gaussian random vector when the components X1, ..., Xp are sub-gaussian variables.
4.2.2 Sub-Gaussian Bounds
Under the assumption that the rows of the design matrix X are i.i.d. draws from a p dimensional sub-Gaussian random vector X with Σ_XX = I_p, we have the following bounds (please see the Supplementary Material for all the Proofs):

Theorem 4 If we use the FS algorithm, then with failure probability at most δ,

    ||w0 − ŵ_FS|| ≤ C σ √( p ln(2p/δ) / (nr) )    (5)

Theorem 5 If we use the CovS algorithm, then with failure probability at most δ,

    ||w0 − ŵ_CovS|| ≤ (1 − r) [ C1 √( p/(nr) ) + C2 √( p/(n(1 − r)) ) ] ||w0|| + C3 σ √( p ln(2(p + 2)/δ) / n )    (6)

Theorem 6 If we use Uluru, then with failure probability at most δ,

    ||w0 − ŵ_Uluru|| ≤ C1 σ √( p ln(2(2p + 2)/δ) / (nr) ) [ C2 √( p/(nr) ) + C3 √( p/((1 − r) n) ) ] + C4 σ √( p ln(2(2p + 2)/δ) / ((1 − r) n) )
Remark 3 Here also, the ln terms can be viewed as constants. Consider the case r ≪ 1, since this is the only case where subsampling reduces computational cost significantly. Keeping only the dominating terms, the results of the above three theorems can be summarized as: with failure probability less than some fixed number, the error of the FS algorithm is O(σ √(p/(nr))), the error of the CovS algorithm is O(√(p/(nr)) ||w|| + σ √(p/n)), and the error of the Uluru algorithm is O(σ p/(nr) + σ √(p/n)). These errors are exactly the same as in the fixed design case.
4.3 Discussion
We can make a few salient observations from the error expressions for the algorithms presented in Remarks 1 & 3.
The second term in the error of the Uluru algorithm does not contain r at all. If it is the dominating term, which is the case if

    r > O( √(p/n) )    (7)

then the error of Uluru is approximately O(σ √(p/n)), which is completely independent of r. Thus, if r is not too small (i.e., when Eq. 7 holds), the error bound for Uluru is not a function of r. In other words, when Eq. 7 holds, we do not increase the error by using less data in estimating the covariance matrix in Uluru. The FS algorithm does not have this property, since its error is proportional to 1/√r.
Similarly, for the CovS algorithm, when

    r > O( ||w0||² / σ² )    (8)

the second term dominates and we can conclude that the error does not change with r. However, Eq. 8 depends on how large the standard deviation σ of the noise is. We can assume ||w0||² = O(p) since w0 is p dimensional. Hence if σ ≤ O(√p), Eq. 8 fails since it implies r > O(1), and the error bound of the CovS algorithm increases with r.
To sum this up, Uluru has the nice property that its error bound does not increase as r gets smaller, as long as r is greater than a threshold. This threshold is completely independent of how noisy the data is and only depends on the characteristics of the design/data matrix (n, p).
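The two thresholds, with constants dropped, are simple to compute; the helper names below are ours:

```python
import numpy as np

def uluru_threshold(n, p):
    """Uluru's error bound stops depending on r once r exceeds ~sqrt(p/n) (Eq. 7)."""
    return np.sqrt(p / n)

def covs_threshold(w0_norm_sq, sigma):
    """CovS's threshold ~||w0||^2 / sigma^2 (Eq. 8): it grows as the noise shrinks."""
    return w0_norm_sq / sigma**2

# With n = p^3 the Uluru threshold equals 1/p, so r = O(1/p) suffices.
p = 10
print(uluru_threshold(p**3, p))  # 0.1 = 1/p
```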
4.4 Run Time Complexity
Table 1 summarizes the run time complexity and theoretically predicted error bounds for all the
methods. We use these theoretical run times (FLOPS) in our plots.
5 Experiments
In this section we elucidate the relative merits of our methods by comparing their empirical performance on both synthetic and real world datasets.
5.1 Methodology
We can compare our algorithms by allowing them each to have about O(np) CPU time (ignoring the log factors). This is of the same order as the time it takes to read the data. Our target accuracy is √(p/n), namely what a full least squares algorithm would achieve. We will assume n ≫ p.
    Method        Running Time O(FLOPS)                      Error bound
    OLS           O(n p²)                                    O(√(p/n))
    FS            O(n_subs p²)                               O(√(p/n_subs))
    CovS          O(n_subs p² + n p)                         *
    Uluru         O(n_subs p² + n p)                         O(√(p/n))
    SRHT-FS       O(max(n p log(p), n_subs p²))              O(√(p²/n))
    SRHT-CovS     O(max(n p log(p), n_subs p² + n p))        *
    SRHT-Uluru    O(max(n p log(p), n_subs p² + n p))        O(√(p/n))

Table 1: Run time complexity. n_subs is the number of observations in the subsample, n is the number of observations, and p is the number of predictors. * indicates that no uniform error bounds are known.
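Dropping constants, the FLOP counts in Table 1 for the non-preconditioned methods can be sketched as:

```python
# Illustrative FLOP-count formulas matching Table 1 (constants dropped).
def flops(method, n, p, n_subs):
    costs = {
        "OLS":   n * p**2,
        "FS":    n_subs * p**2,
        "CovS":  n_subs * p**2 + n * p,   # subsampled Gram + one full pass
        "Uluru": n_subs * p**2 + n * p,   # same budget as CovS
    }
    return costs[method]

# With n_subs = n/p, every subsampling method runs in O(n p) time,
# i.e. the time it takes to read the data.
n, p = 10**6, 100
print(flops("Uluru", n, p, n // p))  # 2*n*p = 200000000
```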
The subsample size, n_subs, for FS should be O(n/p) to keep the CPU time O(np), which leads to an accuracy of √(p²/n). For the CovS method, the accuracy depends on how noisy our data is (i.e. how big σ is). When σ is large, it performs as well as √(p/n), which is the same as full least squares. When σ is small, it performs as poorly as √(p²/n). For Uluru, to keep the CPU time O(np), n_subs should be O(n/p), or equivalently r = O(1/p). As stated in the discussion after the theorems, when r ≥ O(√(p/n)) (in this set up we want r = O(1/p), which implies n ≥ O(p³)), Uluru has error bound O(√(p/n)) no matter what signal to noise ratio the problem has.
5.2 Synthetic Datasets
We generated synthetic data by distributing the signal uniformly across all the p singular values, picking the p singular values to be σ_i = 1/i², i = 1 : p, and further varying the amount of signal.
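One way to realize such a design matrix (a sketch under our own scaling assumptions; the paper's exact construction may differ):

```python
import numpy as np

def synthetic_design(n, p, rng):
    """Random n x p matrix whose p singular values are set to 1/i^2."""
    A = rng.standard_normal((n, p))
    U, _, Vt = np.linalg.svd(A, full_matrices=False)  # random singular vectors
    s = 1.0 / np.arange(1, p + 1) ** 2                # target spectrum 1/i^2
    return (U * s) @ Vt                               # swap in the new spectrum

rng = np.random.default_rng(0)
X = synthetic_design(4096, 8, rng)
print(np.linalg.svd(X, compute_uv=False))
```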
5.3 Real World Datasets
We also compared the performance of the algorithms on two UCI datasets³: CPUSMALL (n=8192, p=12) and CADATA (n=20640, p=8), and the PERMA sentiment analysis dataset described in [11] (n=1505, p=30), which uses LR-MVL word embeddings [12] as features.⁴
5.4 Results
The results for synthetic data are shown in Figure 1 (top row), and those for the real world datasets in Figure 1 (bottom row).
To generate the plots, we vary the amount of data used in the subsampling, nsubs , from 1.1p to n.
For FS, this simply means using a fraction of the data; for CovS and Uluru, only the data for the
covariance matrix is subsampled. We report the Mean Squared Error (MSE), which in the case of
squared loss is the same as the risk, as was described in Section 2. For the real datasets we do not
know the true population parameter, w0, so we replace it with its consistent estimator ŵ_MLE, which is computed using standard OLS on the entire dataset.
The horizontal gray line in the figures is the overfitting point; it is the error generated by a ŵ vector of all zeros. The vertical gray line is the np-FLOPS point; thus anything which is faster than that must look at only some of the data.
Looking at the results, we can see two trends for the synthetic data. Firstly, our algorithms with
no preconditioning are much faster than their counterparts with preconditioning and give similar
accuracies. Secondly, as we had expected, CovS performs best in the high noise setting, being slightly better than Uluru, and Uluru is significantly better in the low noise setting.
For real world datasets also, Uluru is almost always better than the other algorithms, both with and
without preconditioning. As earlier, the preconditioned alternatives are slower.
³ http://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/regression.html
⁴ We also compared our approaches against coordinate ascent methods from [13] and our algorithms outperform them. Due to paucity of space we relegated that comparison to the supplementary material.
[Figure 1: six panels plotting MSE/Risk against #FLOPS/(n·p).]
Figure 1: Results for synthetic datasets (n=4096, p=8) in the top row and for the real datasets (PERMA, CPUSMALL, CADATA, left to right) in the bottom row. The three columns in the top row have different amounts of signal: 2, √(n/p) and n/p respectively. In all settings, we varied the amount of subsampling from 1.1p to n in multiples of 2. Color scheme: + (Green)-FS, + (Blue)-CovS, + (Red)-Uluru. The solid lines indicate no preconditioning (i.e. random design) and dashed lines indicate fixed design with Randomized Hadamard preconditioning. The FLOPS reported are the theoretical values (see Supp. material); the actual values were noisy due to varying load settings on CPUs.
6 Related Work
The work that comes closest to ours is the set of approaches which precondition the matrix by either the Subsampled Randomized Hadamard Transform (SRHT) [1, 4] or the Subsampled Randomized Fourier Transform (SRFT) [2, 3], before subsampling uniformly from the resulting transformed matrix.
However, this line of work differs from ours in several ways. They carry out their analysis in a mathematical set up, i.e. solving an overdetermined linear system (ŵ = arg min_{w ∈ R^p} ||Xw − Y||_2), while we are working in a statistical set up (a regression problem Y = Xβ + ε), which leads to a different error analysis.
Our FS algorithm is essentially the same as the subsampling algorithm proposed by [4]. However, our theoretical analysis of it is novel, and furthermore they only consider it in the fixed design setting with Hadamard preconditioning.
CovS and Uluru are entirely new algorithms and, as we have seen, differ from FS in a key sense, namely that CovS and Uluru make use of all the data, while FS uses only a small proportion of it.
7 Conclusion
In this paper we proposed three subsampling methods for faster least squares regression. All three run in O(size of input) = O(np) time. Our best method, Uluru, gives an error bound which is independent of the amount of subsampling as long as it is above a threshold.
Furthermore, we argued that for problems arising from linear regression, the Randomized Hadamard transformation is often not needed. In linear regression, observations are generally i.i.d. If one further assumes that they are sub-Gaussian (perhaps as a result of a preprocessing step, or simply because they are 0/1 or Gaussian), then subsampling methods without a Randomized Hadamard transformation suffice. As shown in our experiments, dropping the Randomized Hadamard transformation significantly speeds up the algorithms and, in i.i.d. sub-Gaussian settings, does so without loss of accuracy.
References
[1] Boutsidis, C., Gittens, A.: Improved matrix algorithms via the subsampled randomized Hadamard transform. CoRR abs/1204.0062 (2012)
[2] Tygert, M.: A fast algorithm for computing minimal-norm solutions to underdetermined systems of linear equations. CoRR abs/0905.4745 (2009)
[3] Rokhlin, V., Tygert, M.: A fast randomized algorithm for overdetermined linear least-squares regression. Proceedings of the National Academy of Sciences 105(36) (September 2008) 13212-13217
[4] Drineas, P., Mahoney, M.W., Muthukrishnan, S., Sarlós, T.: Faster least squares approximation. CoRR abs/0710.1435 (2007)
[5] Mahoney, M.W.: Randomized algorithms for matrices and data. (April 2011)
[6] Ailon, N., Chazelle, B.: Approximate nearest neighbors and the fast Johnson-Lindenstrauss transform. In: STOC. (2006) 557-563
[7] Avron, H., Maymounkov, P., Toledo, S.: Blendenpik: Supercharging LAPACK's least-squares solver. SIAM J. Sci. Comput. 32(3) (April 2010) 1217-1236
[8] Vershynin, R.: How close is the sample covariance matrix to the actual covariance matrix? Journal of Theoretical Probability 25(3) (September 2012) 655-686
[9] Golub, G.H., Van Loan, C.F.: Matrix Computations (Johns Hopkins Studies in Mathematical Sciences). 3rd edn. The Johns Hopkins University Press (October 1996)
[10] Vershynin, R.: Introduction to the non-asymptotic analysis of random matrices. CoRR abs/1011.3027 (2010)
[11] Dhillon, P.S., Rodu, J., Foster, D., Ungar, L.: Two step CCA: A new spectral method for estimating vector models of words. In: Proceedings of the 29th International Conference on Machine Learning. ICML'12 (2012)
[12] Dhillon, P.S., Foster, D., Ungar, L.: Multi-view learning of word embeddings via CCA. In: Advances in Neural Information Processing Systems (NIPS). Volume 24. (2011)
[13] Shalev-Shwartz, S., Zhang, T.: Stochastic dual coordinate ascent methods for regularized loss minimization. CoRR abs/1209.1873 (2012)
Faster Ridge Regression via the Subsampled Randomized Hadamard Transform

Yichao Lu¹  Paramveer S. Dhillon²  Dean Foster¹  Lyle Ungar²
¹ Statistics (Wharton School), ² Computer & Information Science
University of Pennsylvania, Philadelphia, PA, U.S.A
{dhillon|ungar}@cis.upenn.edu, [email protected], [email protected]
Abstract
We propose a fast algorithm for ridge regression when the number of features is
much larger than the number of observations (p ≫ n). The standard way to solve
ridge regression in this setting works in the dual space and gives a running time
of O(n²p). Our algorithm, Subsampled Randomized Hadamard Transform-Dual
Ridge Regression (SRHT-DRR), runs in time O(np log(n)) and works by preconditioning the design matrix with a Randomized Walsh-Hadamard Transform and a
subsequent subsampling of features. We provide risk bounds for our SRHT-DRR
algorithm in the fixed design setting and show experimental results on synthetic
and real datasets.
1 Introduction
Ridge Regression, which penalizes the ℓ₂ norm of the weight vector and shrinks it towards zero, is
the most widely used penalized regression method. It is of particular interest in the p > n case (p is
the number of features and n is the number of observations), as the standard ordinary least squares
regression (OLS) breaks in this setting. This setting is even more relevant in today's age of "Big
Data", where it is common to have p ≫ n. Thus efficient algorithms to solve ridge regression are
highly desirable.

The current method of choice for efficiently solving RR is [19], which works in the dual space
and has a running time of O(n²p), which can be slow for huge p. As the runtime suggests, the
bottleneck is the computation of XXᵀ, where X is the design matrix. An obvious way to speed
up the algorithm is to subsample the columns of X. For example, suppose X has rank k; if we
randomly subsample p_subs of the p (k < p_subs ≪ p) features, then the matrix multiplication can be
performed in O(n²p_subs) time, which is very fast! However, this speed-up comes with a big caveat.
If all the signal in the problem were carried in just one of the p features, and if we missed this
feature while sampling, we would miss all the signal.
A parallel and recently popular line of research for solving large scale regression involves using
some kind of random projections, for instance, transforming the data with a Randomized Hadamard
transform [1] or Fourier transform and then uniformly sampling observations from the resulting
transformed matrix and estimating OLS on this smaller data set. The intuition behind this approach
is that these frequency domain transformations uniformize the data and smear the signal across all
the observations so that there are no longer any high leverage points whose omission could unduly
influence the parameter estimates. Hence, a uniform sampling in this transformed space suffices.
This approach can also be viewed as preconditioning the design matrix with a carefully constructed
data-independent random matrix. This transformation followed by subsampling has been used in a
variety of variations, including the Subsampled Randomized Hadamard Transform (SRHT) [4, 6] and
the Subsampled Randomized Fourier Transform (SRFT) [22, 17].
In this paper, we build on the above line of research and provide a fast algorithm for ridge regression
(RR) which applies a Randomized Hadamard transform to the columns of the X matrix and then
samples psubs = O(n) columns. This allows the bottleneck matrix multiplication in the dual RR to
be computed in O(np log(n)) time, so we call our algorithm Subsampled Randomized Hadamard
Transform-Dual Ridge Regression (SRHT-DRR).
In addition to being computationally efficient, we q
also prove that in the fixed design setting SRHT-
DRR only increases the risk by a factor of (1 + C
w.r.t. the true RR solution.
1.1
k
psubs )
(where k is the rank of the data matrix)
Related Work
Using randomized algorithms to handle large matrices is an active area of research, and has been
used in a variety of setups. Most of these algorithms involve a step that randomly projects the
original large matrix down to lower dimensions [9, 16, 8]. [14] uses a matrix of i.i.d. Gaussian
elements to construct a preconditioner for least squares which makes the problem well conditioned.
However, computing a random projection is still expensive as it requires multiplying a huge data
matrix by another random dense matrix. [18] introduced the idea of using structured random
projections for making matrix multiplication substantially faster.

Recently, several randomized algorithms have been developed for kernel approximation. [3] provided
a fast method for low rank kernel approximation by randomly selecting q samples to construct
a rank q approximation of the original kernel matrix. Their approximation can reduce the cost to
O(nq²). [15] introduced a random sampling scheme to approximate symmetric kernels and [12]
accelerates [15] by applying the Walsh-Hadamard transform. Although our paper and these papers can
all be understood from a kernel approximation point of view, we are working in the p ≫ n ≫ 1
case while they focus on large n.

Also, it is worth distinguishing our setup from standard kernel learning. Kernel methods enable
learning models to take into account a much richer feature space than the original space and at the
same time compute the inner product in these high dimensional spaces efficiently. In our p ≫ n ≫ 1
setup, we already have a rich enough feature space and it suffices to consider the linear kernel
XXᵀ¹. Therefore, in this paper we propose a randomized scheme to reduce the dimension of X
and accelerate the computation of XXᵀ.
2 Faster Ridge Regression via SRHT

In this section we first review the traditional solution of solving RR in the dual and its
computational cost. Then we introduce our algorithm SRHT-DRR for faster estimation of RR.
2.1 Ridge Regression
Let X be the n × p design matrix containing n i.i.d. samples from the p dimensional independent
variable (a.k.a. "covariates" or "predictors") X such that p ≫ n. Y is the real valued n × 1
response vector which contains the n corresponding values of the dependent variable Y. ε is the
n × 1 homoskedastic noise vector with common variance σ². Let β̂_λ be the solution of the RR
problem, i.e.

    β̂_λ = argmin_{β ∈ R^p} (1/n)‖Y − Xβ‖² + λ‖β‖²    (1)

The solution to Equation (1) is β̂_λ = (XᵀX + nλI_p)⁻¹XᵀY. The step that dominates the
computational cost is the matrix inversion, which takes O(p³) flops and will be extremely slow when
p ≫ n ≫ 1. A straightforward improvement to this is to solve Equation (1) in the dual space.
By the change of variables β = Xᵀα, where α ∈ R^{n×1}, and further letting K = XXᵀ, the
optimization problem becomes

    α̂_λ = argmin_{α ∈ R^n} (1/n)‖Y − Kα‖² + λαᵀKα    (2)

and the solution is α̂_λ = (K + nλI_n)⁻¹Y, which directly gives β̂_λ = Xᵀα̂_λ. Please see [19]
for a detailed derivation of this dual solution. In the p ≫ n case the step that dominates the
computational cost in the dual solution is computing the linear kernel matrix K = XXᵀ, which takes
O(n²p) flops. This is regarded as the computational cost of the true RR solution in our setup.

¹ For this reason, it is standard in natural language processing applications to just use linear kernels.
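The primal-dual equivalence underlying this section can be checked numerically. A minimal sketch (our own code, with made-up problem sizes, not the authors' implementation):

```python
import numpy as np

def ridge_primal(X, Y, lam):
    # beta = (X^T X + n*lam*I_p)^{-1} X^T Y : O(p^3), slow when p >> n
    n, p = X.shape
    return np.linalg.solve(X.T @ X + n * lam * np.eye(p), X.T @ Y)

def ridge_dual(X, Y, lam):
    # alpha = (K + n*lam*I_n)^{-1} Y with K = X X^T, then beta = X^T alpha
    n, p = X.shape
    K = X @ X.T                                   # O(n^2 p): the dominating step
    alpha = np.linalg.solve(K + n * lam * np.eye(n), Y)
    return X.T @ alpha

rng = np.random.default_rng(0)
X = rng.standard_normal((20, 200))                # n = 20 observations, p = 200 features
Y = rng.standard_normal(20)
b1, b2 = ridge_primal(X, Y, 0.1), ridge_dual(X, Y, 0.1)
print(np.allclose(b1, b2))                        # True: the two solutions agree
```

The agreement follows from the matrix identity (XᵀX + cI_p)⁻¹Xᵀ = Xᵀ(XXᵀ + cI_n)⁻¹; the dual route only ever inverts an n × n matrix.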
Since our algorithm SRHT-DRR uses the Subsampled Randomized Hadamard Transform (SRHT),
some introduction to SRHT is warranted.

2.2 Definition and Properties of SRHT
Following [20], for p = 2^q where q is any positive integer, an SRHT can be defined as a
p_subs × p (p > p_subs) matrix of the form:

    Θ = √(p/p_subs) · R H D

where

- R is a random p_subs × p matrix whose rows are p_subs uniform samples (without
  replacement) from the standard basis of R^p.
- H ∈ R^{p×p} is a normalized Walsh-Hadamard matrix. The Walsh-Hadamard matrix of size
  p × p is defined recursively:

      H_p = [ H_{p/2}  H_{p/2} ]          H_2 = [ +1  +1 ]
            [ H_{p/2} -H_{p/2} ]   with         [ +1  -1 ]

  H = (1/√p) H_p is a rescaled version of H_p.
- D is a p × p diagonal matrix whose diagonal elements are i.i.d. Rademacher random
  variables.
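The recursive structure of H is what makes Θ cheap to apply. A minimal sketch (our own implementation, not the authors' code) that applies Θ to a vector via an in-place fast Walsh-Hadamard transform, checked against the dense definition:

```python
import numpy as np

def fwht(v):
    # Iterative fast Walsh-Hadamard transform: computes H_p v in O(p log p), p = 2^q.
    v = v.copy()
    h = 1
    while h < len(v):
        for i in range(0, len(v), 2 * h):
            a = v[i:i + h].copy()
            b = v[i + h:i + 2 * h].copy()
            v[i:i + h] = a + b          # top block:    H_{p/2} + H_{p/2}
            v[i + h:i + 2 * h] = a - b  # bottom block: H_{p/2} - H_{p/2}
        h *= 2
    return v

def srht_apply(x, rows, signs):
    # Theta x = sqrt(p / p_subs) * R H D x, with H = H_p / sqrt(p)
    p, p_subs = len(x), len(rows)
    y = fwht(signs * x) / np.sqrt(p)    # H D x, without forming H
    return np.sqrt(p / p_subs) * y[rows]

rng = np.random.default_rng(0)
p, p_subs = 8, 4
signs = rng.choice([-1.0, 1.0], size=p)           # diagonal of D
rows = rng.choice(p, size=p_subs, replace=False)  # rows selected by R
x = rng.standard_normal(p)

# Dense check: build H_p explicitly via the Sylvester recursion and compare.
H = np.array([[1.0]])
while H.shape[0] < p:
    H = np.block([[H, H], [H, -H]])
dense = np.sqrt(p / p_subs) * (H / np.sqrt(p))[rows] @ (signs * x)
print(np.allclose(srht_apply(x, rows, signs), dense))   # True
```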
There are two key features that make SRHT a nice candidate for accelerating RR when p ≫ n.
Firstly, due to the recursive structure of the H matrix, it takes only O(p log(p_subs)) FLOPS to
compute Θv, where v is a generic p × 1 dense vector, while for an arbitrary unstructured
p_subs × p dense matrix A, the cost of computing Av is O(p_subs · p) flops. Secondly, after
projecting any matrix W ∈ R^{p×k} with orthonormal columns down to low dimensions with SRHT, the
columns of ΘW ∈ R^{p_subs×k} are still about orthonormal. The following lemma characterizes this
property:
Lemma 1. Let W be a p × k (p > k) matrix where WᵀW = I_k. Let Θ be a p_subs × p SRHT
matrix where p > p_subs > k. Then with probability at least 1 − (δ + p/e^k),

    ‖(ΘW)ᵀ(ΘW) − I_k‖₂ ≤ √(c k log(2k/δ) / p_subs)    (3)
The bound is in terms of the spectral norm of the matrix. The proof of this lemma is in the Appendix.
The tools for the random matrix theory part of the proof come from [20] and [21]. [10] also provided
similar results.
2.3 The Algorithm
Our fast algorithm for SRHT-DRR is described below:

SRHT-DRR
Input: Dataset X ∈ R^{n×p}, response Y ∈ R^{n×1}, and subsampling size p_subs.
Output: The weight parameter β ∈ R^{p_subs×1}.
- Compute the SRHT of the data: X_H = XΘᵀ.
- Compute K_H = X_H X_Hᵀ.
- Compute α_{H,λ} = (K_H + nλI_n)⁻¹Y, which is the solution of Equation (2) obtained by
  replacing K with K_H.
- Compute β_{H,λ} = X_Hᵀ α_{H,λ}.
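The steps above can be sketched as follows (our own dense-Hadamard illustration of the algorithm, not an O(np log p_subs) implementation; variable names are ours):

```python
import numpy as np

def srht_drr(X, Y, lam, p_subs, rng):
    """Sketch of SRHT-DRR. A dense Hadamard matrix is built for clarity;
    a real implementation would apply Theta with a fast transform."""
    n, p = X.shape                      # assumes p is a power of 2
    H = np.array([[1.0]])
    while H.shape[0] < p:               # Sylvester recursion for H_p
        H = np.block([[H, H], [H, -H]])
    D = rng.choice([-1.0, 1.0], size=p)             # Rademacher diagonal
    rows = rng.choice(p, size=p_subs, replace=False)
    Theta = np.sqrt(p / p_subs) * (H / np.sqrt(p))[rows] * D  # p_subs x p
    XH = X @ Theta.T                    # preconditioned, subsampled features
    KH = XH @ XH.T                      # O(n^2 p_subs) instead of O(n^2 p)
    alpha = np.linalg.solve(KH + n * lam * np.eye(n), Y)
    return XH.T @ alpha                 # beta_{H,lam}, lives in p_subs dims

rng = np.random.default_rng(1)
n, p = 30, 256
X = rng.standard_normal((n, p))
Y = X @ rng.standard_normal(p) + 0.1 * rng.standard_normal(n)
beta_H = srht_drr(X, Y, lam=0.1, p_subs=64, rng=rng)
print(beta_H.shape)   # (64,)
```

Note that the returned weight vector has dimension p_subs, matching the algorithm's output β ∈ R^{p_subs×1}; predictions are made through X_H, not X.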
Since SRHT is only defined for p = 2^q for integer q, if the dimension p is not a power of 2,
we can concatenate a block of zeros to the feature matrix X to make the dimension a power of 2.

Remark 1. Let's look at the computational cost of SRHT-DRR. Computing X_H takes
O(np log(p_subs)) FLOPS [2, 6]. Once we have X_H, computing α_{H,λ} costs O(n²p_subs) FLOPS,
with the dominating step being computing K_H = X_H X_Hᵀ. So the computational cost for computing
β_{H,λ} is O(np log(p_subs) + n²p_subs), compared to the true RR which costs O(n²p). We will
discuss how large p_subs should be later, after stating the main theorem.
3 Theory
In this section we bound the risk of SRHT-DRR and compare it with the risk of the true dual ridge
estimator in the fixed design setting.

As earlier, let X be an arbitrary n × p design matrix such that p ≫ n. Also, we have Y = Xβ + ε,
where ε is the n × 1 homoskedastic noise vector with common mean 0 and variance σ². [5] and [3]
did similar analyses of the risk of RR under similar fixed design setups.
Firstly, we provide a corollary to Lemma 1 which will be helpful in the subsequent theory.

Corollary 1. Let k be the rank of X. With probability at least 1 − (δ + p/e^k),

    (1 − Δ)K ⪯ K_H ⪯ (1 + Δ)K    (4)

where Δ = C√(k log(2k/δ)/p_subs). (As for p.s.d. matrices, G ⪯ L means L − G is p.s.d.)

Proof. Let X = UDVᵀ be the SVD of X, where U ∈ R^{n×k} and V ∈ R^{p×k} have orthonormal
columns and D ∈ R^{k×k} is diagonal. Then K_H = UD(VᵀΘᵀΘV)DUᵀ. Lemma 1 directly implies
(1 − Δ)I_k ⪯ VᵀΘᵀΘV ⪯ (1 + Δ)I_k with probability at least 1 − (δ + p/e^k). Left multiplying
by UD and right multiplying by DUᵀ in the above inequality completes the proof.
3.1 Risk Function for Ridge Regression

Let Z = E(Y) = Xβ. The risk for any prediction Ŷ ∈ R^{n×1} is (1/n)E‖Ŷ − Z‖².
For any n × n positive symmetric definite matrix M, define the following risk function:

    R(M) = (σ²/n) Tr[M²(M + nλI_n)⁻²] + nλ² Zᵀ(M + nλI_n)⁻² Z    (5)
Lemma 2. Under the fixed design setting, the risk of the true RR solution is R(K) and the risk of
SRHT-DRR is R(K_H).
Proof. The risk of the SRHT-DRR estimator is

    (1/n) E‖K_H α_{H,λ} − Z‖²
      = (1/n) E‖K_H(K_H + nλI_n)⁻¹Y − Z‖²
      = (1/n) E‖K_H(K_H + nλI_n)⁻¹Y − E(K_H(K_H + nλI_n)⁻¹Y)‖²
          + (1/n) ‖E(K_H(K_H + nλI_n)⁻¹Y) − Z‖²
      = (1/n) E‖K_H(K_H + nλI_n)⁻¹ε‖²
          + (1/n) ‖K_H(K_H + nλI_n)⁻¹Z − Z‖²
      = (1/n) Tr[K_H²(K_H + nλI_n)⁻² E(εεᵀ)]
          + (1/n) Zᵀ(I_n − K_H(K_H + nλI_n)⁻¹)² Z
      = (σ²/n) Tr[K_H²(K_H + nλI_n)⁻²]
          + nλ² Zᵀ(K_H + nλI_n)⁻² Z    (6)

Note that the expectation here is only over the random noise ε and is conditional on the
Randomized Hadamard Transform. The calculation is the same for the ordinary estimator. In the risk
function, the first term is the variance and the second term is the bias.
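As a sanity check, the closed-form risk in Eq. (5) can be compared against a Monte-Carlo estimate over the noise. A minimal sketch (problem sizes, seed, and trial count are our own choices):

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, lam, sigma = 15, 40, 0.5, 1.0
X = rng.standard_normal((n, p))
Z = X @ rng.standard_normal(p)            # Z = E(Y) = X beta
K = X @ X.T
Minv = np.linalg.inv(K + n * lam * np.eye(n))
A = K @ Minv                              # hat matrix of the dual ridge fit

# Closed-form risk R(K) from Eq. (5): variance term + bias term
R = sigma**2 / n * np.trace(K @ K @ Minv @ Minv) \
    + n * lam**2 * Z @ Minv @ Minv @ Z

# Monte-Carlo risk of the predictor Y_hat = K (K + n*lam*I)^(-1) Y
trials = 5000
mc = np.mean([
    np.sum((A @ (Z + sigma * rng.standard_normal(n)) - Z) ** 2) / n
    for _ in range(trials)
])
print(abs(mc - R) / R < 0.05)             # the two estimates agree closely
```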
3.2 Risk Inflation Bound
The following theorem bounds the risk inflation of SRHT-DRR compared with the true RR solution.

Theorem 1. Let k be the rank of the X matrix. With probability at least 1 − (δ + p/e^k),

    R(K_H) ≤ (1 − Δ)⁻² R(K)    (7)

where Δ = C√(k log(2k/δ)/p_subs).
Proof. Define

    B(M) = nλ² Zᵀ(M + nλI_n)⁻² Z
    V(M) = (σ²/n) Tr[M²(M + nλI_n)⁻²]

for any p.s.d. matrix M ∈ R^{n×n}. Therefore, R(M) = V(M) + B(M). Now, due to [3] we know
that B(M) is non-increasing in M and V(M) is non-decreasing in M. When Equation (4) holds,

    R(K_H) = V(K_H) + B(K_H)
           ≤ V((1 + Δ)K) + B((1 − Δ)K)
           ≤ (1 + Δ)² V(K) + (1 − Δ)⁻² B(K)
           ≤ (1 − Δ)⁻² (V(K) + B(K))
           = (1 − Δ)⁻² R(K)
Remark 2. Theorem 1 gives us an idea of how large p_subs should be. Assuming Δ (the risk inflation
ratio) is fixed, we get p_subs = C k log(2k/δ)/Δ² = O(k). If we further assume that X is full rank,
i.e. k = n, then it suffices to choose p_subs = O(n). Combining this with Remark 1, we can see
that the cost of computing X_H is O(np log(n)). Hence, under the ideal setup where p is huge, so
that the dominating step of SRHT-DRR is computing X_H, the computational cost of SRHT-DRR is
O(np log(n)) FLOPS.
Comparison with PCA. Another way to handle high dimensional features is to use PCA and run
regression only on the top few principal components (this procedure is called PCR), as illustrated by
[13] and many other papers. RR falls in the family of "shrinkage" estimators as it shrinks the weight
parameter towards zero. On the other hand, PCA is a "keep-or-kill" estimator as it kills components
with smaller eigenvalues. Recently, [5] have shown that the risks of PCR and RR are related and that
the risk of PCR is bounded by four times the risk of RR. However, we believe that PCR and RR are
parallel approaches and one can be better than the other depending on the structure of the
problem, so it is hard to compare SRHT-DRR with PCR theoretically.

Moreover, PCA under our p ≫ n ≫ 1 setup is itself a non-trivial problem, both statistically and
computationally. Firstly, in the p ≫ n case we do not have enough samples to estimate the huge
p × p covariance matrix, so the eigenvectors of the sample covariance matrix obtained by
PCA may be very different from the truth. (See [11] for a theoretical study of the consistency of the
principal directions in the high p, low n case.) Secondly, PCA requires one to compute an SVD of
the X matrix, which is extremely slow when p ≫ n ≫ 1. An alternative is to use a randomized
algorithm such as [16] or [9] to compute PCA. Again, whether randomized PCA is better than our
SRHT-DRR algorithm depends on the problem. With that in mind, we compare SRHT-DRR against
standard as well as randomized PCA in our experiments section; we find that SRHT-DRR beats
both of them in speed as well as accuracy.
4 Experiments
In this section we show experimental results on synthetic as well as real-world data, highlighting
the merits of SRHT, namely, lower computational cost compared to the true Ridge Regression (RR)
solution without any significant loss of accuracy. We also compare our approach against "standard"
PCA as well as randomized PCA [16].

In all our experiments, we choose the regularization constant λ via cross-validation on the training
set. As far as PCA algorithms are concerned, we implemented standard PCA using the built-in SVD
function in MATLAB, and for randomized PCA we used the block power iteration approach
proposed by [16]. We always achieved convergence in three power iterations of randomized PCA.
4.1 Measures of Performance
Since we know the true β which generated the synthetic data, we report the MSE/Risk in the fixed
design setting (they are equivalent for squared loss) as a measure of accuracy. It is computed as
‖Ŷ − Xβ‖², where Ŷ is the prediction corresponding to the different methods being compared. For
real-world data we report the classification error on the test set.

In order to compare the computational cost of SRHT-DRR with true RR, we need to estimate the
number of FLOPS used by them. As reported by other papers, e.g. [4, 6], the theoretical cost of
applying the Randomized Hadamard Transform is O(np log(p_subs)). However, the MATLAB
implementation we used took about np log(p) FLOPS to compute X_H. So, for SRHT-DRR, the total
computational cost is np log(p) for getting X_H and a further 2n²p_subs FLOPS to compute K_H. As
mentioned earlier, the true dual RR solution takes ≈ 2n²p FLOPS. So, in our experiments, we report
the relative computational cost, which is computed as the ratio of the two:

    Relative Computational Cost = (np log(p) + 2n²p_subs) / (2n²p)

4.2 Synthetic Data
We generated synthetic data with p = 8192 and varied the number of observations n = 20, 100, 200.
We generated an n × n matrix R ~ MVN(0, I), where MVN(μ, Σ) is the multivariate normal
distribution with mean vector μ and variance-covariance matrix Σ, and β_j ~ N(0, 1) ∀j = 1, ..., p.
The final X matrix was generated by rotating R with a randomly generated n × p rotation matrix.
Finally, we generated the Ys as Y = Xβ + ε, where ε_i ~ N(0, 1) ∀i = 1, ..., n.
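One way to reproduce this generation process is sketched below. The paper does not specify how the random rotation was generated; using the QR factorization of a p × n Gaussian matrix to obtain a random n-dimensional orthonormal frame in R^p is our own choice (p is reduced here for speed):

```python
import numpy as np

def make_synthetic(n, p, rng):
    # Rotate an n x n Gaussian block into p dimensions, as in Section 4.2.
    R = rng.standard_normal((n, n))                   # R ~ MVN(0, I), n x n
    Q, _ = np.linalg.qr(rng.standard_normal((p, n)))  # random orthonormal p x n frame
    X = R @ Q.T                                       # n x p design matrix, rank n
    beta = rng.standard_normal(p)                     # beta_j ~ N(0, 1)
    Y = X @ beta + rng.standard_normal(n)             # eps_i ~ N(0, 1)
    return X, Y, beta

rng = np.random.default_rng(0)
X, Y, beta = make_synthetic(20, 512, rng)
print(X.shape, np.linalg.matrix_rank(X))   # (20, 512) 20
```

The rotation leaves X with rank n, which is the regime where subsampling down to p_subs = O(n) columns is enough by Remark 2.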
[Figure 1: three panels of boxplots plotting MSE/Risk against Relative Computational Cost,
comparing SRHT-DRR with the true RR solution, PCA, and randomized PCA.]
Figure 1: Left to right, n = 20, 100, 200. The boxplots show the median error rates for SRHT-DRR
for different p_subs. The solid red line is the median error rate for the true RR using all the
features. The green line is the median error rate for PCR when PCA is computed by SVD in MATLAB.
The black dashed line is the median error rate for PCR when PCA is computed by randomized PCA.
For PCA and randomized PCA, we tried keeping r PCs in the range 10 to n and finally chose the
value of r which gave the minimum error on the training set. We tried 10 different values for
p_subs, from n + 10 to 2000. All the results were averaged over 50 random trials.

The results are shown in Figure 1. There are two main things worth noticing. Firstly, in all the
cases, SRHT-DRR gets very close in accuracy to the true RR with only ≈ 30% of its computational
cost. SRHT-DRR also costs much fewer FLOPS than randomized PCA in our experiments. Secondly,
as we mentioned earlier, RR and PCA are parallel approaches; either one might be better than the
other depending on the structure of the problem. As can be seen, for our data, RR approaches are
always better than PCA based approaches. We hypothesize that PCA might perform better relative
to RR for larger n.
4.3 Real-world Data
We took the UCI ARCENE dataset, which has 200 samples with 10000 features, as our real-world
dataset. ARCENE is a binary classification dataset which consists of 88 cancer individuals and
112 healthy individuals (see [7] for more details about this dataset). We split the dataset into 100
training and 100 testing samples and repeated this procedure 50 times (so n = 100, p = 10000 for
this dataset). For PCA and randomized PCA, we tried keeping r = 10, 20, 30, 40, 50, 60, 70, 80, 90
PCs and finally chose the value of r which gave the minimum error on the training set (r = 30). As
earlier, we tried 10 different values for p_subs: 150, 250, 400, 600, 800, 1000, 1200, 1600, 2000,
2500. Standard PCA is known to be slow for datasets of this size, so the comparison with it is just
for accuracy. Randomized PCA is fast but less accurate than standard ("true") PCA; its
computational cost for r = 30 can be approximately calculated as about 240np (see [9] for details),
which in this case is roughly the same as computing XXᵀ (≈ 2n²p).

The results are shown in Figure 2. As can be seen, SRHT-DRR comes very close in accuracy
to the true RR solution with just ≈ 30% of its computational cost. SRHT-DRR beats PCA and
randomized PCA even more comprehensively, achieving the same or better accuracy at just ≈ 18%
of their computational cost.
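The relative-cost ratio from Section 4.1 can be evaluated for this regime. A small sketch (we assume base-2 logs and that p = 10000 is zero-padded to the next power of 2, 2^14 = 16384; the paper states neither choice explicitly, so the printed values need not match its reported costs exactly):

```python
import numpy as np

def relative_cost(n, p, p_subs):
    # (n p log2(p) + 2 n^2 p_subs) / (2 n^2 p): SRHT-DRR FLOPS over true dual RR
    return (n * p * np.log2(p) + 2 * n**2 * p_subs) / (2 * n**2 * p)

# ARCENE-like regime: n = 100, p padded to 2**14, p_subs in {150, 1000, 2500}
for p_subs in (150, 1000, 2500):
    print(round(relative_cost(100, 2**14, p_subs), 3))
# prints 0.079, 0.131, 0.223
```

The ratio grows linearly in p_subs once the fixed np log(p) transform cost is paid, which is why the experiments sweep p_subs to trace out the accuracy/cost trade-off.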
5 Conclusion
In this paper we proposed a fast algorithm, SRHT-DRR, for ridge regression in the p ≫ n ≫ 1
setting. SRHT-DRR preconditions the design matrix by a Randomized Walsh-Hadamard Transform
with a subsequent subsampling of features. In addition to being significantly faster than the true
dual ridge regression solution, SRHT-DRR only inflates the risk w.r.t. the true solution by a small
amount. Experiments on both synthetic and real data show that SRHT-DRR gives significant
speedups with only a small loss of accuracy. We believe similar techniques can be developed for
other statistical methods such as logistic regression.
[Figure 2: boxplots plotting Classification Error against Relative Computational Cost,
comparing SRHT-DRR with the true RR solution, PCA, and randomized PCA.]
Figure 2: The boxplots show the median error rates for SRHT-DRR for different p_subs. The solid
red line is the median error rate for the true RR using all the features. The green line is the
median error rate for PCR with the top 30 PCs when PCA is computed by SVD in MATLAB. The black
dashed line is the median error rate for PCR with the top 30 PCs computed by randomized PCA.
References

[1] Nir Ailon and Bernard Chazelle. Approximate nearest neighbors and the fast Johnson-Lindenstrauss transform. In STOC, pages 557-563, 2006.
[2] Nir Ailon and Edo Liberty. Fast dimension reduction using Rademacher series on dual BCH codes. Technical report, 2007.
[3] Francis Bach. Sharp analysis of low-rank kernel matrix approximations. CoRR, abs/1208.2015, 2012.
[4] Christos Boutsidis and Alex Gittens. Improved matrix algorithms via the subsampled randomized Hadamard transform. CoRR, abs/1204.0062, 2012.
[5] Paramveer S. Dhillon, Dean P. Foster, Sham M. Kakade, and Lyle H. Ungar. A risk comparison of ordinary least squares vs ridge regression. Journal of Machine Learning Research, 14:1505-1511, 2013.
[6] Petros Drineas, Michael W. Mahoney, S. Muthukrishnan, and Tamás Sarlós. Faster least squares approximation. CoRR, abs/0710.1435, 2007.
[7] Isabelle Guyon. Design of experiments for the NIPS 2003 variable selection benchmark. 2003.
[8] N. Halko, P. G. Martinsson, and J. A. Tropp. Finding structure with randomness: Probabilistic algorithms for constructing approximate matrix decompositions. SIAM Rev., 53(2):217-288, May 2011.
[9] Nathan Halko, Per-Gunnar Martinsson, Yoel Shkolnisky, and Mark Tygert. An algorithm for the principal component analysis of large data sets. SIAM J. Scientific Computing, 33(5):2580-2594, 2011.
[10] Daniel Hsu, Sham M. Kakade, and Tong Zhang. Analysis of a randomized approximation scheme for matrix multiplication. CoRR, abs/1211.5414, 2012.
[11] S. Jung and J.S. Marron. PCA consistency in high dimension, low sample size context. Annals of Statistics, 37:4104-4130, 2009.
[12] Quoc Le, Tamas Sarlos, and Alex Smola. Fastfood: approximating kernel expansions in loglinear time. ICML, 2013.
[13] W.F. Massy. Principal components regression in exploratory statistical research. Journal of the American Statistical Association, 60:234-256, 1965.
[14] Xiangrui Meng, Michael A. Saunders, and Michael W. Mahoney. LSRN: A parallel iterative solver for strongly over- or under-determined systems. CoRR, abs/1109.5981, 2011.
[15] Ali Rahimi and Ben Recht. Random features for large-scale kernel machines. In Neural Information Processing Systems, 2007.
[16] Vladimir Rokhlin, Arthur Szlam, and Mark Tygert. A randomized algorithm for principal component analysis. SIAM J. Matrix Analysis Applications, 31(3):1100-1124, 2009.
[17] Vladimir Rokhlin and Mark Tygert. A fast randomized algorithm for overdetermined linear least-squares regression. Proceedings of the National Academy of Sciences, 105(36):13212-13217, September 2008.
[18] Tamas Sarlos. Improved approximation algorithms for large matrices via random projections. In Proc. 47th Annu. IEEE Sympos. Found. Comput. Sci, pages 143-152. IEEE Computer Society, 2006.
[19] G. Saunders, A. Gammerman, and V. Vovk. Ridge regression learning algorithm in dual variables. In Proc. 15th International Conf. on Machine Learning, pages 515-521. Morgan Kaufmann, San Francisco, CA, 1998.
[20] Joel A. Tropp. Improved analysis of the subsampled randomized Hadamard transform. CoRR, abs/1011.1595, 2010.
[21] Joel A. Tropp. User-friendly tail bounds for sums of random matrices. Foundations of Computational Mathematics, 12(4):389-434, 2012.
[22] Mark Tygert. A fast algorithm for computing minimal-norm solutions to underdetermined systems of linear equations. CoRR, abs/0905.4745, 2009.
9
| 5106 |@word trial:1 version:1 inversion:1 norm:3 tried:4 covariance:3 decomposition:1 tr:4 solid:2 recursively:1 reduction:1 contains:1 series:1 selecting:1 daniel:1 current:1 chazelle:1 concatenate:1 subsequent:3 hypothesize:1 v:1 fewer:1 nq:1 caveat:1 firstly:5 zhang:1 constructed:1 ik:4 prove:1 consists:1 introduce:1 theoretically:1 upenn:3 roughly:1 udv:1 decreasing:1 solver:1 increasing:1 becomes:1 provided:2 xx:6 estimating:1 project:1 bounded:1 moreover:1 kind:1 substantially:1 massy:1 developed:2 finding:1 transformation:2 friendly:1 runtime:1 k2:7 szlam:1 positive:2 understood:1 meng:1 approximately:1 black:2 chose:2 might:2 suggests:1 walsh:5 range:1 statistically:1 averaged:1 testing:1 lyle:2 recursive:1 block:2 definite:1 procedure:2 area:1 significantly:1 projection:4 get:2 close:2 selection:1 arcene:2 risk:24 influence:1 applying:2 context:1 equivalent:1 dean:2 sarlos:2 ke:1 unstructured:1 m2:1 estimator:5 regarded:1 orthonormal:3 srht:42 handle:2 variation:1 exploratory:1 annals:1 today:1 suppose:1 user:1 us:2 distinguishing:1 overdetermined:1 pa:1 element:2 expensive:1 precondition:1 rescaled:1 mentioned:2 intuition:1 transforming:1 covariates:1 solving:3 ali:1 basis:1 preconditioning:2 drineas:1 accelerate:1 muthukrishnan:1 derivation:1 fast:11 sympos:1 sarl:1 saunders:2 whose:1 richer:1 larger:2 solve:3 widely:1 valued:1 dominating:2 statistic:2 transform:17 itself:1 ip:1 final:1 rr:31 eigenvalue:1 took:2 propose:2 product:1 relevant:1 hadamard:16 combining:1 uci:1 academy:1 kh:27 ky:4 getting:1 convergence:1 rademacher:2 ben:1 depending:2 stating:1 nearest:1 school:1 sa:1 implemented:1 involves:1 come:3 implies:1 direction:1 liberty:1 enable:1 ungar:2 suffices:3 secondly:3 underdetermined:1 hold:1 inflation:3 normal:1 k2h:3 bch:1 estimation:1 proc:2 healthy:1 tool:1 gaussian:1 always:2 shrinkage:1 corollary:2 focus:1 improvement:1 rank:8 helpful:1 dependent:1 rhd:1 transformed:2 arg:2 dual:14 classification:3 yoel:1 wharton:2 construct:2 once:1 
sampling:4 look:1 icml:1 np:10 report:4 few:1 randomly:4 inflates:1 national:1 individual:2 shkolnisky:1 subsampled:8 replacement:1 n1:1 ab:7 interest:1 huge:4 highly:1 multiply:2 joel:2 mahoney:2 pc:4 behind:1 accurate:1 arthur:1 penalizes:1 rotating:1 theoretical:2 minimal:1 instance:1 column:6 earlier:4 ordinary:3 cost:27 uniform:2 predictor:1 reported:1 marron:1 synthetic:6 recht:1 international:1 randomized:37 siam:3 probabilistic:1 michael:3 again:1 squared:1 containing:1 choose:2 conf:1 tam:1 ek:1 american:1 account:1 depends:1 performed:1 break:1 view:1 later:1 characterizes:1 red:2 francis:1 parallel:4 square:5 accuracy:8 variance:4 kaufmann:1 efficiently:2 multiplying:1 worth:2 epk:3 straight:1 randomness:1 edo:1 homoskedastic:2 definition:1 against:2 boutsidis:1 frequency:1 obvious:1 johnsonlindenstrauss:1 proof:6 petros:1 hsu:1 dataset:7 popular:1 carefully:1 response:2 improved:3 shrink:2 strongly:1 just:5 smola:1 preconditioner:1 working:1 hand:1 tropp:3 replacing:1 logistic:1 scientific:1 believe:2 normalized:1 true:21 tamas:2 hence:2 regularization:1 symmetric:2 dhillon:2 paramveer:2 illustrated:1 drr:31 please:1 smear:1 ridge:17 complete:1 recently:3 ols:2 common:3 rotation:1 association:1 martinsson:2 tail:1 significant:2 isabelle:1 consistency:2 mathematics:1 hp:7 language:1 longer:1 tygert:4 multivariate:1 inequality:1 binary:1 seen:2 minimum:2 morgan:1 ud:2 dashed:2 signal:3 full:1 desirable:1 sham:2 rahimi:1 technical:1 faster:6 calculation:1 cross:1 bach:1 y:1 prediction:2 regression:23 expectation:1 iteration:2 kernel:13 achieved:1 addition:2 median:8 thing:1 call:1 integer:2 leverage:1 ideal:1 split:1 enough:2 concerned:1 variety:2 gave:2 pennsylvania:1 reduce:2 idea:2 inner:1 bottleneck:2 whether:1 pca:42 accelerating:1 remark:3 matlab:4 detailed:1 involve:1 eigenvectors:1 maybe:1 amount:1 per:1 kill:2 gammerman:1 key:1 four:1 gunnar:1 achieving:1 boxplots:2 sum:1 run:2 noticing:1 family:1 guyon:1 missed:1 p3:1 appendix:1 accelerates:1 
bound:6 followed:1 alex:2 fourier:2 speed:4 nathan:1 min:2 extremely:2 structured:1 ailon:2 smaller:2 across:1 gittens:1 kakade:2 rev:1 making:1 quoc:1 projecting:1 computationally:2 equation:5 discus:1 know:2 letting:1 mind:1 merit:1 zk2:5 generic:1 spectral:1 shrt:1 alternative:1 rp:2 original:3 top:3 running:2 subsampling:4 build:1 approximating:1 society:1 already:1 traditional:1 diagonal:3 loglinear:1 september:1 sci:1 trivial:1 reason:1 assuming:1 code:1 ratio:2 vladimir:2 setup:7 stoc:1 design:13 implementation:1 perform:1 av:1 observation:5 datasets:2 benchmark:1 beat:2 flop:11 varied:1 omission:1 arbitrary:2 sharp:1 introduced:2 namely:1 unduly:1 nip:1 below:1 built:1 including:1 pcr:9 green:2 power:4 xiangrui:1 natural:1 scheme:3 carried:1 philadelphia:1 nir:2 mvn:1 review:1 nice:1 multiplication:4 relative:7 loss:3 srft:1 age:1 validation:1 h2:1 foundation:1 foster:2 row:1 cancer:1 penalized:1 jung:1 keeping:2 bias:1 fall:1 comprehensively:1 neighbor:1 dimension:7 calculated:1 world:4 rich:1 forward:1 san:1 far:1 approximate:3 keep:1 active:1 francisco:1 iterative:1 ca:1 du:2 warranted:1 mse:4 expansion:1 constructing:1 domain:1 did:1 dense:3 main:2 fastfood:1 big:2 subsample:2 noise:3 n2:12 repeated:1 slow:4 tong:1 christos:1 xh:9 comput:1 candidate:1 down:2 theorem:4 annu:1 dominates:2 corr:7 ci:1 conditioned:1 halko:2 yichao:1 highlighting:1 applies:1 truth:1 conditional:1 viewed:1 towards:2 change:1 hard:1 determined:1 uniformly:1 vovk:1 miss:1 lemma:6 principal:5 called:1 total:1 bernard:1 experimental:2 svd:5 rokhlin:2 mark:4 yichaolu:1 lu1:1 infomration:1 |
Sequential Transfer in Multi-armed Bandit
with Finite Set of Models
Mohammad Gheshlaghi Azar (School of Computer Science, CMU)
Alessandro Lazaric (Team SequeL, INRIA Lille - Nord Europe)
Emma Brunskill (School of Computer Science, CMU)
Abstract
Learning from prior tasks and transferring that experience to improve future performance is critical for building lifelong learning agents. Although results in supervised and reinforcement learning show that transfer may significantly improve
the learning performance, most of the literature on transfer is focused on batch
learning tasks. In this paper we study the problem of sequential transfer in online
learning, notably in the multi-armed bandit framework, where the objective is to
minimize the total regret over a sequence of tasks by transferring knowledge from
prior tasks. We introduce a novel bandit algorithm based on a method-of-moments
approach for estimating the possible tasks and derive regret bounds for it.
1 Introduction
Learning from prior tasks and transferring that experience to improve future performance is a key
aspect of intelligence, and is critical for building lifelong learning agents. Recently, multi-task
and transfer learning received much attention in the supervised and reinforcement learning (RL)
setting with both empirical and theoretical encouraging results (see recent surveys by Pan and Yang,
2010; Lazaric, 2011). Most of these works focused on scenarios where the tasks are batch learning
problems, in which a training set is directly provided to the learner. On the other hand, the online
learning setting (Cesa-Bianchi and Lugosi, 2006), where the learner is presented with samples in
a sequential fashion, has been rarely considered (see Mann and Choe (2012); Taylor (2009) for
examples in RL and Sec. E of Azar et al. (2013) for a discussion on related settings).
The multi-armed bandit (MAB) (Robbins, 1952) is a simple yet powerful framework formalizing
the online learning with partial feedback problem, which encompasses a large number of applications, such as clinical trials, web advertisements and adaptive routing. In this paper we take a step
towards understanding and providing formal bounds on transfer in stochastic MABs. We focus on a
sequential transfer scenario where an (online) learner is acting in a series of tasks drawn from a stationary distribution over a finite set of MABs. The learning problem, within each task, can be seen
as a standard MAB problem with a fixed number of steps. Prior to learning, the model parameters
of each bandit problem are not known to the learner, nor does it know the distribution probability
over the bandit problems. Also, we assume that the learner is not provided with the identity of the
tasks throughout the learning. To act efficiently in this setting, it is crucial to define a mechanism
for transferring knowledge across tasks. In fact, the learner may encounter the same bandit problem over and over throughout the learning, and an efficient algorithm should be able to leverage
the knowledge obtained in previous tasks, when it is presented with the same problem again. To
address this problem one can transfer the estimates of all the possible models from prior tasks to
the current one. Once these models are accurately estimated, we show that an extension of the UCB
algorithm (Auer et al., 2002) is able to efficiently exploit this prior knowledge and reduce the regret
through tasks (Sec. 3).
{mazar,ebrun}@cs.cmu.edu
[email protected]
The main contributions of this paper are two-fold: (i) we introduce the tUCB algorithm which transfers the model estimates across the tasks and uses this knowledge to achieve a better performance
than UCB. We also prove that the new algorithm is guaranteed to perform as well as UCB in early
episodes, thus avoiding any negative transfer effect, and then to approach the performance of the
ideal case when the models are all known in advance (Sec. 4.4). (ii) To estimate the models we rely
on a new variant of method of moments, robust tensor power method (RTP) (Anandkumar et al.,
2013, 2012b) and extend it to the multi-task bandit setting¹: we prove that RTP provides a consistent
estimate of the means of all arms (for all models) as long as they are pulled at least three times
per task and prove sample complexity bounds for it (Sec. 4.2). Finally, we report some preliminary
results on synthetic data confirming the theoretical findings (Sec. 5). An extended version of this
paper containing proofs and additional comments is available in (Azar et al., 2013).
2 Preliminaries
We consider a stochastic MAB problem defined by a set of arms $A = \{1, \ldots, K\}$, where each arm $i \in A$ is characterized by a distribution $\nu_i$ and the samples (rewards) observed from each arm are independent and identically distributed. We focus on the setting where there exists a set of models $\Theta = \{\theta = (\nu_1, \ldots, \nu_K)\}$, $|\Theta| = m$, which contains all the possible bandit problems. We denote the mean of an arm $i$, the best arm, and the best value of a model $\theta \in \Theta$ respectively by $\mu_i(\theta)$, $i^*(\theta)$, and $\mu^*(\theta)$. We define the arm gap of an arm $i$ for a model $\theta$ as $\Delta_i(\theta) = \mu^*(\theta) - \mu_i(\theta)$, while the model gap for an arm $i$ between two models $\theta$ and $\theta'$ is defined as $\Gamma_i(\theta, \theta') = |\mu_i(\theta) - \mu_i(\theta')|$. We also assume that arm rewards are bounded in $[0, 1]$. We consider the sequential transfer setting where at each episode $j$ the learner interacts with a task $\bar\theta^j$, drawn from a distribution $\rho$ over $\Theta$, for $n$ steps. The objective is to minimize the (pseudo-)regret $\mathcal{R}_J$ over $J$ episodes, measured as the difference between the rewards obtained by pulling $i^*(\bar\theta^j)$ and those achieved by the learner:

$$\mathcal{R}_J = \sum_{j=1}^{J} R_n^j = \sum_{j=1}^{J} \sum_{i \neq i^*} T_{i,n}^j \, \Delta_i(\bar\theta^j), \qquad (1)$$

where $T_{i,n}^j$ is the number of pulls to arm $i$ after $n$ steps of episode $j$. We also introduce some tensor notation. Let $X \in \mathbb{R}^K$ be a random realization of the rewards of all arms from a random model. All the realizations are i.i.d. conditional on a model $\bar\theta$, and $\mathbb{E}[X \mid \bar\theta = \theta] = \mu(\theta)$, where the $i$-th component of $\mu(\theta) \in \mathbb{R}^K$ is $[\mu(\theta)]_i = \mu_i(\theta)$. Given realizations $X^1$, $X^2$ and $X^3$, we define the second moment matrix $M_2 = \mathbb{E}[X^1 \otimes X^2]$ such that $[M_2]_{i,j} = \mathbb{E}[X^1_i X^2_j]$, and the third moment tensor $M_3 = \mathbb{E}[X^1 \otimes X^2 \otimes X^3]$ such that $[M_3]_{i,j,l} = \mathbb{E}[X^1_i X^2_j X^3_l]$. Since the realizations are conditionally independent, we have that, for every $\theta \in \Theta$, $\mathbb{E}[X^1 \otimes X^2 \mid \theta] = \mathbb{E}[X^1 \mid \theta] \otimes \mathbb{E}[X^2 \mid \theta] = \mu(\theta) \otimes \mu(\theta)$, and this allows us to rewrite the second and third moments as $M_2 = \sum_\theta \rho(\theta)\,\mu(\theta)^{\otimes 2}$ and $M_3 = \sum_\theta \rho(\theta)\,\mu(\theta)^{\otimes 3}$, where $v^{\otimes p} = v \otimes v \otimes \cdots \otimes v$ is the $p$-th tensor power.
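As a quick sanity check of the mixture identities for the second and third moments, the following NumPy sketch (a toy instance with invented means and mixture weights) compares the population moments against Monte Carlo averages of outer products of three conditionally independent reward draws:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy instance: m = 2 models over K = 3 Bernoulli arms.
mu = np.array([[0.9, 0.2, 0.5],   # mu(theta_1)
               [0.1, 0.8, 0.4]])  # mu(theta_2)
rho = np.array([0.6, 0.4])        # distribution over models

# Population moments M2 = sum_theta rho(theta) mu(theta)^{(x)2}, M3 likewise.
M2 = np.einsum('t,ti,tj->ij', rho, mu, mu)
M3 = np.einsum('t,ti,tj,tl->ijl', rho, mu, mu, mu)

# Monte Carlo check: draw a model, then three conditionally independent
# reward vectors X1, X2, X3, and average their outer products.
n_draws = 200_000
thetas = rng.choice(2, size=n_draws, p=rho)
X1 = rng.binomial(1, mu[thetas])
X2 = rng.binomial(1, mu[thetas])
X3 = rng.binomial(1, mu[thetas])

M2_hat = np.einsum('ni,nj->ij', X1, X2) / n_draws
M3_hat = np.einsum('ni,nj,nl->ijl', X1, X2, X3) / n_draws

print(np.abs(M2 - M2_hat).max())  # small, shrinking as n_draws grows
print(np.abs(M3 - M3_hat).max())
```

Conditioning on the model first is what makes the cross-moments of three separate draws factor into $\mu(\theta)^{\otimes p}$; a single draw would not give this factorization.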
Let $A$ be a 3rd-order member of the tensor product of the Euclidean space $\mathbb{R}^K$ (as $M_3$); then we define the multilinear map as follows. For a set of three matrices $\{V_i \in \mathbb{R}^{K \times m}\}_{1 \le i \le 3}$, the $(i_1, i_2, i_3)$ entry in the 3-way array representation of $A(V_1, V_2, V_3) \in \mathbb{R}^{m \times m \times m}$ is

$$[A(V_1, V_2, V_3)]_{i_1, i_2, i_3} := \sum_{1 \le j_1, j_2, j_3 \le n} A_{j_1, j_2, j_3}\, [V_1]_{j_1, i_1} [V_2]_{j_2, i_2} [V_3]_{j_3, i_3}.$$

We also use different norms: the Euclidean norm $\|\cdot\|$; the Frobenius norm $\|\cdot\|_F$; and the matrix max-norm $\|A\|_{\max} = \max_{ij} |[A]_{ij}|$.
3 Multi-arm Bandit with Finite Models
Before considering the transfer problem, we show that a simple variation to UCB allows us to effectively exploit the knowledge of $\Theta$ and obtain a significant reduction in the regret. The mUCB (model-UCB) algorithm in Fig. 1 takes as input a set of models $\Theta$ including the current (unknown) model $\bar\theta$. At each step $t$, the algorithm computes a subset $\Theta_t \subseteq \Theta$ containing only the models whose means $\mu_i(\theta)$ are compatible with the current estimates $\hat\mu_{i,t}$ of the means $\mu_i(\bar\theta)$ of the current model, obtained averaging $T_{i,t}$ pulls, and their uncertainty $\varepsilon_{i,t}$ (see Eq. 2 for an explicit definition of this term). Notice that it is enough that one arm does not satisfy the compatibility condition to discard a model $\theta$. Among all the models in $\Theta_t$, mUCB first selects the model with the largest optimal value and then it pulls its corresponding optimal arm. This choice is coherent with the optimism in the face of uncertainty principle used in UCB-based algorithms, since mUCB always pulls the optimal arm corresponding to the optimistic model compatible with the current estimates $\hat\mu_{i,t}$. We show that mUCB incurs a regret which is never worse than UCB and is often significantly smaller.

Require: Set of models $\Theta$, number of steps $n$
for $t = 1, \ldots, n$ do
  Build $\Theta_t = \{\theta : \forall i,\ |\mu_i(\theta) - \hat\mu_{i,t}| \le \varepsilon_{i,t}\}$
  Select $\theta_t = \arg\max_{\theta \in \Theta_t} \mu^*(\theta)$
  Pull arm $I_t = i^*(\theta_t)$
  Observe sample $x_{I_t}$ and update
end for
Figure 1: The mUCB algorithm.

We denote the set of arms which are optimal for at least a model in a set $\Theta'$ as $A^*(\Theta') = \{i \in A : \exists \theta \in \Theta' : i^*(\theta) = i\}$. The set of models for which the arms in $A'$ are optimal is $\Theta(A') = \{\theta \in \Theta : \exists i \in A' : i^*(\theta) = i\}$. The set of optimistic models for a given model $\bar\theta$ is $\Theta_+ = \{\theta \in \Theta : \mu^*(\theta) \ge \mu^*(\bar\theta)\}$, and their corresponding optimal arms are $A_+ = A^*(\Theta_+)$. The following theorem bounds the expected regret (similar bounds hold in high probability). The lemmas and proofs (using standard tools from the bandit literature) are available in Sec. B of Azar et al. (2013).

¹ Notice that estimating the models involves solving a latent variable model estimation problem, for which RTP is the state-of-the-art.
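A minimal runnable sketch of the mUCB loop of Fig. 1, assuming Bernoulli rewards and a small invented model set (the helper name, constants, and fallback are illustrative, not the authors' code):

```python
import numpy as np

def mucb(models, true_idx, n, delta, rng):
    """Sketch of mUCB (Fig. 1) for Bernoulli rewards. `models` is an (m, K)
    array of arm means; `true_idx` selects the actual (unknown) model."""
    m, K = models.shape
    mu_true = models[true_idx]
    pulls = np.zeros(K)
    sums = np.zeros(K)
    regret = 0.0
    for t in range(1, n + 1):
        eps = np.sqrt(np.log(m * t * t / delta) / (2 * np.maximum(pulls, 1e-9)))
        mu_hat = sums / np.maximum(pulls, 1)
        # Active set: models compatible with the estimates on every pulled arm
        # (arms never pulled impose no constraint).
        compatible = np.all((np.abs(models - mu_hat) <= eps) | (pulls == 0), axis=1)
        cand = np.flatnonzero(compatible)
        if cand.size == 0:                 # should not happen w.h.p.; fall back
            cand = np.arange(m)
        theta_t = cand[np.argmax(models[cand].max(axis=1))]  # most optimistic model
        i_t = int(np.argmax(models[theta_t]))                # its optimal arm
        reward = rng.binomial(1, mu_true[i_t])
        pulls[i_t] += 1
        sums[i_t] += reward
        regret += mu_true.max() - mu_true[i_t]
    return regret

models = np.array([[0.9, 0.1, 0.5],   # hypothetical model set
                   [0.2, 0.8, 0.5],
                   [0.3, 0.4, 0.7]])
print(mucb(models, true_idx=0, n=3000, delta=1/3000, rng=np.random.default_rng(0)))
```

With the true model in the set, suboptimal models are quickly ruled incompatible on the pulled arm, so the empirical regret stays near zero, which is the behavior Theorem 1 quantifies.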
Theorem 1. If mUCB is run with $\delta = 1/n$ and a set of $m$ models $\Theta$ such that $\bar\theta \in \Theta$, and

$$\varepsilon_{i,t} = \sqrt{\log(m n^2/\delta)\,/\,(2 T_{i,t-1})}, \qquad (2)$$

where $T_{i,t-1}$ is the number of pulls to arm $i$ at the beginning of step $t$, then its expected regret is

$$\mathbb{E}[R_n] \le K + \sum_{i \in A_+} \frac{2\,\Delta_i(\bar\theta)\,\log(m n^3)}{\min_{\theta \in \Theta_{+,i}} \Gamma_i(\theta, \bar\theta)^2} \le K + \sum_{i \in A_+} \frac{2 \log(m n^3)}{\min_{\theta \in \Theta_{+,i}} \Gamma_i(\theta, \bar\theta)}, \qquad (3)$$

where $A_+ = A^*(\Theta_+)$ is the set of arms which are optimal for at least one optimistic model, and $\Theta_{+,i} = \{\theta \in \Theta_+ : i^*(\theta) = i\}$ is the set of optimistic models for which $i$ is the optimal arm.
Remark (comparison to UCB). The UCB algorithm incurs a regret

$$\mathbb{E}[R_n(\mathrm{UCB})] \le O\Big(\sum_{i \in A} \frac{\log n}{\Delta_i(\bar\theta)}\Big) \le O\Big(K \frac{\log n}{\min_i \Delta_i(\bar\theta)}\Big).$$

We see that mUCB displays two major improvements. The regret in Eq. 3 can be written as

$$\mathbb{E}[R_n(\mathrm{mUCB})] \le O\Big(\sum_{i \in A_+} \frac{\log n}{\min_{\theta \in \Theta_{+,i}} \Gamma_i(\theta, \bar\theta)}\Big) \le O\Big(|A_+| \frac{\log n}{\min_i \min_{\theta \in \Theta_{+,i}} \Gamma_i(\theta, \bar\theta)}\Big).$$

This result suggests that mUCB tends to discard all the models in $\Theta_+$, from the most optimistic down to the actual model $\bar\theta$ which, with high probability, is never discarded. As a result, even if other models are still in $\Theta_t$, the optimal arm of $\bar\theta$ is pulled until the end. This significantly reduces the set of arms which are actually pulled by mUCB, and the previous bound only depends on the number of arms in $A_+$, which is $|A_+| \le |A^*(\Theta)| \le K$. Furthermore, for all arms $i$, the minimum gap $\min_{\theta \in \Theta_{+,i}} \Gamma_i(\theta, \bar\theta)$ is guaranteed to be larger than the arm gap $\Delta_i(\bar\theta)$ (see Lem. 4 in Sec. B of Azar et al. (2013)), thus further improving the performance of mUCB w.r.t. UCB.
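To make the two bounds concrete, here is a small numeric sketch (invented means) that evaluates the leading $\sum_i \log n/\Delta_i$ term of the UCB bound against the corresponding $\log n/\Gamma_i$ terms of the mUCB bound; for simplicity it keeps one optimistic model per arm rather than the full minimum over $\Theta_{+,i}$:

```python
import numpy as np

n = 10_000
models = np.array([[0.90, 0.85, 0.30],   # the actual model theta_bar
                   [0.20, 0.95, 0.60],   # optimistic: best value exceeds 0.90
                   [0.30, 0.40, 0.70]])  # not optimistic
mu = models[0]

# UCB leading term: log(n)/Delta_i for every suboptimal arm.
delta_gaps = mu.max() - mu
ucb_term = np.log(n) * sum(1 / d for d in delta_gaps if d > 0)

# mUCB leading term: only arms optimal under some *optimistic* model count,
# each weighted by the model gap Gamma_i on that arm.
mucb_term = 0.0
for theta in models[1:]:
    if theta.max() < mu.max():
        continue                          # not optimistic, never selected
    i = int(np.argmax(theta))             # optimal arm of the optimistic model
    if i == int(np.argmax(mu)):
        continue                          # pulling the true optimal arm is free
    gamma = abs(theta[i] - mu[i])         # model gap on that arm
    mucb_term += np.log(n) / gamma
print(ucb_term, mucb_term)                # mUCB's term is smaller here
```

The small arm gap on arm 2 dominates UCB's term, while mUCB only pays for the single arm proposed by the optimistic model, and pays through the (larger) model gap.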
4 Online Transfer with Unknown Models
We now consider the case when the set of models is unknown and the regret is cumulated over multiple tasks drawn from $\rho$ (Eq. 1). We introduce tUCB (transfer-UCB), which transfers estimates of $\Theta$ whose accuracy is improved through episodes using a method-of-moments approach.
4.1 The transfer-UCB Bandit Algorithm
Fig. 2 outlines the structure of our online transfer bandit algorithm tUCB (transfer-UCB). The algorithm uses two sub-algorithms: the bandit algorithm umUCB (uncertain model-UCB), whose objective is to minimize the regret at each episode, and RTP (robust tensor power method), which at each episode $j$ computes an estimate $\{\hat\mu_i^j(\theta)\}$ of the arm means of all the models. The bandit algorithm umUCB in Fig. 3 is an extension of the mUCB algorithm. It first computes a set of models $\Theta_t^j$ whose means $\hat\mu_i^j(\theta)$ are compatible with the current estimates $\hat\mu_{i,t}$. However, unlike the case where the exact models are available, here the models themselves are estimated, and the uncertainty $\varepsilon^j$ in their means (provided as input to umUCB) is taken into account in the definition of $\Theta_t^j$. Once
Require: number of arms $K$, number of models $m$, constant $C(\Theta)$
Initialize estimated models $\Theta^1 = \{\hat\mu_i^1(\theta)\}_{i,\theta}$, samples $R \in \mathbb{R}^{J \times K \times n}$
for $j = 1, 2, \ldots, J$ do
  Run $R_j$ = umUCB($\Theta^j$, $n$)
  Run $\Theta^{j+1}$ = RTP($R$, $m$, $K$, $j$, $\delta$)
end for
Figure 2: The tUCB algorithm.

Require: set of models $\Theta^j$, num. steps $n$
Pull each arm three times
for $t = 3K + 1, \ldots, n$ do
  Build $\Theta_t^j = \{\theta : \forall i,\ |\hat\mu_i^j(\theta) - \hat\mu_{i,t}| \le \varepsilon_{i,t} + \varepsilon^j\}$
  Compute $B_t^j(i; \theta) = \min\{(\hat\mu_i^j(\theta) + \varepsilon^j),\ (\hat\mu_{i,t} + \varepsilon_{i,t})\}$
  Compute $\theta_t^j = \arg\max_{\theta \in \Theta_t^j} \max_i B_t^j(i; \theta)$
  Pull arm $I_t = \arg\max_i B_t^j(i; \theta_t^j)$
  Observe sample $R(I_t, T_{i,t}) = x_{I_t}$ and update
end for
return Samples $R$
Figure 3: The umUCB algorithm.
Require: samples $R \in \mathbb{R}^{j \times n}$, number of models $m$ and arms $K$, episode $j$
Estimate the second and third moments $\hat{M}_2$ and $\hat{M}_3$ using the reward samples from $R$ (Eq. 4)
Compute $\hat{D} \in \mathbb{R}^{m \times m}$ and $\hat{U} \in \mathbb{R}^{K \times m}$ ($m$ largest eigenvalues and eigenvectors of $\hat{M}_2$, resp.)
Compute the whitening mapping $\hat{W} = \hat{U}\hat{D}^{-1/2}$ and the tensor $\hat{T} = \hat{M}_3(\hat{W}, \hat{W}, \hat{W})$
Plug $\hat{T}$ in Alg. 1 of Anandkumar et al. (2012b) and compute eigenvectors/values $\{\hat{v}(\theta)\}$, $\{\hat\lambda(\theta)\}$
Compute $\hat\mu^j(\theta) = \hat\lambda(\theta)(\hat{W}^T)^+ \hat{v}(\theta)$ for all $\theta \in \Theta$
return $\Theta^{j+1} = \{\hat\mu^j(\theta) : \theta \in \Theta\}$
Figure 4: The robust tensor power (RTP) method (Anandkumar et al., 2012b).
the active set is computed, the algorithm computes an upper-confidence bound on the value of each arm $i$ for each model $\theta$ and returns the best arm for the most optimistic model. Unlike in mUCB, due to the uncertainty over the model estimates, a model $\theta$ might have more than one optimal arm, and an upper-confidence bound on the mean of the arms, $\hat\mu_i^j(\theta) + \varepsilon^j$, is used together with the upper-confidence bound $\hat\mu_{i,t} + \varepsilon_{i,t}$, which is directly derived from the samples observed so far from arm $i$. This guarantees that the B-values are always consistent with the samples generated from the actual model $\bar\theta^j$. Once umUCB terminates, RTP (Fig. 4) updates the estimates of the model means $\hat\mu^j(\theta) = \{\hat\mu_i^j(\theta)\}_i \in \mathbb{R}^K$ using the samples obtained from each arm $i$. At the beginning of each task umUCB pulls all the arms 3 times, since RTP needs at least 3 samples from each arm to accurately estimate the 2nd and 3rd moments (Anandkumar et al., 2012b). More precisely, RTP uses all the reward samples generated up to episode $j$ to estimate the 2nd and 3rd moments (see Sec. 2) as

$$\hat{M}_2 = \frac{1}{j} \sum_{l=1}^{j} \mu_1^l \otimes \mu_2^l, \qquad \hat{M}_3 = \frac{1}{j} \sum_{l=1}^{j} \mu_1^l \otimes \mu_2^l \otimes \mu_3^l, \qquad (4)$$
where the vectors $\mu_1^l, \mu_2^l, \mu_3^l \in \mathbb{R}^K$ are obtained by dividing the $T_{i,n}^l$ samples observed from arm $i$ in episode $l$ in three batches and taking their average (e.g., $[\mu_1^l]_i$ is the average of the first $T_{i,n}^l/3$ samples).² Since $\mu_1^l, \mu_2^l, \mu_3^l$ are independent estimates of $\mu(\bar\theta^l)$, $\hat{M}_2$ and $\hat{M}_3$ are consistent estimates of the second and third moments $M_2$ and $M_3$. RTP relies on the fact that the model means $\mu(\theta)$ can be recovered from the spectral decomposition of the symmetric tensor $T = M_3(W, W, W)$, where $W$ is a whitening matrix for $M_2$, i.e., $M_2(W, W) = I_{m \times m}$ (see Sec. 2 for the definition of the mapping $A(V_1, V_2, V_3)$). Anandkumar et al. (2012b) (Thm. 4.3) have shown that under some mild assumption (see later Assumption 1) the model means $\{\mu(\theta)\}$ can be obtained as $\mu(\theta) = \lambda(\theta) B v(\theta)$, where $(\lambda(\theta), v(\theta))$ is a pair of eigenvalue/eigenvector for the tensor $T$ and $B := (W^T)^+$. Thus the RTP algorithm estimates the eigenvectors $\hat{v}(\theta)$ and the eigenvalues $\hat\lambda(\theta)$ of the $m \times m \times m$ tensor $\hat{T} := \hat{M}_3(\hat{W}, \hat{W}, \hat{W})$.³ Once $\hat{v}(\theta)$ and $\hat\lambda(\theta)$ are computed, the estimated mean vector $\hat\mu^j(\theta)$ is obtained by the inverse transformation $\hat\mu^j(\theta) = \hat\lambda(\theta) \hat{B} \hat{v}(\theta)$, where $\hat{B}$ is the pseudo-inverse of $\hat{W}^T$ (for a detailed description of the RTP algorithm see Anandkumar et al., 2012b).

² Notice that $\frac{1}{3}([\mu_1^l]_i + [\mu_2^l]_i + [\mu_3^l]_i) = \hat\mu_{i,n}^l$, the empirical mean of arm $i$ at the end of episode $l$.
³ The matrix $\hat{W} \in \mathbb{R}^{K \times m}$ is such that $\hat{M}_2(\hat{W}, \hat{W}) = I_{m \times m}$, i.e., $\hat{W}$ is the whitening matrix of $\hat{M}_2$. In general $\hat{W}$ is not unique. Here, we choose $\hat{W} = \hat{U}\hat{D}^{-1/2}$, where $\hat{D} \in \mathbb{R}^{m \times m}$ is a diagonal matrix consisting of the $m$ largest eigenvalues of $\hat{M}_2$ and $\hat{U} \in \mathbb{R}^{K \times m}$ has the corresponding eigenvectors as its columns.
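The batched estimator of Eq. 4 can be sketched as follows (a hedged illustration: `batch_means` and `update_moments` are invented helper names, and the toy check uses a single fixed model so that $\hat M_2$ should approach the rank-one matrix $\mu\mu^T$):

```python
import numpy as np

def batch_means(rewards_per_arm):
    """Split each arm's episode samples into three batches and average each,
    yielding the conditionally independent vectors mu_1, mu_2, mu_3 of Eq. 4."""
    K = len(rewards_per_arm)
    mus = np.zeros((3, K))
    for i, r in enumerate(rewards_per_arm):
        for b, part in enumerate(np.array_split(np.asarray(r, float), 3)):
            mus[b, i] = part.mean()
    return mus

def update_moments(M2, M3, mus, j):
    """Incremental form of the averages in Eq. 4 after episode j (1-indexed)."""
    m1, m2, m3 = mus
    M2 += (np.outer(m1, m2) - M2) / j
    M3 += (np.einsum('i,j,l->ijl', m1, m2, m3) - M3) / j
    return M2, M3

# Toy check with a single fixed model (invented means).
rng = np.random.default_rng(3)
mu = np.array([0.7, 0.3])
M2, M3 = np.zeros((2, 2)), np.zeros((2, 2, 2))
for j in range(1, 2001):
    rewards = [rng.binomial(1, p, size=9) for p in mu]
    M2, M3 = update_moments(M2, M3, batch_means(rewards), j)
print(np.abs(M2 - np.outer(mu, mu)).max())  # shrinks as episodes accumulate
```

The three-way split is what buys the conditional independence the moment identities need: within an episode the batches share the same (unknown) model but are otherwise independent.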
4.2 Sample Complexity of the Robust Tensor Power Method
umUCB requires as input $\varepsilon^j$, i.e., the uncertainty of the model estimates. Therefore we need sample complexity bounds on the accuracy of $\{\hat\mu_i(\theta)\}$ computed by RTP. The performance of RTP is directly affected by the error of the estimates $\hat{M}_2$ and $\hat{M}_3$ w.r.t. the true moments. In Thm. 2 we prove that, as the number of tasks $j$ grows, this error rapidly decreases at the rate $\sqrt{1/j}$. This result provides us with an upper bound on the error $\varepsilon^j$ needed for building the confidence intervals in umUCB. The following definition and assumption are required for our result.

Definition 1. Let $\Sigma_{M_2} = \{\lambda_1, \lambda_2, \ldots, \lambda_m\}$ be the set of $m$ largest eigenvalues of the matrix $M_2$. Define $\lambda_{\min} := \min_{\lambda \in \Sigma_{M_2}} \lambda$, $\lambda_{\max} := \max_{\lambda \in \Sigma_{M_2}} \lambda$ and $\rho_{\max} := \max_\theta \rho(\theta)$. Define the minimum gap between the distinct eigenvalues of $M_2$ as $\Delta_\lambda := \min_{\lambda_i \neq \lambda_l} |\lambda_i - \lambda_l|$.

Assumption 1. The mean vectors $\{\mu(\theta)\}_\theta$ are linearly independent and $\rho(\theta) > 0$ for all $\theta \in \Theta$.

We now state our main result, which is in the form of a high-probability bound on the estimation error of the mean reward vector of every model $\theta \in \Theta$.

Theorem 2. Pick $\delta \in (0, 1)$. Let $C(\Theta) := C_3\, \lambda_{\max}^3 \rho_{\max}/\lambda_{\min}\, \big(\lambda_{\max}/\Delta_\lambda + 1/\sqrt{\lambda_{\min}} + 1/\lambda_{\max}\big)$, where $C_3 > 0$ is a universal constant. Then under Assumption 1 there exist a constant $C_4 > 0$ and a permutation $\pi$ on $\Theta$ such that, for all $\theta \in \Theta$, we have w.p. $1 - \delta$

$$\|\mu(\theta) - \hat\mu^j(\pi(\theta))\| \le \varepsilon^j := C(\Theta)\, K^{2.5} m^2 \sqrt{\frac{\log(K/\delta)}{j}} \quad \text{after} \quad j \ge \frac{C_4\, m^5 K^6 \log(K/\delta)}{\min(\Delta_\lambda, \lambda_{\min})^2\, \lambda_{\min}^3}. \qquad (5)$$

Remark (computation of $C(\Theta)$). As illustrated in Fig. 3, umUCB relies on the estimates $\hat\mu^j(\theta)$ and on their accuracy $\varepsilon^j$. Although the bound reported in Thm. 2 provides an upper confidence bound on the error of the estimates, it contains terms which are not computable in general (e.g., $\lambda_{\min}$). In practice, $C(\Theta)$ should be considered as a parameter of the algorithm. This is not dissimilar from the parameter usually introduced in the definition of $\varepsilon_{i,t}$ in front of the square-root term in UCB.
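To see the whitening-and-decomposition pipeline behind these estimates in action, here is a simplified stand-in for RTP (illustrative names) that works from exact population moments and uses plain tensor power iteration with deflation instead of the randomized restarts of Anandkumar et al. (2012b):

```python
import numpy as np

def multilinear(T, V1, V2, V3):
    # The mapping A(V1, V2, V3) defined in Sec. 2.
    return np.einsum('abc,ai,bj,cl->ijl', T, V1, V2, V3)

def recover_means(M2, M3, m, n_iter=200):
    """Recover {mu(theta)} from exact moments via whitening plus a plain
    tensor power iteration with deflation (a simplified stand-in for RTP)."""
    vals, vecs = np.linalg.eigh(M2)
    D, U = vals[-m:], vecs[:, -m:]      # top-m eigenpairs of M2
    W = U / np.sqrt(D)                  # whitening: W^T M2 W = I_m
    T = multilinear(M3, W, W, W)
    B = np.linalg.pinv(W.T)             # inverse transformation B = (W^T)^+
    rng = np.random.default_rng(0)
    mus = []
    for _ in range(m):
        v = rng.standard_normal(m)
        v /= np.linalg.norm(v)
        for _ in range(n_iter):         # power iteration: v <- T(I, v, v)
            v = np.einsum('ijl,j,l->i', T, v, v)
            v /= np.linalg.norm(v)
        lam = np.einsum('ijl,i,j,l->', T, v, v, v)
        T = T - lam * np.einsum('i,j,l->ijl', v, v, v)   # deflate
        mus.append(lam * B @ v)         # mu = lambda * B v
    return np.array(mus)

# Toy check (invented means): build exact moments for two models, recover them.
mu_true = np.array([[0.9, 0.2, 0.5],
                    [0.1, 0.8, 0.4]])
rho = np.array([0.6, 0.4])
M2 = np.einsum('t,ti,tj->ij', rho, mu_true, mu_true)
M3 = np.einsum('t,ti,tj,tl->ijl', rho, mu_true, mu_true, mu_true)
rec = recover_means(M2, M3, m=2)
print(np.round(rec[np.argsort(rec[:, 0])], 6))
```

With exact moments the recovery is essentially exact (up to a permutation of the models); the sample complexity analysis above quantifies how the estimation error in $\hat M_2$ and $\hat M_3$ propagates through this same pipeline.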
4.3 Regret Analysis of umUCB
We now analyze the regret of umUCB when an estimated set of models $\Theta^j$ is provided as input. At episode $j$, for each model $\theta$ we define the set of non-dominated arms (i.e., potentially optimal arms) as $A_*^j(\theta) = \{i \in A : \nexists i',\ \hat\mu_i^j(\theta) + \varepsilon^j < \hat\mu_{i'}^j(\theta) - \varepsilon^j\}$. Among the non-dominated arms, when the actual model is $\bar\theta^j$, the set of optimistic arms is $A_+^j(\theta; \bar\theta^j) = \{i \in A_*^j(\theta) : \hat\mu_i^j(\theta) + \varepsilon^j \ge \mu^*(\bar\theta^j)\}$. As a result, the set of optimistic models is $\Theta_+^j(\bar\theta^j) = \{\theta \in \Theta : A_+^j(\theta; \bar\theta^j) \neq \emptyset\}$. In some cases, because of the uncertainty in the model estimates, unlike in mUCB, not all the models $\theta \neq \bar\theta^j$ can be discarded, not even at the end of a very long episode. Among the optimistic models, the set of models that cannot be discarded is defined as $\tilde\Theta_+^j(\bar\theta^j) = \{\theta \in \Theta_+^j(\bar\theta^j) : \forall i \in A_+^j(\theta; \bar\theta^j),\ |\hat\mu_i^j(\theta) - \mu_i(\bar\theta^j)| \le \varepsilon^j\}$. Finally, when we want to apply the previous definitions to a set of models $\Theta'$ instead of a single model we have, e.g., $A_*^j(\Theta'; \bar\theta^j) = \bigcup_{\theta \in \Theta'} A_*^j(\theta; \bar\theta^j)$.

The proofs of the following results are available in Sec. D of Azar et al. (2013); here we only report the number of pulls and the corresponding regret bound.
Corollary 1. If at episode $j$ umUCB is run with $\varepsilon_{i,t}$ as in Eq. 2 and $\varepsilon^j$ as in Eq. 5 with a parameter $\delta' = \delta/2K$, then any arm $i \in A$, $i \neq i^*(\bar\theta^j)$, is pulled $T_{i,n}$ times such that, w.p. $1 - \delta$,

$$T_{i,n} \le \min\Big\{\frac{2\log(2mKn^2/\delta)}{\Delta_i(\bar\theta^j)^2},\ \frac{\log(2mKn^2/\delta)}{2\min_{\theta \in \Theta_{i,+}^j(\bar\theta^j)} \hat\Gamma_i(\theta; \bar\theta^j)^2}\Big\} + 1 \quad \text{if } i \in A_1^j,$$
$$T_{i,n} \le \frac{2\log(2mKn^2/\delta)}{\Delta_i(\bar\theta^j)^2} + 1 \quad \text{if } i \in A_2^j, \qquad T_{i,n} = 0 \quad \text{otherwise},$$

where $\Theta_{i,+}^j(\bar\theta^j) = \{\theta \in \Theta_+^j(\bar\theta^j) : i \in A_+(\theta; \bar\theta^j)\}$ is the set of models for which $i$ is among their optimistic non-dominated arms, $\hat\Gamma_i(\theta; \bar\theta^j) = \Gamma_i(\theta, \bar\theta^j)/2 - \varepsilon^j$, $A_1^j = A_+^j(\Theta_+^j(\bar\theta^j); \bar\theta^j) \setminus A_+^j(\tilde\Theta_+^j(\bar\theta^j); \bar\theta^j)$ (i.e., the set of arms only proposed by models that can be discarded), and $A_2^j = A_+^j(\tilde\Theta_+^j(\bar\theta^j); \bar\theta^j)$ (i.e., the set of arms only proposed by models that cannot be discarded).
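Reading the width in Eq. 5 as $\varepsilon^j = C(\Theta)K^{2.5}m^2\sqrt{\log(K/\delta)/j}$ and treating $C(\Theta)$ as a tunable constant, as suggested by the remark in Sec. 4.2, a minimal helper for the input of umUCB might look like:

```python
import numpy as np

def eps_model(j, K, m, delta, C=2.0):
    """Eq. 5 width eps^j for the model-mean estimates, with C(Theta) treated
    as a tunable parameter (the experiments in Sec. 5 use C = 2)."""
    return C * K**2.5 * m**2 * np.sqrt(np.log(K / delta) / j)

# The width contracts at rate 1/sqrt(j): 100x more episodes, 10x tighter.
ratio = eps_model(1, K=7, m=5, delta=0.01) / eps_model(100, K=7, m=5, delta=0.01)
print(ratio)  # ~10
```

In early episodes this width is large, so the compatibility test in Fig. 3 discards almost nothing and umUCB behaves like UCB, which is exactly the no-negative-transfer behavior discussed below.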
The previous corollary states that arms which cannot be optimal for any optimistic model (i.e., which are not optimistic non-dominated arms) are never pulled by umUCB, which focuses only on arms $i \in A_+^j(\Theta_+^j(\bar\theta^j); \bar\theta^j)$. Among these arms, those that may help to remove a model from the active set (i.e., $i \in A_1^j$) are potentially pulled less than by UCB, while the remaining arms, which are optimal for the models that cannot be discarded (i.e., $i \in A_2^j$), are simply pulled according to a UCB strategy. Similar to mUCB, umUCB first pulls the arms that are more optimistic until either the active set $\Theta_t^j$ changes or they are no longer optimistic (because of the evidence from the actual samples). We are now ready to derive the per-episode regret of umUCB.
Theorem 3. If umUCB is run for $n$ steps on the set of models $\Theta^j$ estimated by RTP after $j$ episodes, with $\delta = 1/n$, and the actual model is $\bar\theta^j$, then its expected regret (w.r.t. the random realization in episode $j$ and conditional on $\bar\theta^j$) is

$$\mathbb{E}[R_n^j] \le K + \sum_{i \in A_1^j} \min\Big\{\frac{2\log(2mKn^3)}{\Delta_i(\bar\theta^j)^2},\ \frac{\log(2mKn^3)}{2\min_{\theta \in \Theta_{i,+}^j(\bar\theta^j)} \hat\Gamma_i(\theta; \bar\theta^j)^2}\Big\}\, \Delta_i(\bar\theta^j) + \sum_{i \in A_2^j} \frac{2\log(2mKn^3)}{\Delta_i(\bar\theta^j)}.$$
Remark (negative transfer). The transfer of knowledge introduces a bias in the learning process
which is often beneficial. Nonetheless, in many cases transfer may result in a bias towards wrong
solutions and a worse learning performance, a phenomenon often referred to as negative transfer.
The first interesting aspect of the previous theorem is that umUCB is guaranteed to never perform
worse than UCB itself. This implies that tUCB never suffers from negative transfer, even when the
set $\Theta^j$ contains highly uncertain models and might bias umUCB to pull suboptimal arms.
Remark (improvement over UCB). In Sec. 3 we showed that mUCB exploits the knowledge of $\Theta$ to focus on a restricted set of arms which are pulled less than by UCB. In umUCB this improvement is not as clear, since the models in $\Theta$ are not known but are estimated online through episodes. Yet, similar to mUCB, umUCB has the two main sources of potential improvement w.r.t. UCB. As illustrated by the regret bound in Thm. 3, umUCB focuses on arms in $A_1^j \cup A_2^j$, which is potentially a smaller set than $A$. Furthermore, the number of pulls to arms in $A_1^j$ is smaller than for UCB whenever the estimated model gap $\hat\Gamma_i(\theta; \bar\theta^j)$ is bigger than $\Delta_i(\bar\theta^j)$. Eventually, umUCB reaches the same performance (and improvement over UCB) as mUCB when $j$ is big enough. In fact, the set of optimistic models reduces to the one used in mUCB (i.e., $\Theta_+^j(\bar\theta^j) \equiv \Theta_+(\bar\theta^j)$), and all the optimistic models have only optimal arms (i.e., for any $\theta \in \Theta_+$ the set of non-dominated optimistic arms is $A_+(\theta; \bar\theta^j) = \{i^*(\theta)\}$), which corresponds to $A_1^j \equiv A^*(\Theta_+(\bar\theta^j))$ and $A_2^j \equiv \{i^*(\bar\theta^j)\}$, matching the condition of mUCB. For instance, for any model $\theta$, in order to have $A_*(\theta) = \{i^*(\theta)\}$, for any arm $i \neq i^*(\theta)$ we need that $\hat\mu_i^j(\theta) + \varepsilon^j \le \hat\mu_{i^*(\theta)}^j(\theta) - \varepsilon^j$. Thus after

$$j \ge \Big(\frac{2C(\Theta)}{\min_{\bar\theta \in \Theta} \min_{\theta \in \Theta_+(\bar\theta)} \min_i \Delta_i(\theta)}\Big)^2 + 1$$

episodes, all the optimistic models have only one optimal arm, independently of the actual identity of the model $\bar\theta^j$. Although this condition may seem restrictive, in practice umUCB starts improving over UCB much earlier, as illustrated in the numerical simulations in Sec. 5.
4.4 Regret Analysis of tUCB
Given the previous results, we derive the bound on the cumulative regret over J episodes (Eq. 1).
Theorem 4. If tUCB is run over $J$ episodes of $n$ steps, in which the tasks $\bar\theta^j$ are drawn from a fixed distribution $\rho$ over a set of models $\Theta$, then its cumulative regret is

$$R_J \le JK + \sum_{j=1}^{J} \sum_{i \in A_1^j} \min\Big\{\frac{2\log(2mKn^2/\delta)}{\Delta_i(\bar\theta^j)},\ \frac{\log(2mKn^2/\delta)}{2\min_{\theta \in \Theta_{i,+}^j(\bar\theta^j)} \hat\Gamma_i^j(\theta; \bar\theta^j)^2}\, \Delta_i(\bar\theta^j)\Big\} + \sum_{j=1}^{J} \sum_{i \in A_2^j} \frac{2\log(2mKn^2/\delta)}{\Delta_i(\bar\theta^j)},$$

w.p. $1 - \delta$ w.r.t. the randomization over tasks and the realizations of the arms in each episode.
Figure 5: Set of models $\Theta$ (arm means of models $m_1, \ldots, m_5$).
Figure 6: Complexity over tasks (UCB, UCB+, mUCB, tUCB vs. number of tasks $J$).
Figure 7: Regret of UCB, UCB+, mUCB, and tUCB (avg. over episodes) vs. episode length.
Figure 8: Per-episode regret of tUCB.
This result immediately follows from Thm. 3 and it shows a linear dependency on the number of episodes $J$. This dependency is the price to pay for not knowing the identity of the current task $\bar\theta^j$. If the task were revealed at the beginning of the task, a bandit algorithm could simply cluster all the samples coming from the same task and incur a much smaller cumulative regret, with a logarithmic dependency on episodes and steps, i.e., $\log(nJ)$. Nonetheless, as discussed in the previous section, the cumulative regret of tUCB is never worse than for UCB, and as the number of tasks increases it approaches the performance of mUCB, which fully exploits the prior knowledge of $\Theta$.
5 Numerical Simulations
In this section we report preliminary results of tUCB on synthetic data. The objective is to illustrate and support the previous theoretical findings. We define a set Θ of m = 5 MAB problems with K = 7 arms each, whose means {μ_i(θ)}_{i,θ} are reported in Fig. 5 (see Sect. F in Azar et al. (2013) for the actual values), where each model has a different color and squares correspond to optimal arms (e.g., arm 2 is optimal for model θ_2). This set of models is chosen to be challenging and to illustrate some interesting cases useful to understand the functioning of the algorithm.⁴ Models θ_1 and θ_2 only differ in their optimal arms and this makes it difficult to distinguish them. For arm 3 (which is optimal for model θ_3 and thus potentially selected by mUCB), all the models share exactly the same mean value. This implies that no model can be discarded by pulling it. Although this might suggest that mUCB gets stuck pulling arm 3, we showed in Thm. 1 that this is not the case. Models θ_1 and θ_5 are challenging for UCB since they have a small minimum gap. Only 5 out of the 7 arms are actually optimal for some model in Θ. Thus, we also report the performance of UCB+ which, under the assumption that Θ is known, immediately discards all the arms which are not optimal (i ∉ A_Θ) and performs UCB on the remaining arms. The model distribution is uniform, i.e., ρ(θ) = 1/m.
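The UCB/UCB+ comparison above can be reproduced in a small simulation. This is an illustrative sketch, not the actual experiment: the means below are hypothetical placeholders (the real values are in Sect. F of Azar et al. (2013)), and UCB+ is modeled simply as classical UCB restricted to the arms that are optimal for some model in Θ.

```python
import math
import random

def ucb_run(means, n, allowed=None, seed=0):
    """Run UCB over `allowed` arms (all arms if None); return cumulative regret."""
    rng = random.Random(seed)
    arms = list(range(len(means))) if allowed is None else list(allowed)
    counts = {i: 0 for i in arms}
    sums = {i: 0.0 for i in arms}
    best = max(means)
    regret = 0.0
    for t in range(1, n + 1):
        # Pull each candidate arm once, then maximize the UCB index.
        untried = [i for i in arms if counts[i] == 0]
        if untried:
            i = untried[0]
        else:
            i = max(arms, key=lambda a: sums[a] / counts[a]
                    + math.sqrt(2 * math.log(t) / counts[a]))
        x = means[i] + rng.gauss(0.0, 0.1)  # noisy reward around the true mean
        counts[i] += 1
        sums[i] += x
        regret += best - means[i]
    return regret

# Hypothetical set-up: K = 7 arms, only the arms in `opt` are optimal for some model.
means = [0.9, 0.8, 0.7, 0.85, 0.6, 0.5, 0.4]
opt = [0, 1, 3, 4, 6]  # 5 of the 7 arms, mimicking the structure of the experiment
r_ucb = ucb_run(means, 2000)
r_ucbplus = ucb_run(means, 2000, allowed=opt)
```

UCB+ explores fewer arms, which is exactly the advantage the prior knowledge of Θ buys in this experiment.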
Before discussing the transfer results, we compare UCB, UCB+, and mUCB, to illustrate the advantage of the prior knowledge of Θ w.r.t. UCB. Fig. 7 reports the per-episode regret of the three algorithms for episodes of different length n (the performance of tUCB is discussed later). The results are averaged over all the models in Θ and over 200 runs each. All the algorithms use the same confidence bound ε_{i,t}. The performance of mUCB is significantly better than both UCB and UCB+, thus showing that mUCB makes an efficient use of the prior knowledge of Θ. Furthermore, in Fig. 6 the horizontal lines correspond to the value of the regret bounds up to the n-dependent terms and constants⁵ for the different models in Θ, averaged w.r.t. ρ, for the three algorithms (the actual values for the different models are in the supplementary material). These values show that the improvement observed in practice is accurately predicted by the upper bounds derived in Thm. 1.

⁴ Notice that although Θ satisfies Assumption 1, the smallest singular value is σ_min = 0.0039 and Γ = 0.0038, thus making the estimation of the models difficult.
We now move to analyze the performance of tUCB. In Fig. 8 we show how the per-episode regret changes through episodes for a transfer problem with J = 5000 tasks of length n = 5000. In tUCB we used ε_j as in Eq. 5 with C = 2. As discussed in Thm. 3, UCB and mUCB define the boundaries of the performance of tUCB. In fact, at the beginning tUCB selects arms according to a UCB strategy, since no prior information about the models Θ is available. On the other hand, as more tasks are observed, tUCB is able to transfer the knowledge acquired through episodes and build an increasingly accurate estimate of the models, thus approaching the behavior of mUCB. This is also confirmed by Fig. 6, where we show how the complexity of tUCB changes through episodes. In both cases (regret and complexity) we see that tUCB does not reach the same performance as mUCB. This is due to the fact that some models have relatively small gaps, and thus the number of episodes needed to obtain an accurate enough estimate of the models to reach the performance of mUCB is much larger than 5000 (see also the Remarks of Thm. 3). Since the final objective is to achieve a small global regret (Eq. 1), in Fig. 7 we report the cumulative regret averaged over the total number of tasks (J) for different values of J and n. Again, this graph shows that tUCB outperforms UCB and that it tends to approach the performance of mUCB as J increases, for any value of n.
6 Conclusions and Open Questions
In this paper we introduce the transfer problem in the multi-armed bandit framework where tasks are drawn from a finite set of bandit problems. We first introduced the bandit algorithm mUCB and we showed that it is able to leverage the prior knowledge of the set of bandit problems Θ and reduce the regret w.r.t. UCB. When the set of models is unknown we define a method-of-moments variant (RTP) which consistently estimates the means of the models in Θ from the samples collected through episodes. This knowledge is then transferred to umUCB, which performs no worse than UCB and tends to approach the performance of mUCB. For these algorithms we derive regret bounds, and we show preliminary numerical simulations. To the best of our knowledge, this is the first work studying the problem of transfer in multi-armed bandits. It opens a series of interesting directions, including whether explicit model identification can improve our transfer regret.

Optimality of tUCB. At each episode, tUCB transfers the knowledge about Θ acquired from previous tasks to achieve a small per-episode regret using umUCB. Although this strategy guarantees that the per-episode regret of tUCB is never worse than UCB's, it may not be the optimal strategy in terms of the cumulative regret through episodes. In fact, if J is large, it could be preferable to run a model identification algorithm instead of umUCB in earlier episodes so as to improve the quality of the estimates μ̂_i(θ). Although such an algorithm would incur a much larger regret in earlier tasks (up to linear), it could approach the performance of mUCB in later episodes much faster than done by tUCB. This trade-off between identification of the models and transfer of knowledge suggests that algorithms different from tUCB are possible.

Unknown model-set size. In some problems the size m of the model set is not known to the learner and needs to be estimated. This problem can be addressed by estimating the rank of the matrix M2, which equals m (Kleibergen and Paap, 2006). We also note that one can relax the assumption that ρ(θ) needs to be positive (see Assumption 1) by using the estimated model size instead of m, since M2 does not depend on the means of models with ρ(θ) = 0.
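The rank-based estimate of m mentioned above can be sketched with a plain SVD: form the second-moment matrix M2 = Σ_θ ρ(θ) μ(θ)μ(θ)ᵀ and count the singular values above a tolerance. The model means and weights below are hypothetical, chosen only to make the rank visible.

```python
import numpy as np

# Hypothetical set of m = 3 models over K = 5 arms, with positive weights rho.
rng = np.random.default_rng(0)
mu = rng.uniform(0.1, 0.9, size=(3, 5))   # rows: mean vectors mu(theta)
rho = np.array([0.5, 0.3, 0.2])

# M2 = sum_theta rho(theta) mu(theta) mu(theta)^T; models with rho = 0 drop out,
# which is why the estimated size can replace m in Assumption 1.
M2 = sum(r * np.outer(m, m) for r, m in zip(rho, mu))

# Estimated model-set size = numerical rank of M2.
sv = np.linalg.svd(M2, compute_uv=False)
m_hat = int(np.sum(sv > 1e-8 * sv[0]))
```

For generic (linearly independent) mean vectors the numerical rank recovers the number of models with positive weight.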
Acknowledgments. This research was supported by the National Science Foundation (NSF award #SBE-0836012). We would like to thank Sham Kakade and Animashree Anandkumar for valuable discussions. A. Lazaric would like to acknowledge the support of the Ministry of Higher Education and Research, Nord-Pas-de-Calais Regional Council and FEDER through the "Contrat de Projets Etat Region (CPER) 2007–2013", and the European Community's Seventh Framework Programme (FP7/2007-2013) under grant agreement 231495 (project CompLACS).
⁵ For instance, for UCB we compute Σ_i 1/Δ_i.
References

Agarwal, A., Dudík, M., Kale, S., Langford, J., and Schapire, R. E. (2012). Contextual bandit learning with predictable rewards. In Proceedings of the 15th International Conference on Artificial Intelligence and Statistics (AISTATS'12).

Anandkumar, A., Foster, D. P., Hsu, D., Kakade, S., and Liu, Y.-K. (2012a). A spectral algorithm for latent Dirichlet allocation. In Proceedings of Advances in Neural Information Processing Systems 25 (NIPS'12), pages 926–934.

Anandkumar, A., Ge, R., Hsu, D., and Kakade, S. M. (2013). A tensor spectral approach to learning mixed membership community models. Journal of Machine Learning Research, 1:65.

Anandkumar, A., Ge, R., Hsu, D., Kakade, S. M., and Telgarsky, M. (2012b). Tensor decompositions for learning latent variable models. CoRR, abs/1210.7559.

Anandkumar, A., Hsu, D., and Kakade, S. M. (2012c). A method of moments for mixture models and hidden Markov models. In Proceedings of the 25th Annual Conference on Learning Theory (COLT'12), volume 23, pages 33.1–33.34.

Auer, P., Cesa-Bianchi, N., and Fischer, P. (2002). Finite-time analysis of the multi-armed bandit problem. Machine Learning, 47:235–256.

Azar, M. G., Lazaric, A., and Brunskill, E. (2013). Sequential transfer in multi-armed bandit with finite set of models. CoRR, abs/1307.6887.

Cavallanti, G., Cesa-Bianchi, N., and Gentile, C. (2010). Linear algorithms for online multitask classification. Journal of Machine Learning Research, 11:2901–2934.

Cesa-Bianchi, N. and Lugosi, G. (2006). Prediction, Learning, and Games. Cambridge University Press.

Dekel, O., Long, P. M., and Singer, Y. (2006). Online multitask learning. In Proceedings of the 19th Annual Conference on Learning Theory (COLT'06), pages 453–467.

Garivier, A. and Moulines, E. (2011). On upper-confidence bound policies for switching bandit problems. In Proceedings of the 22nd International Conference on Algorithmic Learning Theory (ALT'11), pages 174–188, Berlin, Heidelberg. Springer-Verlag.

Kleibergen, F. and Paap, R. (2006). Generalized reduced rank tests using the singular value decomposition. Journal of Econometrics, 133(1):97–126.

Langford, J. and Zhang, T. (2007). The epoch-greedy algorithm for multi-armed bandits with side information. In Proceedings of Advances in Neural Information Processing Systems 20 (NIPS'07).

Lazaric, A. (2011). Transfer in reinforcement learning: a framework and a survey. In Wiering, M. and van Otterlo, M., editors, Reinforcement Learning: State of the Art. Springer.

Lugosi, G., Papaspiliopoulos, O., and Stoltz, G. (2009). Online multi-task learning with hard constraints. In Proceedings of the 22nd Annual Conference on Learning Theory (COLT'09).

Mann, T. A. and Choe, Y. (2012). Directed exploration in reinforcement learning with transferred knowledge. In Proceedings of the Tenth European Workshop on Reinforcement Learning (EWRL'12).

Pan, S. J. and Yang, Q. (2010). A survey on transfer learning. IEEE Transactions on Knowledge and Data Engineering, 22(10):1345–1359.

Robbins, H. (1952). Some aspects of the sequential design of experiments. Bulletin of the AMS, 58:527–535.

Saha, A., Rai, P., Daumé III, H., and Venkatasubramanian, S. (2011). Online learning of multiple tasks and their relationships. In Proceedings of the 14th International Conference on Artificial Intelligence and Statistics (AISTATS'11), Ft. Lauderdale, Florida.

Stewart, G. W. and Sun, J.-G. (1990). Matrix Perturbation Theory. Academic Press.

Taylor, M. E. (2009). Transfer in Reinforcement Learning Domains. Springer-Verlag.

Wedin, P. (1972). Perturbation bounds in connection with singular value decomposition. BIT Numerical Mathematics, 12(1):99–111.
Prior-free and prior-dependent regret bounds for Thompson Sampling
Sébastien Bubeck, Che-Yu Liu
Department of Operations Research and Financial Engineering,
Princeton University
[email protected], [email protected]
Abstract
We consider the stochastic multi-armed bandit problem with a prior distribution on the reward distributions. We are interested in studying prior-free and prior-dependent regret bounds, very much in the same spirit as the usual distribution-free and distribution-dependent bounds for the non-Bayesian stochastic bandit. We first show that Thompson Sampling attains an optimal prior-free bound, in the sense that for any prior distribution its Bayesian regret is bounded from above by $14\sqrt{nK}$. This result is unimprovable in the sense that there exists a prior distribution such that any algorithm has a Bayesian regret bounded from below by $\frac{1}{20}\sqrt{nK}$. We also study the case of priors for the setting of Bubeck et al. [2013] (where the optimal mean is known as well as a lower bound on the smallest gap) and we show that in this case the regret of Thompson Sampling is in fact uniformly bounded over time, thus showing that Thompson Sampling can greatly take advantage of the nice properties of these priors.
1 Introduction
In this paper we are interested in the Bayesian multi-armed bandit problem, which can be described as follows. Let $\pi_0$ be a known distribution over some set $\Theta$, and let $\theta$ be a random variable distributed according to $\pi_0$. For $i \in [K]$, let $(X_{i,s})_{s \ge 1}$ be identically distributed random variables taking values in $[0,1]$ which are independent conditionally on $\theta$. Denote $\mu_i(\theta) := \mathbb{E}(X_{i,1}|\theta)$. Consider now an agent facing $K$ actions (or arms). At each time step $t = 1, \ldots, n$, the agent pulls an arm $I_t \in [K]$. The agent receives the reward $X_{i,s}$ when he pulls arm $i$ for the $s$th time. The arm selection is based only on past observed rewards and potentially on an external source of randomness. More formally, let $(U_s)_{s \ge 1}$ be an i.i.d. sequence of random variables uniformly distributed on $[0,1]$, and let $T_i(s) = \sum_{t=1}^{s} \mathbf{1}_{I_t = i}$; then $I_t$ is a random variable measurable with respect to $\sigma(I_1, X_{1,1}, \ldots, I_{t-1}, X_{I_{t-1},T_{I_{t-1}}(t-1)}, U_t)$. We measure the performance of the agent through the Bayesian regret defined as
$$\mathrm{BR}_n = \mathbb{E} \sum_{t=1}^{n} \left( \max_{i \in [K]} \mu_i(\theta) - \mu_{I_t}(\theta) \right),$$
where the expectation is taken with respect to the parameter $\theta$, the rewards $(X_{i,s})_{s \ge 1}$, and the external source of randomness $(U_s)_{s \ge 1}$. We will also be interested in the individual regret $R_n(\theta)$, which is defined similarly except that $\theta$ is fixed (instead of being integrated over $\pi_0$). When it is clear from the context we drop the dependency on $\theta$ in the various quantities defined above.
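As a concrete illustration of this definition, the Bayesian regret of a fixed policy can be estimated by Monte Carlo: draw $\theta$ from the prior, run the policy, and average the per-round gaps in means. The sketch below uses a toy two-point prior over two Bernoulli arms and the uniform-random policy; all numbers are illustrative, not from the paper.

```python
import random

def bayes_regret_uniform_policy(n=100, reps=2000, seed=0):
    """Monte Carlo estimate of BR_n for the policy pulling arms uniformly at random."""
    rng = random.Random(seed)
    # Two-point prior: theta_1 = (0.8, 0.2), theta_2 = (0.2, 0.8), each w.p. 1/2.
    thetas = [(0.8, 0.2), (0.2, 0.8)]
    total = 0.0
    for _ in range(reps):
        mu = thetas[rng.randrange(2)]   # theta ~ pi_0
        best = max(mu)
        for _ in range(n):
            i = rng.randrange(2)        # uniform-random arm choice
            total += best - mu[i]       # regret is measured in the means, not rewards
    return total / reps

est = bayes_regret_uniform_policy()
```

Here the uniform policy pulls the wrong arm half the time with gap 0.6, so the estimate concentrates around $0.3\, n = 30$.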
Given a prior $\pi_0$, the problem of finding an optimal strategy to minimize the Bayesian regret $\mathrm{BR}_n$ is a well defined optimization problem and as such it is merely a computational problem. On the other hand, the point of view initially developed in Robbins [1952] leads to a learning problem. In this latter view the agent's strategy must have a low regret $R_n(\theta)$ for any $\theta \in \Theta$. Both formulations of the problem have a long history and we refer the interested reader to Bubeck and Cesa-Bianchi [2012] for a survey of the extensive recent literature on the learning setting. In the Bayesian setting a major breakthrough was achieved in Gittins [1979], where it was shown that when the prior distribution takes a product form an optimal strategy is given by the Gittins indices (which are relatively easy to compute). The product assumption on the prior means that the reward processes $(X_{i,s})_{s\ge1}$ are independent across arms. In the present paper we are precisely interested in the situations where this assumption is not satisfied. Indeed we believe that one of the strengths of the Bayesian setting is that one can incorporate prior knowledge on the arms in a very transparent way. A prototypical example that we shall consider later on in this paper is when one knows the distributions of the arms up to a permutation, in which case the reward processes are strongly dependent.

In general, without the product assumption on the prior it seems hopeless (from a computational perspective) to look for the optimal Bayesian strategy. Thus, despite being in a Bayesian setting, it makes sense to view it as a learning problem and to evaluate the agent's performance through its Bayesian regret. In this paper we are particularly interested in studying the Thompson Sampling strategy, which was proposed in the very first paper on the multi-armed bandit problem, Thompson [1933]. This strategy can be described very succinctly: let $\pi_t$ be the posterior distribution on $\theta$ given the history $H_t = (I_1, X_{1,1}, \ldots, I_{t-1}, X_{I_{t-1},T_{I_{t-1}}(t-1)})$ of the algorithm up to the beginning of round $t$. Then Thompson Sampling first draws a parameter $\theta^{(t)}$ from $\pi_t$ (independently from the past given $\pi_t$) and it pulls $I_t \in \operatorname{argmax}_{i \in [K]} \mu_i(\theta^{(t)})$.
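For the canonical case of Bernoulli rewards with independent Beta priors, Thompson Sampling is only a few lines. This is a standard textbook sketch of the strategy just described, not the specific priors analyzed in this paper; the arm probabilities are illustrative.

```python
import random

def thompson_bernoulli(probs, n, seed=0):
    """Thompson Sampling with Beta(1,1) priors on Bernoulli arms; returns pull counts."""
    rng = random.Random(seed)
    K = len(probs)
    a = [1] * K  # posterior Beta alpha (successes + 1)
    b = [1] * K  # posterior Beta beta (failures + 1)
    pulls = [0] * K
    for _ in range(n):
        # Draw theta^(t) from the posterior, then pull the arm maximizing mu_i(theta^(t)).
        samples = [rng.betavariate(a[i], b[i]) for i in range(K)]
        i = max(range(K), key=samples.__getitem__)
        x = 1 if rng.random() < probs[i] else 0
        a[i] += x
        b[i] += 1 - x
        pulls[i] += 1
    return pulls

pulls = thompson_bernoulli([0.8, 0.2], 2000)
```

Because the posterior concentrates, the sampled parameter selects the better arm more and more often.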
Recently there has been a surge of interest in this simple policy, mainly because of its flexibility to incorporate prior knowledge on the arms; see for example Chapelle and Li [2011]. For a long time the theoretical properties of Thompson Sampling remained elusive. The specific case of binary rewards with a Beta prior is now very well understood thanks to the papers Agrawal and Goyal [2012a], Kaufmann et al. [2012], Agrawal and Goyal [2012b]. However, as we pointed out above, here we are interested in proving regret bounds for the more realistic scenario where one runs Thompson Sampling with a hand-tuned prior distribution, possibly very different from a Beta prior. The first result in that spirit was obtained very recently by Russo and Roy [2013], who showed that for any prior distribution $\pi_0$ Thompson Sampling always satisfies $\mathrm{BR}_n \le 5\sqrt{nK \log n}$. A similar bound was proved in Agrawal and Goyal [2012b] for the specific case of the Beta prior¹. Our first contribution is to show in Section 2 that the extraneous logarithmic factor in these bounds can be removed by using ideas reminiscent of the MOSS algorithm of Audibert and Bubeck [2009].

Our second contribution is to show that Thompson Sampling can take advantage of the properties of some non-trivial priors to attain much better regret guarantees. More precisely, in Sections 2 and 3 we consider the setting of Bubeck et al. [2013] (which we call the BPR setting) where $\mu^*$ and $\varepsilon > 0$ are known values such that for any $\theta \in \Theta$, first there is a unique best arm $\{i^*(\theta)\} = \operatorname{argmax}_{i\in[K]} \mu_i(\theta)$, and furthermore
$$\mu_{i^*(\theta)}(\theta) = \mu^*, \quad \text{and} \quad \Delta_i(\theta) := \mu_{i^*(\theta)}(\theta) - \mu_i(\theta) \ge \varepsilon, \quad \forall i \neq i^*(\theta).$$
In other words, the value of the best arm is known, as well as a non-trivial lower bound on the gap between the values of the best and second best arms. For this problem a new algorithm was proposed in Bubeck et al. [2013] (which we call the BPR policy), and it was shown that the BPR policy satisfies
$$R_n(\theta) = O\left( \sum_{i \neq i^*(\theta)} \frac{\log(\Delta_i(\theta)/\varepsilon)}{\Delta_i(\theta)} \log\log(1/\varepsilon) \right), \quad \forall \theta \in \Theta, \ \forall n \ge 1.$$
Thus the BPR policy attains a regret uniformly bounded over time in the BPR setting, a feature that standard bandit algorithms such as UCB of Auer et al. [2002] cannot achieve. It is natural to view the assumptions of the BPR setting as a prior over the reward distributions and to ask what regret guarantees Thompson Sampling attains in that situation. More precisely, we consider Thompson Sampling with Gaussian reward distributions and uniform prior over the possible range of parameters. We then prove individual regret bounds for any sub-Gaussian distributions (similarly to Bubeck et al. [2013]). We obtain that Thompson Sampling uses the prior information optimally, in the sense that it also attains regret uniformly bounded over time. Furthermore, as an added bonus, we remove the extraneous log-log factor of the BPR policy's regret bound.

The results presented in Sections 2 and 3 can be viewed as a first step towards a better understanding of prior-dependent regret bounds for Thompson Sampling. Generalizing these results to arbitrary priors is a challenging open problem which is beyond the scope of our current techniques.

¹ Note however that the result of Agrawal and Goyal [2012b] applies to the individual regret $R_n(\theta)$, while the result of Russo and Roy [2013] only applies to the integrated Bayesian regret $\mathrm{BR}_n$.
2 Optimal prior-free regret bound for Thompson Sampling
In this section we prove the following result.

Theorem 1 For any prior distribution $\pi_0$ over reward distributions in $[0,1]$, Thompson Sampling satisfies
$$\mathrm{BR}_n \le 14\sqrt{nK}.$$
Remark that the above result is unimprovable, in the sense that there exist prior distributions $\pi_0$ such that for any algorithm one has $R_n \ge \frac{1}{20}\sqrt{nK}$ (see e.g. [Theorem 3.5, Bubeck and Cesa-Bianchi [2012]]). This theorem also implies an optimal rate of identification for the best arm; see Bubeck et al. [2009] for more details on this.
Proof We decompose the proof into three steps. We denote $i^*(\theta) \in \operatorname{argmax}_{i\in[K]} \mu_i(\theta)$; in particular one has $I_t = i^*(\theta^{(t)})$.

Step 1: rewriting of the Bayesian regret in terms of upper confidence bounds. This step is given by [Proposition 1, Russo and Roy [2013]], which we reprove for sake of completeness. Let $B_{i,t}$ be a random variable measurable with respect to $\sigma(H_t)$. Note that by definition $\theta^{(t)}$ and $\theta$ are identically distributed conditionally on $H_t$. This implies by the tower rule:
$$\mathbb{E}\, B_{i^*(\theta),t} = \mathbb{E}\, B_{i^*(\theta^{(t)}),t} = \mathbb{E}\, B_{I_t,t}.$$
Thus we obtain:
$$\mathbb{E}\left(\mu_{i^*(\theta)}(\theta) - \mu_{I_t}(\theta)\right) = \mathbb{E}\left(\mu_{i^*(\theta)}(\theta) - B_{i^*(\theta),t}\right) + \mathbb{E}\left(B_{I_t,t} - \mu_{I_t}(\theta)\right).$$
Inspired by the MOSS strategy of Audibert and Bubeck [2009] we will now take
$$B_{i,t} = \widehat{\mu}_{i,T_i(t-1)} + \sqrt{\frac{\log_+\left(\frac{n}{K\, T_i(t-1)}\right)}{T_i(t-1)}},$$
where $\widehat{\mu}_{i,s} = \frac{1}{s}\sum_{t=1}^{s} X_{i,t}$, and $\log_+(x) = \log(x)\,\mathbf{1}_{x \ge 1}$. In the following we denote $\delta_0 = 2\sqrt{\frac{K}{n}}$. From now on we work conditionally on $\theta$ and thus we drop all the dependency on $\theta$.
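The index $B_{i,t}$ above is straightforward to compute; the sketch below mirrors the formula, with the $\log_+$ bonus vanishing once an arm has been pulled at least $n/K$ times.

```python
import math

def log_plus(x):
    """log_+(x) = log(x) if x >= 1, else 0."""
    return math.log(x) if x >= 1.0 else 0.0

def moss_index(mu_hat, s, n, K):
    """B_{i,t} = mu_hat_{i,s} + sqrt(log_+(n / (K s)) / s), with s = T_i(t-1) >= 1."""
    return mu_hat + math.sqrt(log_plus(n / (K * s)) / s)

# With n = 1000, K = 10: the bonus is positive for s < n/K and zero for s >= n/K.
b_small = moss_index(0.5, 10, 1000, 10)   # n/(Ks) = 10 >= 1, positive bonus
b_large = moss_index(0.5, 100, 1000, 10)  # n/(Ks) = 1, log_+ = 0, no bonus
```

This clipping of the exploration bonus is exactly what removes the extraneous logarithmic factor in the prior-free bound.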
Step 2: control of $\mathbb{E}\left(\mu_{i^*(\theta)}(\theta) - B_{i^*(\theta),t} \,\middle|\, \theta\right)$. By a simple integration of the deviations one has
$$\mathbb{E}\left(\mu_{i^*} - B_{i^*,t}\right) \le \delta_0 + \int_{\delta_0}^{1} \mathbb{P}\left(\mu_{i^*} - B_{i^*,t} \ge u\right) du.$$
Next we extract the following inequality from Audibert and Bubeck [2010] (see p. 2683–2684): for any $i \in [K]$,
$$\mathbb{P}\left(\mu_i - B_{i,t} \ge u\right) \le \frac{4K}{n u^2} \log\left(\sqrt{\frac{n}{K}}\, u\right) + \frac{1}{n u^2/K - 1}.$$
Now an elementary integration gives
$$\int_{\delta_0}^{1} \frac{4K}{n u^2} \log\left(\sqrt{\frac{n}{K}}\, u\right) du = \left[-\frac{4K}{n u} \log\left(e\sqrt{\frac{n}{K}}\, u\right)\right]_{\delta_0}^{1} \le \frac{4K}{n \delta_0} \log\left(e\sqrt{\frac{n}{K}}\, \delta_0\right) = 2(1+\log 2)\sqrt{\frac{K}{n}},$$
and
$$\int_{\delta_0}^{1} \frac{du}{n u^2/K - 1} = \left[\frac{1}{2}\sqrt{\frac{K}{n}} \log\left(\frac{\sqrt{\frac{n}{K}}\,u - 1}{\sqrt{\frac{n}{K}}\,u + 1}\right)\right]_{\delta_0}^{1} \le \frac{\log 3}{2}\sqrt{\frac{K}{n}}.$$
Thus we proved:
$$\mathbb{E}\left(\mu_{i^*(\theta)}(\theta) - B_{i^*(\theta),t} \,\middle|\, \theta\right) \le \left(2 + 2(1+\log 2) + \frac{\log 3}{2}\right)\sqrt{\frac{K}{n}} \le 6\sqrt{\frac{K}{n}}.$$

Step 3: control of $\mathbb{E}\sum_{t=1}^{n}\left(B_{I_t,t} - \mu_{I_t}(\theta) \,\middle|\, \theta\right)$. We start again by integrating the deviations:
$$\mathbb{E}\sum_{t=1}^{n}\left(B_{I_t,t} - \mu_{I_t}\right) \le \delta_0 n + \int_{\delta_0}^{+\infty} \sum_{t=1}^{n} \mathbb{P}\left(B_{I_t,t} - \mu_{I_t} \ge u\right) du.$$
Next we use the following simple inequality:
$$\sum_{t=1}^{n} \mathbf{1}\{B_{I_t,t} - \mu_{I_t} \ge u\} \le \sum_{i=1}^{K}\sum_{s=1}^{n} \mathbf{1}\left\{\widehat{\mu}_{i,s} + \sqrt{\frac{\log_+\left(\frac{n}{Ks}\right)}{s}} - \mu_i \ge u\right\},$$
which implies
$$\sum_{t=1}^{n} \mathbb{P}\left(B_{I_t,t} - \mu_{I_t} \ge u\right) \le \sum_{i=1}^{K}\sum_{s=1}^{n} \mathbb{P}\left(\widehat{\mu}_{i,s} + \sqrt{\frac{\log_+\left(\frac{n}{Ks}\right)}{s}} - \mu_i \ge u\right).$$
Now for $u \ge \delta_0$ let $s(u) = \left\lceil 3\log\left(\frac{n u^2}{K}\right)/u^2 \right\rceil$, where $\lceil x \rceil$ is the smallest integer larger than $x$. Let $c = 1 - \frac{1}{\sqrt{3}}$. It is easy to see that one has:
$$\sum_{s=1}^{n} \mathbb{P}\left(\widehat{\mu}_{i,s} + \sqrt{\frac{\log_+\left(\frac{n}{Ks}\right)}{s}} - \mu_i \ge u\right) \le \frac{3\log\left(\frac{n u^2}{K}\right)}{u^2} + \sum_{s=s(u)}^{n} \mathbb{P}\left(\widehat{\mu}_{i,s} - \mu_i \ge c u\right).$$
Using an integration already done in Step 2 we have
$$\int_{\delta_0}^{+\infty} \frac{3\log\left(\frac{n u^2}{K}\right)}{u^2}\, du \le 3(1+\log 2)\sqrt{\frac{n}{K}} \le 5.1\sqrt{\frac{n}{K}}.$$
Next, using Hoeffding's inequality and the fact that the rewards are in $[0,1]$, one has for $u \ge \delta_0$
$$\sum_{s=s(u)}^{n} \mathbb{P}\left(\widehat{\mu}_{i,s} - \mu_i \ge c u\right) \le \sum_{s=s(u)}^{n} \exp(-2 s c^2 u^2)\,\mathbf{1}_{u \le 1/c} \le \frac{\exp(-12 c^2 \log 2)}{1 - \exp(-2 c^2 u^2)}\,\mathbf{1}_{u \le 1/c}.$$
Now using that $1 - \exp(-x) \ge x - x^2/2$ for $x \ge 0$ one obtains
$$\int_{\delta_0}^{1/c} \frac{du}{1-\exp(-2c^2u^2)} = \int_{\delta_0}^{1/(2c)} \frac{du}{1-\exp(-2c^2u^2)} + \int_{1/(2c)}^{1/c} \frac{du}{1-\exp(-2c^2u^2)}$$
$$\le \int_{\delta_0}^{1/(2c)} \frac{du}{2c^2u^2 - 2c^4u^4} + \frac{1}{2c\left(1-\exp(-1/2)\right)} \le \int_{\delta_0}^{1/(2c)} \frac{2}{3c^2u^2}\, du + \frac{1}{2c\left(1-\exp(-1/2)\right)}$$
$$\le \frac{2}{3c^2\delta_0} - \frac{4}{3c} + \frac{1}{2c\left(1-\exp(-1/2)\right)} \le 1.9\sqrt{\frac{n}{K}}.$$
Putting the pieces together we proved
$$\mathbb{E}\sum_{t=1}^{n}\left(B_{I_t,t} - \mu_{I_t}\right) \le 7.6\sqrt{nK},$$
which concludes the proof together with the results of Step 1 and Step 2.
3 Thompson Sampling in the two-armed BPR setting
Following [Section 2, Bubeck et al. [2013]] we consider here the two-armed bandit problem with sub-Gaussian reward distributions (that is, they satisfy $\mathbb{E}\, e^{\lambda(X-\mu)} \le e^{\lambda^2/2}$ for all $\lambda \in \mathbb{R}$) and such that one reward distribution has mean $\mu^*$ and the other one has mean $\mu^* - \Delta$, where $\mu^*$ and $\Delta$ are known values.

In order to derive the Thompson Sampling strategy for this problem we further assume that the reward distributions are in fact Gaussian with variance 1. In other words let $\Theta = \{\theta_1, \theta_2\}$, $\pi_0(\theta_1) = \pi_0(\theta_2) = 1/2$, and under $\theta_1$ one has $X_{1,s} \sim \mathcal{N}(\mu^*, 1)$ and $X_{2,s} \sim \mathcal{N}(\mu^* - \Delta, 1)$, while under $\theta_2$ one has $X_{2,s} \sim \mathcal{N}(\mu^*, 1)$ and $X_{1,s} \sim \mathcal{N}(\mu^* - \Delta, 1)$. Then a straightforward computation (using Bayes rule and induction) shows that one has, for some normalizing constant $c > 0$:
$$\pi_t(\theta_1) = c \exp\left(-\frac{1}{2}\sum_{s=1}^{T_1(t-1)} (\mu^* - X_{1,s})^2 - \frac{1}{2}\sum_{s=1}^{T_2(t-1)} (\mu^* - \Delta - X_{2,s})^2\right),$$
$$\pi_t(\theta_2) = c \exp\left(-\frac{1}{2}\sum_{s=1}^{T_1(t-1)} (\mu^* - \Delta - X_{1,s})^2 - \frac{1}{2}\sum_{s=1}^{T_2(t-1)} (\mu^* - X_{2,s})^2\right).$$
Recall that Thompson Sampling draws $\theta^{(t)}$ from $\pi_t$ and then pulls the best arm for the environment $\theta^{(t)}$. Observe that under $\theta_1$ the best arm is arm 1 and under $\theta_2$ the best arm is arm 2. In other words, Thompson Sampling draws $I_t$ at random with the probabilities given by the posterior $\pi_t$. This leads to a general algorithm for the two-armed BPR setting with sub-Gaussian reward distributions that we summarize in Figure 1. The next result shows that it attains optimal performance in this setting up to a numerical constant (see Bubeck et al. [2013] for lower bounds), for any sub-Gaussian reward distribution (not necessarily Gaussian) with largest mean $\mu^*$ and gap $\Delta$.
For rounds $t \in \{1, 2\}$, select arm $I_t = t$.
For each round $t = 3, 4, \ldots$ play $I_t$ at random from $p_t$, where
$$p_t(1) = c \exp\left(-\frac{1}{2}\sum_{s=1}^{T_1(t-1)} (\mu^* - X_{1,s})^2 - \frac{1}{2}\sum_{s=1}^{T_2(t-1)} (\mu^* - \Delta - X_{2,s})^2\right),$$
$$p_t(2) = c \exp\left(-\frac{1}{2}\sum_{s=1}^{T_1(t-1)} (\mu^* - \Delta - X_{1,s})^2 - \frac{1}{2}\sum_{s=1}^{T_2(t-1)} (\mu^* - X_{2,s})^2\right),$$
and $c > 0$ is such that $p_t(1) + p_t(2) = 1$.

Figure 1: Policy inspired by Thompson Sampling for the two-armed BPR setting.
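The sampling probabilities of Figure 1 transcribe directly into code; working in log space avoids numerical underflow for large t. The data below are synthetic and the parameter values are illustrative only.

```python
import math
import random

def bpr_probs(x1, x2, mu_star, delta):
    """Return (p_t(1), p_t(2)) given past rewards x1, x2 of arms 1 and 2."""
    # Unnormalized log-posteriors of the two models theta_1 and theta_2
    # (Gaussian likelihoods with variance 1, as in Figure 1).
    l1 = -0.5 * (sum((mu_star - x) ** 2 for x in x1)
                 + sum((mu_star - delta - x) ** 2 for x in x2))
    l2 = -0.5 * (sum((mu_star - delta - x) ** 2 for x in x1)
                 + sum((mu_star - x) ** 2 for x in x2))
    m = max(l1, l2)                       # shift before exponentiating
    w1, w2 = math.exp(l1 - m), math.exp(l2 - m)
    return w1 / (w1 + w2), w2 / (w1 + w2)

rng = random.Random(0)
mu_star, delta = 0.0, 0.5
# Synthetic history: arm 1 is optimal (mean mu_star), arm 2 has mean mu_star - delta.
x1 = [rng.gauss(mu_star, 1.0) for _ in range(200)]
x2 = [rng.gauss(mu_star - delta, 1.0) for _ in range(200)]
p1, p2 = bpr_probs(x1, x2, mu_star, delta)
```

After a few hundred observations the posterior mass concentrates on the correct model, so the policy essentially stops pulling the suboptimal arm.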
Theorem 2 The policy of Figure 1 has regret bounded as $R_n \le \Delta + \frac{578}{\Delta}$, uniformly in $n$.
[Figure: two panels plotting rescaled regret $\Delta R_n$ vs time $n$, for $(\mu^* = 0, \Delta = 0.2)$ and $(\mu^* = 0, \Delta = 0.05)$, comparing Policy 1 from Bubeck et al. [2013] with the policy of Figure 1.]
Figure 2: Empirical comparison of the policy of Figure 1 and Policy 1 of Bubeck et al. [2013] on Gaussian reward distributions with variance 1.
Note that we did not try to optimize the numerical constant in the above bound. Figure 2 shows an empirical comparison of the policy of Figure 1 with Policy 1 of Bubeck et al. [2013]. Note in particular that a regret bound of order $16/\Delta$ was proved for the latter algorithm, and the (limited) numerical simulation presented here suggests that Thompson Sampling outperforms this strategy.
Proof Without loss of generality we assume that arm 1 is the optimal arm, that is $\mu_1 = \mu^*$ and $\mu_2 = \mu^* - \Delta$. Let $\widehat{\mu}_{i,s} = \frac{1}{s}\sum_{t=1}^{s} X_{i,t}$, $\widehat{\xi}_{1,s} = \mu_1 - \widehat{\mu}_{1,s}$ and $\widehat{\xi}_{2,s} = \widehat{\mu}_{2,s} - \mu_2$. Note that large (positive) values of $\widehat{\xi}_{1,s}$ or $\widehat{\xi}_{2,s}$ might mislead the algorithm into bad decisions, and we will need to control what happens in various regimes for these $\widehat{\xi}$ coefficients. We decompose the proof into three steps.

Step 1. This first step will be useful in the rest of the analysis; it shows how the probability ratio of a bad pull over a good pull evolves as a function of the $\widehat{\xi}$ coefficients introduced above. One has:
$$\frac{p_t(2)}{p_t(1)} = \exp\left(-\frac{1}{2}\sum_{s=1}^{T_1(t-1)}\left[(\mu_2 - X_{1,s})^2 - (\mu_1 - X_{1,s})^2\right] - \frac{1}{2}\sum_{s=1}^{T_2(t-1)}\left[(\mu_1 - X_{2,s})^2 - (\mu_2 - X_{2,s})^2\right]\right)$$
$$= \exp\left(-\frac{T_1(t-1)}{2}\left(\mu_2^2 - \mu_1^2 - 2(\mu_2-\mu_1)\,\widehat{\mu}_{1,T_1(t-1)}\right) - \frac{T_2(t-1)}{2}\left(\mu_1^2 - \mu_2^2 - 2(\mu_1-\mu_2)\,\widehat{\mu}_{2,T_2(t-1)}\right)\right)$$
$$= \exp\left(-\frac{T_1(t-1)}{2}\left(\Delta^2 - 2\Delta(\mu_1 - \widehat{\mu}_{1,T_1(t-1)})\right) - \frac{T_2(t-1)}{2}\left(\Delta^2 - 2\Delta(\widehat{\mu}_{2,T_2(t-1)} - \mu_2)\right)\right)$$
$$= \exp\left(-\frac{(t-1)\Delta^2}{2} + \Delta\, T_1(t-1)\,\widehat{\xi}_{1,T_1(t-1)} + \Delta\, T_2(t-1)\,\widehat{\xi}_{2,T_2(t-1)}\right),$$
where we used $T_1(t-1) + T_2(t-1) = t-1$.
Step 2. We decompose the regret Rn as follows:

Rn/ε = 1 + E Σ_{t=3}^{n} 1{It = 2}
= 1 + E Σ_{t=3}^{n} 1{ξ̂2,T2(t−1) > ε/4, It = 2}
+ E Σ_{t=3}^{n} 1{ξ̂2,T2(t−1) ≤ ε/4, ξ̂1,T1(t−1) ≤ ε/4, It = 2}
+ E Σ_{t=3}^{n} 1{ξ̂2,T2(t−1) ≤ ε/4, ξ̂1,T1(t−1) > ε/4, It = 2}.
We use Hoeffding's inequality to control the first term:

E Σ_{t=3}^{n} 1{ξ̂2,T2(t−1) > ε/4, It = 2} ≤ E Σ_{s=1}^{n} 1{ξ̂2,s > ε/4} ≤ Σ_{s=1}^{n} exp(−sε²/32) ≤ 32/ε².
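As a quick sanity check (ours, not part of the paper), the geometric sum above is dominated by 32/ε² for every n, since Σ_{s≥1} e^{−sa} = 1/(e^a − 1) ≤ 1/a with a = ε²/32:

```python
import math

def hoeffding_sum(eps, n):
    """Partial sum of exp(-s * eps**2 / 32) for s = 1..n, as in the first-term bound."""
    a = eps * eps / 32.0
    return sum(math.exp(-s * a) for s in range(1, n + 1))
```
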
For the second term, using the rewriting of Step 1 as an upper bound on pt(2), one obtains:

E Σ_{t=3}^{n} 1{ξ̂2,T2(t−1) ≤ ε/4, ξ̂1,T1(t−1) ≤ ε/4, It = 2}
= Σ_{t=3}^{n} E[ pt(2) 1{ξ̂2,T2(t−1) ≤ ε/4, ξ̂1,T1(t−1) ≤ ε/4} ]
≤ Σ_{t=3}^{n} exp(−(t − 1)ε²/4) ≤ 4/ε².
The third term is more difficult to control, and we further decompose the corresponding event as follows:

{ξ̂2,T2(t−1) ≤ ε/4, ξ̂1,T1(t−1) > ε/4, It = 2}
⊂ {ξ̂1,T1(t−1) > ε/4, T1(t−1) > t/4} ∪ {ξ̂2,T2(t−1) ≤ ε/4, It = 2, T1(t−1) ≤ t/4}.
The cumulative probability of the first event in the above decomposition is easy to control thanks to Hoeffding's maximal inequality², which states that for any m ≥ 1 and x > 0 one has

P(∃ 1 ≤ s ≤ m s.t. s ξ̂1,s ≥ x) ≤ exp(−x²/(2m)).

Indeed this implies

P(ξ̂1,T1(t−1) > ε/4, T1(t−1) > t/4) ≤ P(∃ 1 ≤ s ≤ t s.t. s ξ̂1,s > εt/16) ≤ exp(−tε²/512),

and thus

E Σ_{t=3}^{n} 1{ξ̂1,T1(t−1) > ε/4, T1(t−1) > t/4} ≤ 512/ε².
It only remains to control the term

E Σ_{t=3}^{n} 1{ξ̂2,T2(t−1) ≤ ε/4, It = 2, T1(t−1) ≤ t/4}
= Σ_{t=3}^{n} E[ pt(2) 1{ξ̂2,T2(t−1) ≤ ε/4, T1(t−1) ≤ t/4} ]
≤ Σ_{t=3}^{n} E exp( −(t − 1)ε²/4 + ε max_{1 ≤ s ≤ t/4} s ξ̂1,s ),

where the last inequality follows from Step 1. The last step is devoted to bounding this last term from above.
Step 3. By integrating the deviations and using again Hoeffding's maximal inequality one obtains

E exp( ε max_{1 ≤ s ≤ t/4} s ξ̂1,s ) ≤ 1 + ∫_{1}^{+∞} P( max_{1 ≤ s ≤ t/4} s ξ̂1,s ≥ (log x)/ε ) dx ≤ 1 + ∫_{1}^{+∞} exp( −2(log x)²/(ε²t) ) dx.
Now, straightforward computation gives

Σ_{t=3}^{n} exp(−(t − 1)ε²/4) ( 1 + ∫_{1}^{+∞} exp(−2(log x)²/(ε²t)) dx )
≤ Σ_{t=3}^{n} ( exp(−(t − 1)ε²/4) + √(πε²t/2) exp(−(t − 2)ε²/8) ),

and comparing these sums with the corresponding integrals,

∫_{0}^{+∞} exp(−tε²/4) dt + ∫_{0}^{+∞} √(πε²t/2) exp(−tε²/8) dt = 4/ε² + (16√π/ε²) ∫_{0}^{+∞} √u exp(−u) du ≤ 30/ε²,

which concludes the proof by putting this together with the results of the previous steps. □
² It is an easy exercise to verify that Azuma–Hoeffding holds for martingale differences with sub-Gaussian increments, which implies Hoeffding's maximal inequality for sub-Gaussian distributions.
4 Optimal strategy for the BPR setting inspired by Thompson Sampling
In this section we consider the general BPR setting. That is, the reward distributions are sub-Gaussian (they satisfy E e^{λ(X−μ)} ≤ e^{λ²/2} for all λ ∈ ℝ), one reward distribution has mean μ*, and all the other means are smaller than μ* − ε, where μ* and ε are known values.
Similarly to the previous section we assume that the reward distributions are Gaussian with variance 1 for the derivation of the Thompson Sampling strategy (but we do not make this assumption for the analysis of the resulting algorithm). Then the set of possible parameters is described as follows:

Θ = ∪_{i=1}^{K} Θi, where Θi = {θ ∈ ℝ^K s.t. θi = μ* and θj ≤ μ* − ε for all j ≠ i}.
Assuming a uniform prior over the index of the best arm, and a prior ν over the mean of a suboptimal arm, one obtains by Bayes' rule that the probability density function of the posterior is given by:

dπt(θ) ∝ exp( −(1/2) Σ_{j=1}^{K} Σ_{s=1}^{Tj(t−1)} (Xj,s − θj)² ) ∏_{j=1, j≠i(θ)}^{K} dν(θj),

where i(θ) denotes the index such that θ ∈ Θi(θ).
Now remark that with Thompson Sampling arm i is played at time t if and only if θ(t) ∈ Θi. In other words, It is played at random from probability pt, where

pt(i) = πt(Θi)
∝ exp( −(1/2) Σ_{s=1}^{Ti(t−1)} (Xi,s − μ*)² ) ∏_{j≠i} ∫_{−∞}^{μ*−ε} exp( −(1/2) Σ_{s=1}^{Tj(t−1)} (Xj,s − v)² ) dν(v)
∝ exp( −(1/2) Σ_{s=1}^{Ti(t−1)} (Xi,s − μ*)² ) / ∫_{−∞}^{μ*−ε} exp( −(1/2) Σ_{s=1}^{Ti(t−1)} (Xi,s − v)² ) dν(v).
Taking inspiration from the above calculation we consider the following policy, where ν is the Lebesgue measure and we assume a slightly larger value for the variance (this is necessary for the proof).

For rounds t ∈ [K], select arm It = t. For each round t = K + 1, K + 2, . . ., play It at random from pt, where

pt(i) = c · exp( −(1/3) Σ_{s=1}^{Ti(t−1)} (Xi,s − μ*)² ) / ∫_{−∞}^{μ*−ε} exp( −(1/3) Σ_{s=1}^{Ti(t−1)} (Xi,s − v)² ) dv,

and c > 0 is such that Σ_{i=1}^{K} pt(i) = 1.

Figure 3: Policy inspired by Thompson Sampling for the BPR setting.
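Since ν is the Lebesgue measure, the integral in the denominator of pt(i) is a truncated Gaussian integral with a closed form in terms of the standard normal CDF. The following sketch is ours — it uses the decomposition Σ_s (Xi,s − v)² = Ti (v − x̄)² + Σ_s (Xi,s − x̄)², so the common factor cancels between numerator and denominator — and assumes every arm has been pulled at least once:

```python
import math

def norm_cdf(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def log_weight(rewards, mu_star, eps):
    """Unnormalized log p_t(i) for one arm with a nonempty reward history."""
    T = len(rewards)
    xbar = sum(rewards) / T
    # Numerator (up to the cancelled factor): exp(-T * (mu* - xbar)**2 / 3).
    log_num = -T * (mu_star - xbar) ** 2 / 3.0
    # Denominator: integral over v <= mu* - eps of exp(-T * (v - xbar)**2 / 3),
    # i.e. sqrt(3*pi/T) * Phi((mu* - eps - xbar) * sqrt(2*T/3)).
    z = (mu_star - eps - xbar) * math.sqrt(2.0 * T / 3.0)
    return log_num - 0.5 * math.log(3.0 * math.pi / T) - math.log(norm_cdf(z))

def policy_probs(histories, mu_star, eps):
    """p_t over all arms, normalized to sum to one."""
    logs = [log_weight(h, mu_star, eps) for h in histories]
    mx = max(logs)
    ws = [math.exp(v - mx) for v in logs]
    s = sum(ws)
    return [w / s for w in ws]
```

An arm whose empirical mean sits close to μ* pays no penalty in the numerator while the mass of "this arm is suboptimal" in its denominator is small, so it receives the largest probability, as intended.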
The following theorem shows that this policy attains the best known performance for the BPR setting, shaving off a log-log term in the regret bound of the BPR policy.

Theorem 3 The policy of Figure 3 has regret bounded as Rn ≤ Σ_{i : Δi > 0} ( Δi + (80 + log(Δi/ε))/Δi ), uniformly in n.
The proof of this result is fairly technical and it is deferred to the supplementary material.
References
S. Agrawal and N. Goyal. Analysis of Thompson sampling for the multi-armed bandit problem. In Proceedings of the 25th Annual Conference on Learning Theory (COLT), 2012a.
S. Agrawal and N. Goyal. Further optimal regret bounds for Thompson sampling, 2012b. arXiv:1209.3353.
J.-Y. Audibert and S. Bubeck. Minimax policies for adversarial and stochastic bandits. In Proceedings of the 22nd Annual Conference on Learning Theory (COLT), 2009.
J.-Y. Audibert and S. Bubeck. Regret bounds and minimax policies under partial monitoring. Journal of Machine Learning Research, 11:2635–2686, 2010.
P. Auer, N. Cesa-Bianchi, and P. Fischer. Finite-time analysis of the multiarmed bandit problem. Machine Learning Journal, 47(2-3):235–256, 2002.
S. Bubeck and N. Cesa-Bianchi. Regret analysis of stochastic and nonstochastic multi-armed bandit problems. Foundations and Trends in Machine Learning, 5(1):1–122, 2012.
S. Bubeck, R. Munos, and G. Stoltz. Pure exploration in multi-armed bandits problems. In Proceedings of the 20th International Conference on Algorithmic Learning Theory (ALT), 2009.
S. Bubeck, V. Perchet, and P. Rigollet. Bounded regret in stochastic multi-armed bandits. In Proceedings of the 26th Annual Conference on Learning Theory (COLT), 2013.
O. Chapelle and L. Li. An empirical evaluation of Thompson sampling. In Advances in Neural Information Processing Systems (NIPS), 2011.
J. C. Gittins. Bandit processes and dynamic allocation indices. Journal of the Royal Statistical Society, Series B, 41:148–177, 1979.
E. Kaufmann, N. Korda, and R. Munos. Thompson sampling: an asymptotically optimal finite-time analysis. In Proceedings of the 23rd International Conference on Algorithmic Learning Theory (ALT), 2012.
H. Robbins. Some aspects of the sequential design of experiments. Bulletin of the American Mathematical Society, 58:527–535, 1952.
D. Russo and B. Van Roy. Learning to optimize via posterior sampling, 2013. arXiv:1301.2609.
W. Thompson. On the likelihood that one unknown probability exceeds another in view of the evidence of two samples. Biometrika, 25:285–294, 1933.
Two-Target Algorithms for Infinite-Armed Bandits
with Bernoulli Rewards
Thomas Bonald∗
Department of Networking and Computer Science
Telecom ParisTech
Paris, France
[email protected]

Alexandre Proutière∗†
Automatic Control Department
KTH
Stockholm, Sweden
[email protected]
Abstract
We consider an infinite-armed bandit problem with Bernoulli rewards. The mean
rewards are independent, uniformly distributed over [0, 1]. Rewards 0 and 1 are
referred to as a success and a failure, respectively. We propose a novel algorithm
where the decision to exploit any arm is based on two successive targets, namely,
the total number of successes until the first failure and until the first m failures,
respectively, where m is a fixed parameter. This two-target algorithm achieves a
long-term average regret in √(2n) for a large parameter m and a known time horizon n. This regret is optimal and strictly less than the regret achieved by the best
known algorithms, which is in 2√n. The results are extended to any mean-reward
distribution whose support contains 1 and to unknown time horizons. Numerical
experiments show the performance of the algorithm for finite time horizons.
1 Introduction
Motivation. While classical multi-armed bandit problems assume a finite number of arms [9],
many practical situations involve a large, possibly infinite set of options for the player. This is the
case for instance of on-line advertisement and content recommendation, where the objective is to
propose the most appropriate categories of items to each user in a very large catalogue. In such
situations, it is usually not possible to explore all options, a constraint that is best represented by a
bandit problem with an infinite number of arms. Moreover, even when the set of options is limited,
the time horizon may be too short in practice to enable the full exploration of these options. Unlike
classical algorithms like UCB [10, 1], which rely on a initial phase where all arms are sampled
once, algorithms for infinite-armed bandits have an intrinsic stopping rule in the number of arms to
explore. We believe that this provides useful insights into the design of efficient algorithms for usual
finite-armed bandits when the time horizon is relatively short.
Overview of the results. We consider a stochastic infinite-armed bandit with Bernoulli rewards, the mean reward of each arm having a uniform distribution over [0, 1]. This model is representative of a number of practical situations, such as content recommendation systems with like/dislike feedback and without any prior information on the user preferences. We propose a two-target algorithm based on some fixed parameter m that achieves a long-term average regret in √(2n) for large m and a large known time horizon n. We prove that this regret is optimal. The anytime version of this algorithm achieves a long-term average regret in 2√n for unknown time horizon n, which we conjecture to be also optimal. The results are extended to any mean-reward distribution whose support contains 1. Specifically, if the probability that the mean reward exceeds u is equivalent to α(1 − u)^β when u → 1−, the two-target algorithm achieves a long-term average regret in C(α, β) n^{β/(β+1)}, with some explicit constant C(α, β) that depends on whether the time horizon is known or not. This regret is provably optimal when the time horizon is known. The precise statements and proofs of these more general results are given in the appendix.

∗ The authors are members of the LINCS, Paris, France. See www.lincs.fr.
† Alexandre Proutière is also affiliated with INRIA, Paris-Rocquencourt, France.
Related work. The stochastic infinite-armed bandit problem has first been considered in a general
setting by Mallows and Robbins [12] and then in the particular case of Bernoulli rewards by Herschkorn, Peköz and Ross [6]. The proposed algorithms are first-order optimal in the sense that they minimize the ratio Rn/n for large n, where Rn is the regret after n time steps. In the considered setting of Bernoulli rewards with mean rewards uniformly distributed over [0, 1], this means that the ratio Rn/n tends to 0 almost surely. We are interested in second-order optimality, namely, in minimizing the equivalent of Rn for large n. This issue is addressed by Berry et al. [2], who propose various algorithms achieving a long-term average regret in 2√n, conjecture that this regret is optimal, and provide a lower bound in √(2n). Our algorithm achieves a regret that is arbitrarily close to √(2n), which invalidates the conjecture. We also provide a proof of the lower bound in √(2n), since the proof given in [2, Theorem 3] relies on the incorrect argument that the number of explored arms and the mean rewards of these arms are independent random variables¹; the extension to any mean-reward distribution [2, Theorem 11] is based on the same erroneous argument².
The algorithms proposed by Berry et al. [2] and applied in [11, 4, 5, 7] to various mean-reward distributions are variants of the 1-failure strategy where each arm is played until the first failure, called a run. For instance, the non-recalling √n-run policy consists in exploiting the first arm giving a run larger than √n. For a uniform mean-reward distribution over [0, 1], the average number of explored arms is √n and the selected arm is exploited for the equivalent of n time steps with an expected failure rate of 1/√n, yielding the regret of 2√n. We introduce a second target to improve the expected failure rate of the selected arm, at the expense of a slightly more expensive exploration phase. Specifically, we show that it is optimal to explore √(n/2) arms on average, resulting in the expected failure rate 1/√(2n) of the exploited arm, for the equivalent of n time steps, hence the regret of √(2n). For unknown horizon times, anytime versions of the algorithms of Berry et al. [2] are proposed by Teytaud, Gelly and Sebag in [13] and proved to achieve a regret in O(√n). We show that the anytime version of our algorithm achieves a regret arbitrarily close to 2√n, which we conjecture to be optimal.
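For intuition, the single-target baseline can be simulated in a few lines. This sketch is ours (fresh arms drawn with uniform means; the commitment rule simplified to the target ⌈√n⌉):

```python
import math
import random

def sqrt_n_run_policy(n, rng):
    """Non-recalling sqrt(n)-run policy: play each fresh arm until its first
    failure and commit to the first arm whose opening run reaches the target."""
    target = math.ceil(math.sqrt(n))
    t, reward = 0, 0
    while t < n:
        theta = rng.random()             # mean of a fresh arm, uniform on [0, 1]
        run = 0
        while t < n and run < target:
            t += 1
            if rng.random() < theta:
                reward += 1
                run += 1
            else:
                break                    # first failure: discard this arm
        else:
            while t < n:                 # target reached: exploit until time n
                t += 1
                reward += rng.random() < theta
    return n - reward                    # regret against the ideal reward n
```

Averaged over runs, the regret concentrates around 2√n, which is precisely the quantity the second target of our algorithm improves to √(2n).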
Our results extend to any mean-reward distribution whose support contains 1, the regret depending on the characteristics of this distribution around 1. This problem has been considered in the more general setting of bounded rewards by Wang, Audibert and Munos [15]. When the time horizon is known, their algorithms consist in exploring a pre-defined set of K arms, which depends on the parameter β mentioned above, using variants of the UCB policy [10, 1]. In the present case of Bernoulli rewards and mean-reward distributions whose support contains 1, the corresponding regret is in n^{β/(β+1)}, up to logarithmic terms coming from the exploration of the K arms, as in usual finite-armed bandits algorithms [9]. The nature of our algorithm is very different in that it is based on a stopping rule in the exploration phase that depends on the observed rewards. This does not only remove the logarithmic terms in the regret but also achieves the optimal constant.
2 Model
We consider a stochastic multi-armed bandit with an infinite number of arms. For any k = 1, 2, . . .,
the rewards of arm k are Bernoulli with unknown parameter θk. We refer to rewards 0 and 1 as a failure and a success, respectively, and to a run as a consecutive sequence of successes followed by a failure. The mean rewards θ1, θ2, . . . are themselves random, uniformly distributed over [0, 1].
¹ Specifically, it is assumed that for any policy, the mean rewards of the explored arms have a uniform distribution over [0, 1], independently of the number of explored arms. This is incorrect. For the 1-failure policy for instance, given that only one arm has been explored until time n, the mean reward of this arm has a beta distribution with parameters 1, n.
² This lower bound is 4√(n/3) for a beta distribution with parameters 1/2, 1, see [11], while our algorithm achieves a regret arbitrarily close to 2√n in this case, since C(α, β) = 2 for α = 1/2 and β = 1, see the appendix. Thus the statement of [2, Theorem 11] is false.
At any time t = 1, 2, . . ., we select some arm It and receive the corresponding reward Xt, which is a Bernoulli random variable with parameter θ_{It}. We take I1 = 1 by convention. At any time t = 2, 3, . . ., the arm selection only depends on previous arm selections and rewards; formally, the random variable It is F_{t−1}-measurable, where Ft denotes the σ-field generated by the set {I1, X1, . . . , It, Xt}. Let Kt be the number of arms selected until time t. Without any loss of generality, we assume that {I1, . . . , It} = {1, 2, . . . , Kt} for all t = 1, 2, . . ., i.e., new arms are selected sequentially. We also assume that I_{t+1} = It whenever Xt = 1: if the selection of arm It gives a success at time t, the same arm is selected at time t + 1.

The objective is to maximize the cumulative reward or, equivalently, to minimize the regret defined by Rn = n − Σ_{t=1}^{n} Xt. Specifically, we focus on the average regret E(Rn), where the expectation is taken over all random variables, including the sequence of mean rewards θ1, θ2, . . .. The time horizon n may be known or unknown.
3
Known time horizon
3.1
Two-target algorithm
The two-target algorithm consists in exploring new arms until two successive targets `1 and `2 are
reached, in which case the current arm is exploited until the time horizon n. The first target aims
at discarding ?bad? arms while the second aims at selecting a ?good? arm. Specifically, using the
names of the variables indicated in the pseudo-code below, if the length L of the first run of the
current arm I is less than `1 , this arm is discarded and a new arm is selected; otherwise, arm I is
pulled for m ? 1 additional runs and exploited until time n if the total length L of the m runs is at
least `2 , where m ? 2 is a fixed parameter
of the algorithm.
? 1 below that,
p
p We prove in Proposition
for large m, the target values3 `1 = b 3 n2 c and `2 = bm n2 c provide a regret in 2n.
Algorithm 1: Two-target algorithm with known time horizon n.

Parameters: m, n
Function Explore:
    I ← I + 1, L ← 0, M ← 0
Algorithm:
ℓ1 = ⌊∛(n/2)⌋, ℓ2 = ⌊m√(n/2)⌋
I ← 0
Explore
Exploit ← false
forall t = 1, 2, . . . , n do
    Get reward X from arm I
    if not Exploit then
        if X = 1 then
            L ← L + 1
        else
            M ← M + 1
            if M = 1 then
                if L < ℓ1 then Explore
            else if M = m then
                if L < ℓ2 then Explore
                else Exploit ← true

³ The first target could actually be any function ℓ1 of the time horizon n such that ℓ1 → +∞ and ℓ1/√n → 0 when n → +∞. Only the second target is critical.
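Algorithm 1 can be transcribed directly; the sketch below is ours and bundles the Bernoulli environment (uniform mean rewards) with the policy, returning the realized regret n minus the collected reward for one sample path:

```python
import math
import random

def two_target_known_horizon(n, m, rng):
    """One run of Algorithm 1 on arms with i.i.d. Uniform[0,1] mean rewards."""
    l1 = int((n / 2) ** (1 / 3))         # first target,  floor(cbrt(n/2))
    l2 = int(m * math.sqrt(n / 2))       # second target, floor(m*sqrt(n/2))
    theta = rng.random()                 # hidden mean of the current arm
    L = M = 0
    exploit = False
    reward = 0
    for _ in range(n):
        x = rng.random() < theta         # Bernoulli reward from the current arm
        reward += x
        if exploit:
            continue
        if x:
            L += 1
        else:
            M += 1
            if (M == 1 and L < l1) or (M == m and L < l2):
                theta = rng.random()     # discard: explore a fresh arm
                L = M = 0
            elif M == m:
                exploit = True           # both targets reached: commit
    return n - reward
```
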
3
3.2 Regret analysis
Proposition 1 The two-target algorithm with targets ℓ1 = ⌊∛(n/2)⌋ and ℓ2 = ⌊m√(n/2)⌋ satisfies, for all n ≥ m²/2:

E(Rn) ≤ m + ( (ℓ2 − m + 2)/(ℓ2 − ℓ1 − m + 2) )^{m−1} ( ((ℓ2 + 1)/m) (1 + 2(m + 1)/(ℓ1 + 1)) + n (m + 1)/(2(ℓ2 + 2)) ).

In particular,

lim sup_{n→+∞} E(Rn)/√n ≤ √2 + 1/(m√2).
Proof. Let U1 = 1 if arm 1 is used until time n and U1 = 0 otherwise. Denote by M1 the total number of 0's received from arm 1. We have:

E(Rn) ≤ P(U1 = 0) (E(M1 | U1 = 0) + E(Rn)) + P(U1 = 1) (m + n E(1 − θ1 | U1 = 1)),

so that:

E(Rn) ≤ E(M1 | U1 = 0)/P(U1 = 1) + m + n E(1 − θ1 | U1 = 1).   (1)

Let Nt be the number of 0's received from arm 1 until time t when this arm is played until time t. Note that n ≥ m²/2 implies n ≥ ℓ2. Since P(N_ℓ1 = 0 | θ1 = u) = u^ℓ1, the probability that the first target is achieved by arm 1 is given by:

P(N_ℓ1 = 0) = ∫₀¹ u^ℓ1 du = 1/(ℓ1 + 1).

Similarly,

P(N_{ℓ2−ℓ1} < m | θ1 = u) = Σ_{j=0}^{m−1} C(ℓ2 − ℓ1, j) u^{ℓ2−ℓ1−j} (1 − u)^j,

so that the probability that arm 1 is used until time n is given by:

P(U1 = 1) = ∫₀¹ P(N_ℓ1 = 0 | θ1 = u) P(N_{ℓ2−ℓ1} < m | θ1 = u) du = Σ_{j=0}^{m−1} (ℓ2 − ℓ1)! (ℓ2 − j)! / ((ℓ2 − ℓ1 − j)! (ℓ2 + 1)!).

We deduce:

(m/(ℓ2 + 1)) ( (ℓ2 − ℓ1 − m + 2)/(ℓ2 − m + 2) )^{m−1} ≤ P(U1 = 1) ≤ m/(ℓ2 + 1).   (2)

Moreover,

E(M1 | U1 = 0) = 1 + (m − 1) P(N_ℓ1 = 0 | U1 = 0) ≤ 1 + (m − 1) P(N_ℓ1 = 0)/P(U1 = 0) ≤ 1 + 2(m + 1)/(ℓ1 + 1),

where the last inequality follows from (2) and the fact that ℓ2 ≥ m²/2.

It remains to calculate E(1 − θ1 | U1 = 1). Since:

P(U1 = 1 | θ1 = u) = Σ_{j=0}^{m−1} C(ℓ2 − ℓ1, j) u^{ℓ2−j} (1 − u)^j,

we deduce:

E(1 − θ1 | U1 = 1) = (1/P(U1 = 1)) ∫₀¹ Σ_{j=0}^{m−1} C(ℓ2 − ℓ1, j) u^{ℓ2−j} (1 − u)^{j+1} du
= (1/P(U1 = 1)) Σ_{j=0}^{m−1} ( (ℓ2 − ℓ1)! (ℓ2 − j)! / ((ℓ2 − ℓ1 − j)! (ℓ2 + 2)!) ) (j + 1)
≤ (1/P(U1 = 1)) · m(m + 1)/(2(ℓ2 + 1)(ℓ2 + 2)).

The proof then follows from (1) and (2). □
3.3 Lower bound
The following result shows that the two-target algorithm is asymptotically optimal (for large m).
Theorem 1 For any algorithm with known time horizon n,

lim inf_{n→+∞} E(Rn)/√n ≥ √2.
Proof. We present the main ideas of the proof. The details are given in the appendix. Assume an oracle reveals the parameter of each arm after the first failure of this arm. With this information, the optimal policy explores a random number of arms, each until the first failure, then plays only one of these arms until time n. Let θ be the parameter of the best known arm at time t. Since the probability that any new arm is better than this arm is 1 − θ, the mean cost of exploration to find a better arm is 1/(1 − θ). The corresponding mean reward has a uniform distribution over [θ, 1], so that the mean gain of exploitation is less than (n − t)(1 − θ)/2 (it is not equal to this quantity due to the time spent in exploration). Thus if 1 − θ < √(2/(n − t)), it is preferable not to explore new arms and to play the best known arm, with mean reward θ, until time n. A fortiori, the best known arm is played until time n whenever its parameter is larger than 1 − √(2/n). We denote by An the first arm whose parameter is larger than 1 − √(2/n). We have Kn ≤ An (the optimal policy cannot explore more than An arms) and

E(An) = √(n/2).

The parameter θ_{An} of arm An is uniformly distributed over [1 − √(2/n), 1], so that

E(θ_{An}) = 1 − 1/√(2n).   (3)

For all k = 1, 2, . . ., let L1(k) be the length of the first run of arm k. We have:

E(L1(1) + · · · + L1(An − 1)) = (E(An) − 1) E(L1(1) | θ1 ≤ 1 − √(2/n)) ≤ (√(n/2) − 1) ln(√(n/2)) / (1 − √(2/n)),   (4)

using the fact that:

E(L1(1) | θ1 ≤ 1 − √(2/n)) = (1/(1 − √(2/n))) ∫₀^{1−√(2/n)} u/(1 − u) du.

In particular,

lim_{n→+∞} (1/n) E(L1(1) + · · · + L1(An − 1)) = 0   (5)

and

lim_{n→+∞} P(L1(1) + · · · + L1(An − 1) ≤ n^{4/5}) = 1.

To conclude, we write:

E(Rn) ≥ E(Kn) + E((n − L1(1) − · · · − L1(An − 1))(1 − θ_{An})).

Observe that, on the event {L1(1) + · · · + L1(An − 1) ≤ n^{4/5}}, the number of explored arms satisfies Kn ≥ A′n, where A′n denotes the first arm whose parameter is larger than 1 − √(2/(n − n^{4/5})). Since P(L1(1) + · · · + L1(An − 1) ≤ n^{4/5}) → 1 and E(A′n) = √((n − n^{4/5})/2), we deduce that:

lim inf_{n→+∞} E(Kn)/√n ≥ 1/√2.

By the independence of θ_{An} and L1(1), . . . , L1(An − 1),

(1/√n) E((n − L1(1) − · · · − L1(An − 1))(1 − θ_{An})) = (1/√n) (n − E(L1(1) + · · · + L1(An − 1))) (1 − E(θ_{An})),

which tends to 1/√2 in view of (3) and (5). The announced bound follows. □
Anytime version of the algorithm
When the time horizon is unknown, the targets depend on the current time t, say `1 (t) and `2 (t).
Now any arm that is exploited may be eventually discarded, in the sense that a new arm is explored.
This happens whenever either L1 < `1 (t) or L2 < `2 (t), where L1 and L2 are the respective lengths
of the first run and the first m runs of this arm. Thus, unlike the previous version of the algorithm
which consists in an exploration phase followed by an exploitation phase, the anytime version of the
algorithm continuously switches between exploration
? and exploitation.
? We prove in Proposition 2
below that, for large m, the target ?
values `1 (t) = b 3 tc and `2 (t) = bm tc given in the pseudo-code
achieve an asymptotic regret in 2 n.
Algorithm 2: Two-target algorithm with unknown time horizon.

Parameter: m
Function Explore:
    I ← I + 1, L ← 0, M ← 0
Algorithm:
I ← 0
Explore
Exploit ← false
forall t = 1, 2, . . . do
    Get reward X from arm I
    ℓ1 = ⌊∛t⌋, ℓ2 = ⌊m√t⌋
    if Exploit then
        if L1 < ℓ1 or L2 < ℓ2 then
            Explore
            Exploit ← false
    else
        if X = 1 then
            L ← L + 1
        else
            M ← M + 1
            if M = 1 then
                if L < ℓ1 then Explore
                else L1 ← L
            else if M = m then
                if L < ℓ2 then Explore
                else
                    L2 ← L
                    Exploit ← true
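As with Algorithm 1, the anytime variant can be transcribed directly (our sketch; uniform mean rewards, one sample path, targets recomputed at every step):

```python
import math
import random

def two_target_anytime(n, m, rng):
    """Run Algorithm 2 for n steps; returns n minus the collected reward."""
    theta = rng.random()                 # hidden mean of the current arm
    L = M = L1 = L2 = 0
    exploit = False
    reward = 0
    for t in range(1, n + 1):
        x = rng.random() < theta
        reward += x
        l1 = int(t ** (1 / 3))           # floor(cbrt(t)), up to float rounding
        l2 = int(m * math.sqrt(t))       # floor(m * sqrt(t))
        if exploit:
            if L1 < l1 or L2 < l2:       # growing targets no longer met
                theta = rng.random()
                L = M = 0
                exploit = False
        elif x:
            L += 1
        else:
            M += 1
            if M == 1:
                if L < l1:
                    theta, L, M = rng.random(), 0, 0
                else:
                    L1 = L
            elif M == m:
                if L < l2:
                    theta, L, M = rng.random(), 0, 0
                else:
                    L2 = L
                    exploit = True
    return n - reward
```
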
4.2 Regret analysis
Proposition 2 The two-target algorithm with time-dependent targets ℓ1(t) = ⌊∛t⌋ and ℓ2(t) = ⌊m√t⌋ satisfies:

lim sup_{n→+∞} E(Rn)/√n ≤ 2 + 1/m.
Proof. For all k = 1, 2, . . ., denote by L1(k) and L2(k) the respective lengths of the first run and of the first m runs of arm k when this arm is played continuously. Since arm k cannot be selected before time k, the regret at time n satisfies:

Rn ≤ Kn + m Σ_{k=1}^{Kn} 1{L1(k) > ℓ1(k)} + Σ_{t=1}^{n} (1 − Xt) 1{L2(It) > ℓ2(t)}.

First observe that, since the target functions ℓ1(t) and ℓ2(t) are non-decreasing, Kn is less than or equal to K′n, the number of arms selected by a two-target policy with known time horizon n and fixed targets ℓ1(n) and ℓ2(n). In this scheme, let U′1 = 1 if arm 1 is used until time n and U′1 = 0 otherwise. It then follows from (2) that P(U′1 = 1) ∼ 1/√n and E(Kn) ≤ E(K′n) ∼ √n when n → +∞.

Now,

E( Σ_{k=1}^{Kn} 1{L1(k) > ℓ1(k)} ) = Σ_{k=1}^{∞} P(L1(k) > ℓ1(k), Kn ≥ k)
= Σ_{k=1}^{∞} P(L1(k) > ℓ1(k)) P(Kn ≥ k | L1(k) > ℓ1(k))
≤ Σ_{k=1}^{∞} P(L1(k) > ℓ1(k)) P(Kn ≥ k),

where the inequality follows from the fact that, for any arm k and all u ∈ [0, 1], P(θk ≤ u | L1(k) > ℓ1(k)) ≤ P(θk ≤ u) and P(Kn ≥ k | θk ≤ u) ≥ P(Kn ≥ k), the random variables L1(1), L1(2), . . . being i.i.d. and the sequence ℓ1(1), ℓ1(2), . . . non-decreasing. Since E(Kn) ≤ E(K′n) ∼ √n and P(L1(k) > ℓ1(k)) = 1/(ℓ1(k) + 1) ∼ k^{−1/3} when k → +∞, we deduce:

lim_{n→+∞} (1/√n) E( Σ_{k=1}^{Kn} 1{L1(k) > ℓ1(k)} ) = 0.

Finally,

E( (1 − Xt) 1{L2(It) > ℓ2(t)} ) ≤ E( 1 − Xt | L2(It) > ℓ2(t) ) ∼ ((m + 1)/m) (1/(2√t)) when t → +∞,

so that

lim sup_{n→+∞} (1/√n) Σ_{t=1}^{n} E( (1 − Xt) 1{L2(It) > ℓ2(t)} ) ≤ ((m + 1)/m) lim_{n→+∞} (1/n) Σ_{t=1}^{n} (1/2)√(n/t) = ((m + 1)/m) ∫₀¹ du/(2√u) = (m + 1)/m.

Combining the previous results yields:

lim sup_{n→+∞} E(Rn)/√n ≤ 2 + 1/m. □
4.3 Lower bound
?
We believe that if E(Rn )/ n tends to some limit, then this limit is at least 2. To support this
conjecture, consider an oracle that reveals the parameter of each arm after the first failure of this arm,
as in the proof of Theorem 1. With this information, an optimal policy exploits an arm whenever its
1
parameter is larger than some increasing function ??t of time t. Assume that 1 ? ??t ? c?
for some
t
c > 0 when t ? +?. Then proceeding as in the proof of Theorem 1, we get:
r
Z
n
1X 1 n
1
E(Rn )
1 1 du
? = c + ? 2.
? c + lim
lim inf ?
=c+
n?+? n
n?+?
2c t
c 0 2 u
c
n
t=1
5 Numerical results
Figure 1 gives the expected failure rate E(Rn)/n with respect to the time horizon n, that is supposed to be known. The results are derived from the simulation of 10⁵ independent samples and shown with 95% confidence intervals. The mean rewards have (a) a uniform distribution or (b) a Beta(1,2) distribution, corresponding to the probability density function u ↦ 2(1 − u). The single-target algorithm corresponds to the run policy of Berry et al. [2] with the asymptotically optimal target values √n and ∛(2n), respectively. For the two-target algorithm, we take m = 3 and the target values given in Proposition 1 and Proposition 3 (in the appendix). The results are compared with the respective asymptotic lower bounds √(2/n) and ∛(3/n). The performance gains of the two-target algorithm turn out to be negligible for the uniform distribution but substantial for the Beta(1,2) distribution, where "good" arms are less frequent.
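The Beta(1,2) panel can be reproduced by swapping the uniform draw of each new arm's mean for an inverse-CDF sampler; a minimal sketch (ours — the exact experimental protocol of the paper may differ):

```python
import random

def sample_beta12_mean(rng):
    """Inverse-CDF sample from the Beta(1,2) density u -> 2(1 - u):
    if U ~ Uniform[0,1], then 1 - sqrt(1 - U) has CDF 1 - (1 - x)**2."""
    return 1.0 - (1.0 - rng.random()) ** 0.5
```

Using this in place of the Uniform[0,1] draw whenever a fresh arm is explored yields the setting of panel (b), where means close to 1 are rarer and the gap between the single- and two-target algorithms widens.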
[Figure 1: two panels, (a) uniform mean-reward distribution and (b) Beta(1,2) mean-reward distribution, each plotting the expected failure rate against the time horizon (10 to 10000, log scale) for the asymptotic lower bound, the single-target algorithm and the two-target algorithm.]

Figure 1: Expected failure rate E(Rn)/n with respect to the time horizon n.
6 Conclusion
The proposed algorithm uses two levels of sampling in the exploration phase: the first eliminates "bad" arms while the second selects "good" arms. To our knowledge, this is the first algorithm that achieves the optimal regrets in √(2n) and 2√n for known and unknown horizon times, respectively. Future work will be devoted to the proof of the lower bound in the case of unknown horizon time. We also plan to study various extensions of the present work, including mean-reward distributions whose support does not contain 1 and distribution-free algorithms. Finally, we would like to compare the performance of our algorithm for finite-armed bandits with those of the best known algorithms like KL-UCB [10, 3] and Thompson sampling [14, 8] over short time horizons, where the full exploration of the arms is generally not optimal.
Acknowledgments
The authors acknowledge the support of the European Research Council, of the French ANR (GAP
project), of the Swedish Research Council and of the Swedish SSF.
References
[1] Peter Auer, Nicolò Cesa-Bianchi, and Paul Fischer. Finite-time analysis of the multiarmed bandit problem. Mach. Learn., 47(2-3):235-256, May 2002.
[2] Donald A. Berry, Robert W. Chen, Alan Zame, David C. Heath, and Larry A. Shepp. Bandit problems with infinitely many arms. Annals of Statistics, 25(5):2103-2116, 1997.
[3] Olivier Cappé, Aurélien Garivier, Odalric-Ambrym Maillard, Rémi Munos, and Gilles Stoltz. Kullback-Leibler upper confidence bounds for optimal sequential allocation. To appear in Annals of Statistics, 2013.
[4] Kung-Yu Chen and Chien-Tai Lin. A note on strategies for bandit problems with infinitely many arms. Metrika, 59(2):193-203, 2004.
[5] Kung-Yu Chen and Chien-Tai Lin. A note on infinite-armed Bernoulli bandit problems with generalized beta prior distributions. Statistical Papers, 46(1):129-140, 2005.
[6] Stephen J. Herschkorn, Erol Peköz, and Sheldon M. Ross. Policies without memory for the infinite-armed Bernoulli bandit under the average-reward criterion. Probability in the Engineering and Informational Sciences, 10:21-28, 1996.
[7] Ying-Chao Hung. Optimal Bayesian strategies for the infinite-armed Bernoulli bandit. Journal of Statistical Planning and Inference, 142(1):86-94, 2012.
[8] Emilie Kaufmann, Nathaniel Korda, and Rémi Munos. Thompson sampling: An asymptotically optimal finite-time analysis. In Algorithmic Learning Theory, pages 199-213. Springer, 2012.
[9] Tze L. Lai and Herbert Robbins. Asymptotically efficient adaptive allocation rules. Advances in Applied Mathematics, 6(1):4-22, 1985.
[10] Tze Leung Lai. Adaptive treatment allocation and the multi-armed bandit problem. The Annals of Statistics, pages 1091-1114, 1987.
[11] Chien-Tai Lin and C.J. Shiau. Some optimal strategies for bandit problems with beta prior distributions. Annals of the Institute of Statistical Mathematics, 52(2):397-405, 2000.
[12] C.L. Mallows and Herbert Robbins. Some problems of optimal sampling strategy. Journal of Mathematical Analysis and Applications, 8(1):90-103, 1964.
[13] Olivier Teytaud, Sylvain Gelly, and Michèle Sebag. Anytime many-armed bandits. In CAP07, 2007.
[14] W. R. Thompson. On the Likelihood that one Unknown Probability Exceeds Another in View of the Evidence of Two Samples. Biometrika, 25:285-294, 1933.
[15] Yizao Wang, Jean-Yves Audibert, and Rémi Munos. Algorithms for infinitely many-armed bandits. In NIPS, 2008.
Neural Network Analysis of Event Related
Potentials and Electroencephalogram Predicts
Vigilance
Rita Venturini
William W. Lytton
Terrence J. Sejnowski
Computational Neurobiology Laboratory
The Salk Institute
La Jolla, CA 92037
Abstract
Automated monitoring of vigilance in attention intensive tasks such as
air traffic control or sonar operation is highly desirable. As the operator monitors the instrument, the instrument would monitor the operator,
insuring against lapses. We have taken a first step toward this goal by using feedforward neural networks trained with backpropagation to interpret
event related potentials (ERPs) and electroencephalogram (EEG) associated with periods of high and low vigilance . The accuracy of our system on
an ERP data set averaged over 28 minutes was 96%, better than the 83%
accuracy obtained using linear discriminant analysis. Practical vigilance
monitoring will require prediction over shorter time periods. We were able
to average the ERP over as little as 2 minutes and still get 90% correct
prediction of a vigilance measure. Additionally, we achieved similarly good
performance using segments of EEG power spectrum as short as 56 sec.
1
INTRODUCTION
Many tasks in society demand sustained attention to minimally varying stimuli
over a long period of time. Detection of failure in vigilance during such tasks would
be of enormous value. Different physiological variables like electroencephalogram
(EEG), electro-oculogram (EOG), heart rate, and pulse correlate to some extent
with the level of attention (1, 2, 3). Profound changes in the appearance and
spectrum of the EEG with sleep and drowsiness are well known. However, there is
no agreement as to which EEG bands can best predict changes in vigilance. Recent
studies (4) seem to indicate that there is a strong correlation between several EEG
power spectra frequencies changes and attentional level in subjects performing a
sustained task. Another measure that has been widely assessed in this context
involves the use of event-related potentials (ERP)(5). These are voltage changes
in the ongoing EEG that are time locked to sensory, motor, or cognitive events.
They are usually too small to be recognized in the background electrical activity.
The ERP's signal is typically extracted from the background noise of the EEG as
a consequence of averaging over many trials. The ERP waveform remains constant
for each repetition of the event, whereas the background EEG activity has random
amplitude. Late cognitive event-related potentials, like the P300, are well known to
be related to attentional allocation (6, 7,8). Unfortunately, these ERPs are evoked
only when the subject is attending to a stimulus. This condition is not present
in a monitoring situation where monitoring is done precisely because the time of
stimulus occurrence is unknown. Instead, shorter latency responses, evoked from
unobtrusive task-irrelevant signals, need to be evaluated.
Data from a sonar simulation task was obtained from S.Makeig at al (9). They
presented auditory targets only slightly louder than background noise to 13 male
United States Navy personnel. Other tones, which the subjects were instructed
to ignore, appeared randomly every 2-4 seconds (task irrelevant probes). Background EEG and ERP were both collected and analyzed. The ERPs evoked by
the task irrelevant probes were classified into two groups depending on whether
they appeared before a correctly identified target (pre-hit ERPs) or a missed target
(pre-lapse ERPs). Pre-lapse ERPs showed a relative increase of P2 and N2 components and a decrease of the N1 deflection. N1, N2 and P2 designate the sign and
time of peak of components in the ERP. Prior linear discriminant analysis (LDA)
performed on the averages of each session, showed 83% correct classification using
ERPs obtained from a single scalp site. Thus, the pre-hit and pre-lapse ERPs differed enough to permit classification by averaging over a large enough sample. In
addition, EEG power spectra over 81 frequency bands were computed. EEG classification was made on the basis of a continuous measure of performance, the error
rate, calculated as the mean of hits and lapses in a 32 sec moving window. Analysis of the EEG power spectrum (9) revealed that significant coherence is observed
between various EEG frequencies and performance.
2 METHOD
2.1 THE DATA SET
Two different groups of input data were used (ERPs and EEG). For the former, a 600
msec sample of task irrelevant probe ERP was reduced to 40 points after low-pass
filtering. We normalized the data on the basis of the maximum and minimum values
of the entire set, maintaining amplitude variability. A single ERP was classified as
being pre-hit or pre-lapse based on the subject's performance on the next target
tone. EEG power spectrum, obtained every 1.6 seconds, was used as an input to
predict a continuous estimate of vigilance (error rate), obtained by averaging the
subject's performance during a 32 second window (normalized between -1 and 1).
The five frequencies used (3, 10, 13, 19 and 39 Hz) had previously been shown to be
most strongly related to error rate changes (9). Each frequency was individually
normalized to range between -1 and 1.
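A minimal sketch of this preprocessing: the trailing-window error rate (20 samples roughly matching 32 s at 1.6 s intervals) and the per-band rescaling to [-1, 1] follow the description above, while every function name and the toy data are assumptions for illustration.

```python
def moving_error_rate(events, window=20):
    """Error rate as the fraction of lapses in a trailing window of outcomes.

    `events` holds 0 (hit) / 1 (lapse) outcomes; with one outcome about
    every 1.6 s, a 20-sample window roughly matches the 32 s used above.
    """
    rates = []
    for t in range(len(events)):
        w = events[max(0, t - window + 1):t + 1]
        rates.append(sum(w) / len(w))
    return rates

def rescale(values, lo=-1.0, hi=1.0):
    """Rescale one frequency band (or the error rate) to the [lo, hi] range."""
    vmin, vmax = min(values), max(values)
    if vmax == vmin:
        return [0.0] * len(values)
    return [lo + (hi - lo) * (v - vmin) / (vmax - vmin) for v in values]

events = [0, 0, 1, 0, 1, 1, 0, 0, 0, 1]   # toy hit/lapse sequence
print(moving_error_rate(events, window=4))
print(rescale([3.0, 5.0, 4.0]))
```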
2.2 THE NETWORK
Feedforward networks were trained with backpropagation. We compared two-layer
network to three-layer networks, varying the number of hidden units in different
simulations between 2 and 8. Each architecture was trained ten times on the same
task, resetting the weights every time with a different random seed. Initial simulations were performed to select network parameter values. We used a learning rate of
0.3 divided by the fan-in and weight initialization in a range between ?0.3. For the
ERP data we used a jackknife procedure. For each simulation, a single pattern was
excluded from the training set and considered to be the test pattern. Each pattern
in turn was removed and used as the test pattern while the others are used for
training. The EEG data set was not as limited as the ERP one and the simulations
were performed using half of the data as training and the remaining part as testing
set. Therefore, for subjects that had two runs each, the training and testing data
came from separate sessions.
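The leave-one-out jackknife described above can be sketched as follows. Only the learning rate of 0.3 divided by the fan-in and the ±0.3 weight initialization come from the text; the single logistic output unit standing in for the two-layer network, the epoch count, and the toy separable patterns are illustrative assumptions.

```python
import math
import random

def train_two_layer(patterns, labels, epochs=1000, seed=0):
    """Single logistic output unit (inputs wired directly to the output)."""
    rng = random.Random(seed)
    n_in = len(patterns[0])
    lr = 0.3 / (n_in + 1)                                   # 0.3 / fan-in
    w = [rng.uniform(-0.3, 0.3) for _ in range(n_in + 1)]   # +1 bias weight
    for _ in range(epochs):
        for x, y in zip(patterns, labels):
            xb = list(x) + [1.0]
            out = 1.0 / (1.0 + math.exp(-sum(wi * xi for wi, xi in zip(w, xb))))
            err = y - out                # cross-entropy gradient for a sigmoid
            w = [wi + lr * err * xi for wi, xi in zip(w, xb)]
    return w

def predict(w, x):
    xb = list(x) + [1.0]
    return 1.0 / (1.0 + math.exp(-sum(wi * xi for wi, xi in zip(w, xb))))

def jackknife_accuracy(patterns, labels):
    """Leave one pattern out, train on the rest, test on the held-out one."""
    correct = 0
    for i in range(len(patterns)):
        w = train_two_layer(patterns[:i] + patterns[i + 1:],
                            labels[:i] + labels[i + 1:])
        correct += int(round(predict(w, patterns[i])) == labels[i])
    return correct / len(patterns)

# Toy separable "ERPs": label 1 when the first component dominates.
patterns = [[0.9, 0.1], [0.8, 0.2], [0.7, 0.1], [0.1, 0.9], [0.2, 0.8], [0.1, 0.7]]
labels = [1, 1, 1, 0, 0, 0]
print(jackknife_accuracy(patterns, labels))
```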
3 RESULTS
3.1 ERPs
The first simulation was done using a two-layer network to assess the adequacy
of the neural network approach relative to the previous LDA results. The data
set consisted of the grand averages of pre-hits and pre-lapses, from a single scalp
site (Cz) of 9 subjects, three of them with a double session , giving a total of 24
patterns. The jackknife procedure was done in two different ways. First each ERP
was considered individually, as had been done in the LDA study (pattern-jackknife) .
Second all the ERPs of a single subject were grouped together and removed together
to form the test set (subject-jackknife). The network was trained for 10,000 epochs
before testing. Figure 1 shows the weights for the 24 networks each trained with
a set of ERPs obtained by removing a single ERP. The "waveform" of the weight
values corresponds to features common to the pre-hit ERPs and to the negative of
features common to the pre-lapse ERPs . Classification of patterns by the network
was considerably more accurate than the 83% correct that had been obtained with
the previous LDA analysis. 96% correct evaluation was seen in seven of the ten
networks started with different random weight selections. The remaining three
networks produced 92% correct responses (Fig. 2). The same two patterns were
missed in all cases. Using hidden units did not improve generalization. The subjectjackknife results were very similar: 96% correct in two of ten networks and 92% in
the remaining eight (Fig. 2). Thus, there was a somewhat increased difficulty
in generalizing across individuals. The ability of the network to generalize over a
shorter period of time was tested by progressively decreasing the number of trials
used for testing using a network trained on the grand average ERPs. Subaverages
[Figure 1 plot: weight values ranging from -0.50 to 0.50 over 0 to 600 ms.]
Figure 1: Weights from 24 two-layer networks trained from different initial weights:
each value corresponds to a sample point in time in the input data.
[Figure 2 plot: two panels showing % correct classification (50 to 100) against the number of hidden units (0, 2, 4, 6, 8), for the pattern jackknife (left) and the subject jackknife (right).]
Figure 2: Generalization performance in Pattern (left) and Subject (right) Jackknifes, using two-layer and three-layer networks with different number of hidden
units. Each bar represents a different random start of the network.
[Figure 3 plot: % correct classification (0 to 100) against the total number of ERPs averaged (1 to 160).]
Figure 3: Generalization for testing subaverages made using varying number of
individual ERPs
were formed using from 1 to 160 individual ERPs (Figure 3). Performance with a
single ERP is at chance. With 16 ERPs , corresponding to about 2 minutes, 90%
accuracy was obtained.
3.2 EEG
We first report results using a two-layer network to compare with the previous LDA
analysis. Five power spectrum frequency bands from a single scalp site (Cz) were
used as input data. The error rate was averaged over 32 seconds at 1.6 second
intervals. In the first set of runs both error rate and power spectra were filtered
using a two minute time window. Good results could be obtained in cases where a
subject made errors more than 40% of the time (Fig. 4). When the subject made
few errors, training was more difficult and generalization was poor. These results
were virtually identical to the LDA ones. The lack of improvement is probably due
to the fact that the LDA performance was already close to 90% on this data set.
Use of three-layer networks did not improve the generalization performance.
The use of a running average includes information in the EEG after the time at
which the network is making a prediction. Causal prediction was attempted using
multiple power spectra taken at 1.6 sec intervals over the past 56 sec, to predict
the upcoming error rate. The results for one subject are shown in Figure 5. The
predicted error rate differs from the target with a root mean square error of 0.3.
[Figure 4 plot: error rate (0.0 to 1.0) over time (0.0 to 25.0 min), comparing the network output with the desired value.]
Figure 4: Generalization results predicting error rate from EEG. The dotted line is
the network output, solid line the desired value.
[Figure 5 plot: error rate (-0.5 to 1.0) over time (0.0 to 25.0 min), comparing the predicted and desired error rate.]
Figure 5: Causal prediction of error rate from EEG. The dotted line is the network
output, solid line the desired value.
[Figure 6 plot: weight values (-0.2 to 0.4) grouped by frequency band: 3.05 Hz, 9.15 Hz, 13.4 Hz, 19.5 Hz, 39.0 Hz.]
Figure 6: Weights from a two-layer causal prediction network. Each bar, within
each frequency band, represents the influence on the output unit of power in that
band at previous times ranging from 1 sec (right bar) to 56 sec (left bar).
Figure 6 shows the weights from a two-layer network trained to predict instantaneous error rate. The network mostly uses information from the 3.05 Hz and 13.4
Hz frequency bands in predicting the error rate changes. The values of the 3.05
Hz weights have a strong peak from the most recent time steps, indicating that
power in this frequency band predicts the state of vigilance on a short time scale.
The alternating positive and negative weights present in the 13.4 Hz set suggest
that rapid changes in power in this band might be predictive of vigilance (i.e. the
derivative of the power signal).
4 DISCUSSION
These results indicate that neural networks could be useful in analyzing electrophysiological measures. The EEG results suggest that the analysis can be applied
to detect fluctuations of the attentional level of the subjects in real time. EEG
analysis could also be a useful tool for understanding changes that occur in the
electric activity of the brain during different states of attention.
In the ERP analysis, the lack of improvement with the introduction of hidden units
might be due to the small size of the data set. If the data set is too small, adding
hidden units and connections may reduce the ability to find a general solution to
the problem. The ERP subject-jackknife results point out that inter-subject generalization is possible. This suggests the possibility of preparing a pre-programmed
network that could be used with multiple subjects rather than training the network
for each individual. The subaverages results suggest that the detection is possible
in a relatively brief time interval. ERPs could be an useful completion to the EEG
analysis in order to obtain an on line detector of attentional changes.
Future research will combine these two measures with EOG and heart
rate. The idea is to let the model choose different network architectures and parameters, depending on the specific subtask.
ACKNOWLEDGEMENTS
We would like to thank Scott Makeig and Mark Inlow, Cognitive Performance and
Psychophysiology Department, Naval Health Research Center, San Diego for providing the data and for invaluable discussions and Y. Le Cun and L.Y. Bottou from
Neuristique who provided the SN2 simulator. RV was supported by Ministry of
Public Instruction, Italy; WWL from a Physician Scientist Award, National Institute of Aging; TJS is an Investigator with the Howard Hughes Medical Institute.
Research was supported by ONR Grant N00014-91-J-1674.
REFERENCES
1 Belyavin, A. and Wright, N.A. (1987). Changes in electrical activity of the brain with vigilance. Electroencephalography and Clinical Neuroscience, 66:137-144.
2 Torsvall, L. and Akerstedt, T. (1988). Extreme sleepiness: quantification of EOG and spectral EEG parameters. Int. J. Neuroscience, 38:435-441.
3 Fruhstorfer, H., Langanke, P., Meinzer, K., Peter, J.H., and Pfaff, U. (1977). Neurophysiological vigilance indicators and operational analysis of a train vigilance monitoring device: a laboratory and field study. In R.R. Mackie (Ed.), Vigilance Theory, Operational Performance, and Physiological Correlates, 147-162, New York: Plenum Press.
4 Makeig, S. and Inlow, M. (1991). Lapses in Alertness: Coherence of fluctuations in performance and EEG spectrum. Cognitive Performance and Psychophysiology Department, NHRC, San Diego. Technical Report.
5 Fruhstorfer, H. and Bergstrom, R.M. (1969). Human vigilance and auditory evoked responses. Electroencephalography and Clinical Neurophysiology, 27:346-355.
6 Polich, J. (1989). Habituation of P300 from auditory stimuli. Psychobiology, 17:19-28.
7 Polich, J. (1987). Task difficulty, probability, and inter-stimulus interval as determinants of P300 from auditory stimuli. Electroencephalography and Clinical Neurophysiology, 68:311-320.
8 Polich, J. (1990). P300, Probability, and Interstimulus Interval. Psychophysiology, 27:396-403.
9 Makeig, S., Elliot, F.S., Inlow, M. and Kobus, D.A. (1991). Predicting Lapses in Vigilance Using Brain Evoked Responses to Irrelevant Auditory Probe. Cognitive Performance and Psychophysiology Department, NHRC, San Diego. Technical Report.
Thompson Sampling for 1-Dimensional Exponential
Family Bandits
Emilie Kaufmann
Institut Mines-Telecom; Telecom ParisTech
[email protected]
Nathaniel Korda
INRIA Lille - Nord Europe, Team SequeL
[email protected]
Rémi Munos
INRIA Lille - Nord Europe, Team SequeL
[email protected]
Abstract
Thompson Sampling has been demonstrated in many complex bandit models,
however the theoretical guarantees available for the parametric multi-armed bandit
are still limited to the Bernoulli case. Here we extend them by proving asymptotic
optimality of the algorithm using the Jeffreys prior for 1-dimensional exponential
family bandits. Our proof builds on previous work, but also makes extensive use
of closed forms for Kullback-Leibler divergence and Fisher information (through
the Jeffreys prior) available in an exponential family. This allows us to give a finite
time exponential concentration inequality for posterior distributions on exponential families that may be of interest in its own right. Moreover our analysis covers
some distributions for which no optimistic algorithm has yet been proposed, including heavy-tailed exponential families.
1 Introduction
K-armed bandit problems provide an elementary model for exploration-exploitation tradeoffs found
at the heart of many online learning problems. In such problems, an agent is presented with K
distributions (also called arms, or actions) {p_a}_{a=1}^K, from which she draws samples interpreted as
rewards she wants to maximize. This objective induces a trade-off between choosing to sample a
distribution that has already yielded high rewards, and choosing to sample a relatively unexplored
distribution at the risk of losing rewards in the short term. Here we make the assumption that
the distributions, p_a, belong to a parametric family of distributions P = {p(· | θ), θ ∈ Θ} where
Θ ⊂ R. The bandit model is described by a parameter θ^0 = (θ_1, . . . , θ_K) such that p_a = p(· | θ_a).
We introduce the mean function μ(θ) = E_{X∼p(·|θ)}[X], and the optimal arm θ* = θ_{a*} where a* =
argmax_a μ(θ_a).
An algorithm, A, for a K-armed bandit problem is a (possibly randomised) method for choosing
which arm a_t to sample from at time t, given a history of previous arm choices and obtained rewards,
H_{t-1} := ((a_s, x_s))_{s=1}^{t-1}: each reward x_s is drawn from the distribution p_{a_s}. The agent's goal is to
design an algorithm with low regret:

R(A, t) = R(A, t)(θ) := t μ(θ*) - E_A [ Σ_{s=1}^t x_s ].
This quantity measures the expected performance of algorithm A compared to the expected performance of an optimal algorithm given knowledge of the reward distributions, i.e. sampling always
from the distribution with the highest expectation.
Since the early 2000s the "optimism in the face of uncertainty" heuristic has been a popular approach
on the regret (e.g. [4, 7]). However in the last two years there has been renewed interest in the
Thompson Sampling heuristic (TS). While this heuristic was first put forward to solve bandit problems eighty years ago in [15], it was not until recently that theoretical analyses of its performance
were achieved [1, 2, 11, 13]. In this paper we take a major step towards generalising these analyses
to the same level of generality already achieved for ?optimistic? algorithms.
Thompson Sampling Unlike optimistic algorithms which are often based on confidence intervals,
the Thompson Sampling algorithm, denoted by A_{π_0}, uses Bayesian tools and puts a prior distribution
π_{a,0} = π_0 on each parameter θ_a. A posterior distribution, π_{a,t}, is then maintained according to the
rewards observed in H_{t-1}. At each time a sample θ_{a,t} is drawn from each posterior π_{a,t} and then
the algorithm chooses to sample a_t = argmax_{a∈{1,...,K}} μ(θ_{a,t}). Note that actions are sampled
according to their posterior probabilities of being optimal.
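For the Bernoulli case discussed in the next paragraph this rule is simple to implement, since the Beta prior is conjugate; the sketch below is a generic illustration added here (not this paper's exponential-family construction), with Beta(1, 1) as the classic uniform prior and Beta(1/2, 1/2) corresponding to the Jeffreys prior.

```python
import random

def thompson_bernoulli(true_means, horizon, prior=(1.0, 1.0), seed=0):
    """Thompson Sampling for Bernoulli arms with a conjugate Beta prior.

    prior=(1, 1) is the uniform prior; the Jeffreys prior for the Bernoulli
    model would be (0.5, 0.5). Returns the cumulative reward collected.
    """
    rng = random.Random(seed)
    k = len(true_means)
    alpha = [prior[0]] * k          # Beta posterior parameters per arm
    beta = [prior[1]] * k
    total = 0
    for _ in range(horizon):
        # Draw one sample per posterior; play the arm with the largest draw.
        samples = [rng.betavariate(alpha[a], beta[a]) for a in range(k)]
        a = max(range(k), key=lambda i: samples[i])
        x = 1 if rng.random() < true_means[a] else 0
        alpha[a] += x               # posterior update on the played arm only
        beta[a] += 1 - x
        total += x
    return total

means = [0.3, 0.5, 0.8]
reward = thompson_bernoulli(means, 2000)
print("average reward:", reward / 2000, "best mean:", max(means))
```

On well-separated arms the empirical average reward approaches the best mean, reflecting the logarithmic regret discussed throughout the paper.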
Our contributions TS has proved to have impressive empirical performances, very close to those
of state of the art algorithms such as DMED and KL-UCB [11, 9, 7]. Furthermore recent works
[11, 2] have shown that in the special case where each p_a is a Bernoulli distribution B(θ_a), TS using
a uniform prior over the arms is asymptotically optimal in the sense that it achieves the asymptotic
lower bound on the regret provided by Lai and Robbins in [12] (that holds for univariate parametric
bandits). As explained in [1, 2], Thompson Sampling with uniform prior for Bernoulli rewards
can be slightly adapted to deal with bounded rewards. However, there is no notion of asymptotic
optimality for this non-parametric family of rewards. In this paper, we extend the optimality property
that holds for Bernoulli distributions to more general families of parametric rewards, namely 1dimensional exponential families if the algorithm uses the Jeffreys prior:
Theorem 1. Suppose that the reward distributions belong to a 1-dimensional canonical exponential
family and let π_J denote the associated Jeffreys prior. Then,

lim_{T→∞} R(A_{π_J}, T) / ln T = Σ_{a=1}^K [μ(θ_{a*}) - μ(θ_a)] / K(θ_a, θ_{a*}),    (1)

where K(θ, θ') := KL(p_θ, p_{θ'}) is the Kullback-Leibler divergence between p_θ and p_{θ'}.
This theorem follows directly from Theorem 2. In the proof of this result we provide in Theorem
4 a finite-time, exponential concentration bound for posterior distributions of exponential family
random variables, something that to the best of our knowledge is new to the literature and of interest
in its own right. Our proof also exploits the connection between the Jeffreys prior, Fisher information
and the Kullback-Leibler divergence in exponential families.
Related Work Another line of recent work has focused on
p distribution-independent bounds for
Thompson Sampling. [2] establishes that R(A?U , T ) = O( KT ln(T )) for Thompson Sampling
for bounded rewards (with the classic uniform prior ?U on the underlying Bernoulli parameter). [14]
go beyond the Bernoulli model, and give an upper bound on the Bayes risk (i.e. the regret averaged
over the prior) independent of the prior distribution. For the parametric multi-armed bandit with K
arms described above, their result states that the regret of Thompson Sampling using a prior ?0 is
not too big when averaged over this same prior:
p
E????K [R(A?0 , T )(?)] ? 4 + K + 4 KT log(T ).
0
?
Building on the same ideas, [6] have improved this upper bound to 14 KT . In our paper, we rather
see the prior used by Thompson Sampling as a tool, and we want therefore to derive regret bounds
for any given problem parametrized by ? that depend on this parameter.
[14] also use Thompson Sampling in more general models, like the linear bandit model. Their result
is a bound on the Bayes risk that does not depend on the prior, whereas [3] gives a first bound on
the regret in this model. Linear bandits consider a possibly infinite number of arms whose mean
rewards are linearly related by a single, unknown coefficient vector. Once again, the analysis in
[3] encounters the problem of describing the concentration of posterior distributions. However by
using a conjugate normal prior, they can employ explicit concentration bounds available for Normal
distributions to complete their argument.
2
Paper Structure In Section 2 we describe important features of the one-dimensional canonical
exponential families we consider, including closed-form expression for KL-divergences and the
Jeffreys? prior. Section 3 gives statements of the main results, and provides the proof of the regret
bound. Section 4 proves the posterior concentration result used in the proof of the regret bound.
2
Exponential Families and the Jeffreys Prior
A distribution is said to belong to a one-dimensional canonical exponential family if it has a density
with respect to some reference measure ? of the form:
p(x | ?) = A(x) exp(T (x)? ? F (?)),
(2)
where ? ? ? ? RR. T and A are some fixed
functions that characterize the exponential family
and F (?) = log A(x) exp [T (x)?] d?(x) . ? is called the parameter space, T (x) the sufficient
statistic, and F (?) the normalisation function. We make the classic assumption that F is twice
differentiable with a continuous second derivative. It is well known [17] that:
EX|? (T (X)) = F 0 (?) and
VarX|? [T (X)] = F 00 (?)
showing in particular that F is strictly convex. The mean function ? is differentiable and stricly
increasing, since we can show that
?0 (?) = CovX|? (X, T (X)) > 0.
In particular, this shows that ? is one-to-one in ?.
KL-divergence in Exponential Families In an exponential family, a direct computation shows
that the Kullback-Leibler divergence can be expressed as a Bregman divergence of the normalisation
function, F:
K(?, ?0 ) = DFB (?0 , ?) := F (?0 ) ? [F (?) + F 0 (?)(?0 ? ?)] .
(3)
Jeffreys prior in Exponential Families In the Bayesian literature, a special ?non-informative?
prior, introduced by Jeffreys in [10], is sometimes considered. This prior, called the Jeffreys prior,
is invariant under re-parametrisation of the parameter space, and it can be shown to be proportional
to the square-root of the Fisher information I(?). In the special case of the canonical exponential
family, the Fisher information takes the form I(?) = F 00 (?), hence the Jeffreys prior for the model
(2) is
p
?J (?) ? |F 00 (?)|.
Under the Jeffreys prior, the posterior on ? after n observations is given by
!
n
X
p
p(?|y1 , . . . yn ) ? F 00 (?) exp ?
T (yi ) ? nF (?i )
(4)
i=1
R p
When ? F 00 (?)d? < +?, the prior is called proper. However, stasticians often use priors which
R p
are not proper: the prior is called improper if ? F 00 (?)d? = +? and any observation makes the
corresponding posterior (4) integrable.
Some Intuition for choosing the Jeffreys Prior In the proof of our concentration result for
posterior distributions (Theorem 4) it will be crucial to lower bound the prior probability of
an -sized KL-divergence ball around each of the parameters ?a . Since the Fisher information
0 2
00
F 00 (?) = lim?0 ?? K(?, ?0 )/|?
? ? ? | , choosing a prior proportional to F (?) ensures that the prior
measure of such balls are ?( ).
Examples and Pseudocode Algorithm 1 presents pseudocode for Thompson Sampling with the
Jeffreys prior for distributions parametrized by their natural parameter ?. But as the Jeffreys prior
is invariant under reparametrization, if a distribution
is parametrised by some parameter ? 6? ?,
p
the algorithm can use the Jeffreys prior ? I(?) on ?, drawing samples from the posterior on ?.
Note that the posterior sampling step (in bold) is always tractable using, for example, a HastingsMetropolis algorithm.
3
Algorithm 1 Thompson Sampling for Exponential Families with the Jeffreys prior
Require: F normalization function, T sufficient statistic, ? mean function
for t = 1 . . . K do
Sample arm t and get rewards xt
Nt = 1, St = T (xt ).
end for
for t = K + 1 . . . n do
for a = 1 . . . K do
p
Sample ?a,t from ?a,t ? F 00 (?) exp (?Sa ? Na F (?))
end for
Sample arm At = argmaxa ?(?a,t ) and get reward xt
SAt = SAt + T (xt ) NAt = NAt + 1
end for
Name
B(?)
N (?, ? 2 )
?(k, ?)
P(?)
Distribution
x
1?x
? (1 ? ?)
?0,1
log
(x??)2
? 2?2
? 1
e
2?? 2
k?1 ??x
?k
?(k) x
e
x ??
? e
x!
?x?
?
1??
?
?2
??
1[0,+?[ (x)
?N (x)
m
Pareto(xm , ?)
1
(x)
x?+1 [xm ,+?[
k
Weibull(k, ?) k?(x?)k?1 e?(?x) 1[0,+?[
?
log(?)
?? ? 1
??k
Prior on ?
Posterior on ?
1 1
Beta 2 , 2
Beta 12 + s, 21 + n ? s
2
?1
N ns , ?n
1
?
? ?1?
? ?1
? ?1k
?
?(kn, s)
?
1
2
+ s, n
? (n + 1, s ? n log xm )
??(n?1)k exp(??k s)
Figure 1: The posterior distribution after observations y1 , . . . , yn depends on n and s =
Pn
i=1
T (yi )
Some examples of common exponential family models are given in Figure 1, together with the
posterior distributions on the parameter ? that is used by TS with the Jeffreys prior. In addition to
examples already studied in [7] for which T (x) = x, we also give two examples of more general
canonical exponential families, namely the Pareto distribution with known min value and unknown
tail index ?, Pareto(xm , ?), for which T (x) = log(x), and the Weibul distribution with known shape
and unknown rate parameter, Weibull(k, ?), for which T (x) = xk . These last two distributions are
not covered even by the work in [8], and belong to the family of heavy-tailed distributions.
For the Bernoulli model, we note futher that the use of the Jeffreys prior is not covered by the
previous analyses. These analyses make an extensive use of the uniform prior, through the fact that
the coefficient of the Beta posteriors they consider have to be integers.
3
Results and Proof of Regret Bound
An exponential family K-armed bandit is a K-armed bandit for which the reward distributions pa
are known to be elements of an exponential family of distributions P(?). We denote by p?a the
distribution of arm a and its mean by ?a = ?(?a ).
Theorem 2 (Regret Bound). Assume that ?1 > ?a for all a 6= 1, and that ?a,0 is taken to be the
Jeffreys prior over ?. Then for every > 0 there exists a constant C(, P) depending on and on
the problem P such that the regret of Thompson Sampling using the Jeffreys prior satisfies
!
K
1 + X (?1 ? ?a )
ln(T ) + C(, P).
R(A?J , T ) ?
1 ? a=2 K(?a , ?1 )
Proof: We give here the main argument of the proof of the regret bound, which proceed by bounding the expected number of draws of any suboptimal arm. Along the way we shall state concentration
results whose proofs are postponed to later sections.
4
Step 0: Notation We denote by ya,s the s-th observation of arm a and by Na,t the number of times
arm a is chosen up to time t. (ya,s )s?1 is i.i.d. with distribution p?a . Let Yau := (ya,s )1?s?u be
N
the vector of first u observations from arm a. Ya,t := Ya a,t is therefore the vector of observations
from arm a available at the beginning of round t. Recall that ?a,t , respectively ?a,0 , is the posterior,
respectively the prior, on ?a at round t of the algorithm.
We define L(?) to be such that PY ?p(|?) (p(Y |?) ? L(?)) ? 12 . Observations from arm a such that
p(ya,s |?) ? L(?a ) can therefore be seen as likely observations. For any ?a > 0, we introduce the
?a,t = E
?a,t (?a ):
event E
PNa,t
!
s=1,s6=s0 T (ya,s )
0
0
?
Ea,t = ?1 ? s ? Na,t : p(ya,s0 |?a ) ? L(?a ),
? F (?a ) ? ?a . (5)
Na,t ? 1
For all a 6= 1 and ?a such that ?a < ?a + ?a < ?1 , we introduce
?
?
Ea,t
= Ea,t
(?a ) := ? (?a,t ) ? ?a + ?a .
?a,t , the empirical sufficient statistic of arm a at round t is well concentrated around its mean
On E
?
and a ?likely? realization of arm a has been observed. On Ea,t
, the mean of the distribution with
parameter ?a,t does not exceed by much the true mean, ?a . ?a and ?a will be carefully chosen at
the end of the proof.
Step 1: Decomposition The idea of the proof is to decompose the probability of playing a subopPT
timal arm using the events given in Step 0, and that E[Na,T ] = t=1 P (at = a):
T
T
T
X
X
X
?a,t , (E ? )c +
?c .
?a,t , E ? +
P
a
=
a,
E
P
a
=
a,
E
P at = a, E
E [Na,T ] =
t
t
a,t
a,t
a,t
t=1
t=1
|
{z
}
(A)
t=1
|
{z
(B)
}
|
{z
(C)
}
where E c denotes the complement of event E. Term (C) is controlled by the concentration of the
empirical sufficient statistic, and (B) is controlled by the tail probabilities of the posterior distribution. We give the needed concentration results in Step 2. When conditioned on the event that the
optimal arm is played at least polynomially often, term (A) can be decomposed further, and then
controled by the results from Step 2. Step 3 proves that the optimal arm is played this many times.
Step 2: Concentration Results We state here the two concentration results that are necessary to
evaluate the probability of the above events.
Lemma 3. Let (ys ) be an i.i.d sequence of distribution p(? | ?) and ? > 0. Then
!
u
1 X
?
0
P
[T (ys ) ? F (?)] ? ? ? 2e?uK(?,?) ,
u
s=1
? ?) = min(K(? + g(?), ?), K(? ? h(?), ?)), with g(?) > 0 defined by F 0 (? + g(?)) =
where K(?,
0
F (?) + ? and h(?) > 0 defined by F 0 (? ? h(?)) = F 0 (?) ? ?.
The two following inequalities that will be useful in the sequel can easily be deduced from Lemma
3. Their proof is gathered in Appendix A with that of Lemma 3. For any arm a, for any b ?]0, 1[,
T
? t
?
X
X
X
1
?
c
?
P(at = a, (Ea,t (?a )) ) ?
+
2te?(t?1)K(?a ,?a )
(6)
2
t=1
t=1
t=1
T
? tb
?
X
X
X
b
1
?
?a,t (?a ))c ? Na,t > tb ) ?
P((E
t
+
2t2 e?(t ?1)K(?a ,?a ) ,
(7)
2
t=1
t=1
t=1
The second result tells us that concentration of the empirical sufficient statistic around its mean
implies concentration of the posterior distribution around the true parameter:
Theorem 4 (Posterior Concentration). Let ?a,0 be the Jeffreys prior. There exists constants C1,a =
C1 (F, ?a ) > 0, C2,a = C2 (F, ?a , ?a ) > 0, and N (?a , F ) s.t., ?Na,t ? N (?a , F ),
?1
1E?a,t P ?(?a,t ) > ?(?a ) + ?a |Ya,t ? C1,a e?(Na,t ?1)(1??a C2,a )K(?a ,? (?a +?a ))+ln(Na,t )
whenever ?a < 1 and ?a are such that 1 ? ?a C2,a (?a ) > 0.
5
Step 3: Lower Bound the Number of Optimal Arm Plays with High Probability The main
difficulty adressed in previous regret analyses for Thompson Sampling is the control of the number
of draws of the optimal arm. We provide this control in the form of Proposition 5 which is adapted
from Proposition 1 in [11]. The proof of this result, an outline of which is given in Appendix D,
explores in depth the randomised nature of Thompson Sampling. In particular, we show that the
proof in [11] can be significantly simplified, but at the expense of no longer being able to describe
the constant Cb explicitly:
P?
Proposition 5. ?b ? (0, 1), ?Cb (?, ?1 , ?2 , K) < ? such that t=1 P N1,t ? tb ? Cb .
Step 4: Bounding the Terms of the Decomposition Now we bound the terms of the decomposition as discussed in Step 1: An upper bound on term (C) is given in (6), whereas a bound on term
(B) follows from Lemma 6 below. Although the proof of this lemma is standard, and bears a strong
similarity to Lemma 3 of [3], we provide it in Appendix C for the sake of completeness.
Lemma 6. For all actions a and for all > 0, ? N = N (?a , ?a , ?a ) > 0 such that
(B) ? [(1 ? )(1 ? ?a C2,a )K(?a , ??1 (?a + ?a ))]?1 ln(T ) + max{N , N (?a , F )} + 1.
where N = N (?a , ?a , ?a ) is the smallest integer such that for all n ? N
(n ? 1)?1 ln(C1,a n) < (1 ? ?a C2,a )K(?a , ??1 (?a + ?a )),
and N (?a , F ) is the constant from Theorem 4.
When we have seen enough observations on the optimal arm, term (A) also becomes a result about
the concentration of the posterior and the empirical sufficient statistic, but this time for the optimal
arm:
T
T
X
X
?
?a,t , Ea,t
(A) ?
P at = a, E
, N1,t > tb + Cb ?
P ?(?1,t ) ? ?1 ? ?0a , N1,t > tb + Cb
t=1
?
T
X
t=1
T
X
?1,t (?1 ), N1,t > tb +
? c (?1 ) ? N1,t > tb +Cb (8)
P ?(?1,t ) ? ?1 ? ?0a , E
P E
1,t
t=1
|
t=1
{z
}
B0
|
{z
C0
}
where ?0a = ?1 ? ?a ? ?a and ?1 > 0 remains to be chosen. The first inequality comes from
Proposition 5, and the second inequality comes from the following fact: if arm 1 is not chosen and
arm a is such that ?(?a,t ) ? ?a + ?a , then ?(?1,t ) ? ?a + ?a . A bound on term (C?) is given in
(7) for a = 1 and ?1 . In Theorem 4, we bound the conditional probability that ?(?a,t ) exceed the
true mean. Following the same lines, we can also show that
?1
P (?(?1,t ) ? ?1 ? ?0a |Y1,t ) 1E?1,t (?1 ) ? C1,1 e?(N1,t ?1)(1??1 C2,1 )K(?1 ,?
(?1 ??0a ))+ln(N1,t )
.
?0a
For any
> 0, one can choose ?1 such that 1 ? ?1 C1,1 > 0. Then, with N = N (P) such that the
?1
0
function u 7? e?(u?1)(1??1 C2,1 )K(?1 ,? (?1 ??a ))+ln u is decreasing for u ? N , (B 0 ) is bounded by
?
X
b
?1
0
b
N 1/b +
C1,1 e?(t ?1)(1??1 C2,1 )K(?1 ,? (?1 ??a ))+ln(t ) < ?.
t=N 1/b +1
Step 4: Choosing the Values ?a and a So far, we have shown that for any > 0 and for any
choice of ?a > 0 and 0 < ?a < ?1 ? ?a such that 1 ? ?a C2,a > 0, there exists a constant
C(?a , ?a , , P) such that
ln(T )
E[Na,T ] ?
+ C(?a , ?a , , P)
(1 ? ?a C2,a )K(?a , ??1 (?a + ?a ))(1 ? )
The constant is of course increasing (dramatically) when ?a goes to zero, ?a to ?1 ? ?a , or to
zero. But one can choose ?a close enough to ?1 ? ?a and ?a small enough, such that
K(?a , ?1 )
,
(1 ? C2,a (?a )?a )K(?a , ??1 (?a + ?a )) ?
(1 + )
and this choice leads to
1 + ln(T )
E[Na,T ] ?
+ C(?a , ?a , , P).
1 ? K(?a , ?1 )
PK
Using that R(A, T ) = a=2 (?1 ? ?a )EA [Na,T ] for any algorithm A concludes the proof.
6
4
Posterior Concentration: Proof of Theorem 4
For ease of notation, we drop the subscript a and let (ys ) be an i.i.d. sequence of distribution p? ,
with mean ? = ?(?). Furthermore, by conditioning on the value of Ns , it is enough to bound
1E?u P (?(?u ) ? ? + ?|Y u ) where Y u = (ys )1?s?u and
Pu
!
0 T (ys )
s=1,s6
=
s
?u = ?1 ? s0 ? u : p(ys0 |?) ? L(?),
E
? F 0 (?) ? ? .
u?1
Step 1: Extracting a Kullback-Leibler Rate The argument rests on the following Lemma, whose
proof can be found in Appendix B
?u be the event defined by (5), and introduce ??,? := {?0 ? ? : ?(?0 ) ? ?(?)+?}.
Lemma 7. Let E
The following inequality holds:
R
0
0
e?(u?1)(K[?,? ]??|??? |) ?(?0 |ys0 )d?0
? 0 ???,?
u
1E?u P (?(?u ) ? ? + ?|Y ) ? R
,
(9)
e?(u?1)(K[?,?0 ]+?|???0 |) ?(?0 |ys0 )d?0
? 0 ??
with s0 = inf{s ? N : p(ys |?) ? L(?)}.
Step 2: Upper bounding the numerator of (9) We first note that on ??,? the leading term in the
exponential is K(?, ?0 ). Indeed, from (3) we know that
K(?, ?0 )/|? ? ?0 | = |F 0 (?) ? (F (?) ? F (?0 ))/(? ? ?0 )|
which, by strict convexity of F , is strictly increasing in |? ? ?0 | for any fixed ?. Now since ? is
one-to-one and continuous, ?c?,? is an interval whose interior contains ?, and hence, on ??,? ,
K(?, ?0 )
F (??1 (? + ?)) ? F (?)
?
? F 0 (?) := (C2 (F, ?, ?))?1 > 0.
|? ? ?0 |
??1 (? + ?) ? ?
So for ? such that 1 ? ?C2 > 0 we can bound the numerator of (9) by:
Z
Z
0
?(u?1)(K(?,? 0 )??|??? 0 |)
0
0
0
e
?(? |ys )d? ?
e?(u?1)K(?,? )(1??C2 ) ?(?0 |ys0 )d?0
? 0 ???,?
? 0 ???,?
?1
? e?(u?1)(1??C2 )K(?,?
(?+?))
Z
?1
?(?0 |ys0 )d?0 ? e?(u?1)(1??C2 )K(?,?
(?+?))
(10)
??,?
where we have used that ?(?|ys0 ) is a probability distribution, and that, since ? is increasing,
K(?, ??1 (? + ?)) = inf ?0 ???,? K(?, ?0 ).
Step 3: Lower bounding the denominator of (9) To lower bound the denominator, we reduce
the integral on the whole space ? to a KL-ball, and use the structure of the prior to lower bound
the measure of that KL-ball under the posterior obtained with the well-chosen observation ys0 . We
introduce the following notation for KL balls: for any x ? ?, > 0, we define
B (x) := {?0 ? ? : K(x, ?0 ) ? } .
0
K(?,? )
00
We have (???
0 )2 ? F (?) 6= 0 (since F is strictly convex). Therefore, there exists N1 (?, F ) such
that for u ? N1 (?, F ), on B 12 (?),
u
p
|? ? ?0 | ? 2K(?, ?0 )/F 00 (?).
Using this inequality we can then bound the denominator of (9) whenever u ? N1 (?, F ) and ? < 1:
Z
Z
0
0
0
0
e?(u?1)(K(?,? )+?|??? |) ?(?0 |ys0 )d?0 ?
e?(u?1)(K(?,? )+?|??? |) ?(?0 |ys0 )d?0
? 0 ??
? 0 ?B1/u2 (?)
Z
?
e
r
2K(?,? 0 )
?(u?1) K(?,? 0 )+?
00
F (?)
0
0
?(? |ys0 )d? ? ? B1/u2 (?)|ys0 e
q
? 1+ F 002(?)
.
? 0 ?B1/u2 (?)
(11)
7
Finally we turn our attention to the quantity
p
R
R
p(ys0 |?0 )?0 (?0 )d?0
p(ys0 |?0 ) F 00 (?0 )d?0
B1/u2 (?)
B1/u2 (?)
R
p
? B1/u2 (?)|ys0 =
=
.
(12)
R
p(ys0 |?0 )?0 (?0 )d?0
p(ys0 |?0 ) F 00 (?0 )d?0
?
?
Now since the KL divergence is convex in the second argument, we can write B1/u2 (?) = (a, b).
So, from the convexity of F we deduce that
1
F (b) ? F (?)
0
0
=
K(?,
b)
=
F
(b)
?
[F
(?)
+
(b
?
?)F
(?)]
=
(b
?
?)
?
F
(?)
u2
(b ? ?)
0
0
0
0
? (b ? ?) [F (b) ? F (?)] ? (b ? a) [F (b) ? F (?)] ? (b ? a) [F 0 (b) ? F 0 (a)] .
As p(y
R | ?) ?p0 as y ? ??, the set C(?) = {y : p(y | ?) ? L(?)} is compact. The map
y 7? ? p(y|?0 ) F 00 (?0 )d?0 < ? is continuous on the compact C(?). Thus, it follows that
Z
p
0
0
0
0
00
0
L (?) = L (?, F ) :=
sup
p(y|? ) F (? )d? < ?
y:p(y|?)>L(?)
?
is an upper bound on the denominator of (12).
Now by the continuity of F 00 , and the continuity of (y, ?) 7? p(y|?) in both coordinates, there exists
an N2 (?, F ) such that for all u ? N2 (?, F )
p
L(?) p 00
1 F 0 (b) ? F 0 (a)
and p(y|?0 ) F 00 (?0 ) ?
F (?), ??0 ? B1/u2 (?), y ? C(?) .
F 00 (?) ?
2
b?a
2
Finally, for u ? N2 (?, F ), we have a lower bound on the numerator of (12):
Z
Z b
p
L(?) p 00
L(?) p 0
L(?)
p(ys0 |?0 ) F 00 (?0 )d?0 ?
F (?)
d?0 =
(F (b) ? F 0 (a)) (b ? a) ?
2
2
2u
B1/u2 (?)
a
Puting everything together, we get that there exist constants C2 = C2 (F, ?, ?) and N (?, F ) =
max{N1 , N2 } such that for every ? < 1 satisfying 1 ? ?C2 > 0, and for every u ? N , one has
2e
q
1+ F 002(?)
L0 (?)u
?1
e?(u?1)(1??C2 )K(?,?
(?+?))
.
L(?)
Remark 8. Note that when the prior is proper we do not need to introduce the observation ys0 ,
which significantly simplifies the argument. Indeed in this case, in (10) we can use ?0 in place of
?(?|ys0 ) which is already a probability distribution. In particular, the quantity (12) is replaced by
?0 B1/u2 (?) , and so the constants L and L0 are not needed.
1E?u P(?(?u ) ? ?(?) + ?|Yu ) ?
5
Conclusion
We have shown that choosing to use the Jeffreys prior in Thompson Sampling leads to an asymptotically optimal algorithm for bandit models whose rewards belong to a 1-dimensional canonical
exponential family. The cornerstone of our proof is a finite time concentration bound for posterior
distributions in exponential families, which, to the best of our knowledge, is new to the literature.
With this result we built on previous analyses and avoided Bernoulli-specific arguments. Thompson
Sampling with Jeffreys prior is now a provably competitive alternative to KL-UCB for exponential
family bandits. Moreover our proof holds for slightly more general problems than those for which
KL-UCB is provably optimal, including some heavy-tailed exponential family bandits.
Our arguments are potentially generalisable. Notably generalising to n-dimensional exponential
family bandits requires only generalising Lemma 3 and Step 3 in the proof of Theorem 4. Our result
is asymptotic, but the only stage where the constants are not explicitly derivable from knowledge of
F , T , and ?0 is in Lemma 9. Future work will investigate these open problems. Another possible
future direction lies the optimal choice of prior distribution. Our theoretical guarantees only hold for
Jeffreys? prior, but a careful examination of our proof shows that the important property is to have,
for every ?a ,
!
Z
?0 (?0 )d?0
? ln
= o (n) ,
(? 0 :K(?a ,? 0 )?n?2 )
which could hold for prior distributions other than the Jeffreys prior.
8
References
[1] S. Agrawal and N. Goyal. Analysis of thompson sampling for the multi-armed bandit problem.
In Conference On Learning Theory (COLT), 2012.
[2] S. Agrawal and N. Goyal. Further optimal regret bounds for thompson sampling. In Sixteenth
International Conference on Artificial Intelligence and Statistics (AISTATS), 2012.
[3] S. Agrawal and N. Goyal. Thompson sampling for contextual bandits with linear payoffs. In
30th International Conference on Machine Learning (ICML), 2013.
[4] P. Auer, N. Cesa-Bianchi, and P. Fischer. Finite-time analysis of the multiarmed bandit problem. Machine Learning, 47(2):235?256, 2002.
[5] S. Boucheron, G. Lugosi, and P. Massart. Concentration Inequalities. Oxford Univeristy Press,
2013.
[6] S. Bubeck and Che-Yu Liu. A note on the bayesian regret of thompson sampling with an
arbitrairy prior. arXiv:1304.5758, 2013.
[7] O. Capp?e, A. Garivier, O-A. Maillard, R. Munos, and G. Stoltz. Kullback-Leibler upper confidence bounds for optimal sequential allocation. Annals of Statistics, 41(3):516?541, 2013.
[8] A. Garivier and O. Capp?e. The kl-ucb algorithm for bounded stochastic bandits and beyond.
In Conference On Learning Theory (COLT), 2011.
[9] J. Honda and A. Takemura. An asymptotically optimal bandit algorithm for bounded support
models. In Conference On Learning Theory (COLT), 2010.
[10] H. Jeffreys. An invariant form for prior probability in estimation problem. Proceedings of the
Royal Society of London, 186:453?461, 1946.
[11] E. Kaufmann, N. Korda, and R. Munos. Thompson sampling: An asymptotically optimal
finite-time analysis. In Algorithmic Learning Theory, Lecture Notes in Computer Science,
pages 199?213. Springer, 2012.
[12] T.L. Lai and H. Robbins. Asymptotically efficient adaptive allocation rules. Advances in
Applied Mathematics, 6(1):4?22, 1985.
[13] B.C. May, N. Korda, A. Lee, and D. Leslie. Optimistic bayesian sampling in contextual bandit
problems. Journal of Machine Learning Research, 13:2069?2106, 2012.
[14] D. Russo and B. Van Roy. Learning to optimize via posterior sampling. arXiv:1301.2609,
2013.
[15] W.R. Thompson. On the likelihood that one unknown probability exceeds another in view of
the evidence of two samples. Biometrika, 25:285?294, 1933.
[16] A.W Van der Vaart. Asymptotic Statistics. Cambridge University Press, 1998.
[17] L. Wasserman. All of Statistics: A Concise Course in Statistical Inference. Springer Publishing
Company, Incorporated, 2010.
9
| 5110 |@word exploitation:1 c0:1 open:1 decomposition:3 p0:3 concise:1 liu:1 contains:1 renewed:1 pna:1 contextual:2 nt:1 varx:1 yet:1 informative:1 shape:1 drop:1 intelligence:1 xk:1 beginning:1 short:1 provides:1 completeness:1 honda:1 along:1 c2:21 direct:1 beta:3 introduce:6 notably:1 indeed:2 expected:3 multi:3 decomposed:1 decreasing:1 company:1 armed:7 increasing:4 becomes:1 provided:1 moreover:2 bounded:5 underlying:1 notation:3 interpreted:1 weibull:2 maxa:1 guarantee:2 unexplored:1 nf:1 every:4 biometrika:1 uk:1 control:2 yn:2 puting:1 oxford:1 subscript:1 lugosi:1 inria:4 twice:1 studied:1 ease:1 limited:1 averaged:2 russo:1 regret:16 goyal:3 empirical:5 significantly:2 confidence:2 argmaxa:2 dmed:1 get:3 interior:1 close:2 put:2 risk:3 py:1 optimize:1 map:1 demonstrated:1 go:2 attention:1 thompson:24 focused:1 convex:3 simplicity:1 wasserman:1 rule:1 s6:2 proving:1 classic:2 notion:1 coordinate:1 annals:1 suppose:1 play:1 us:2 pa:6 element:1 roy:1 satisfying:1 adressed:1 observed:2 ensures:1 improper:1 trade:1 highest:1 intuition:1 convexity:2 reward:18 mine:1 depend:2 capp:2 easily:1 describe:2 london:1 artificial:1 tell:1 choosing:7 whose:5 heuristic:3 solve:1 drawing:1 statistic:10 fischer:1 vaart:1 online:1 sequence:2 rr:1 differentiable:2 agrawal:3 fr:3 realization:1 sixteenth:1 derive:1 dfb:1 depending:1 b0:1 sa:1 strong:1 implies:1 come:2 direction:1 stochastic:1 exploration:1 everything:1 require:1 decompose:1 proposition:4 elementary:1 strictly:3 hold:6 around:4 considered:1 normal:2 exp:5 cb:6 algorithmic:1 major:1 achieves:1 early:1 smallest:1 estimation:1 robbins:2 establishes:1 tool:2 always:2 rather:1 pn:1 l0:2 she:2 bernoulli:8 likelihood:1 sense:1 inference:1 bandit:24 provably:2 arg:1 colt:3 denoted:1 art:1 special:3 univeristy:1 once:1 sampling:27 lille:2 yu:2 icml:1 future:2 t2:1 eighty:1 employ:1 divergence:9 replaced:1 n1:11 interest:3 normalisation:2 investigate:1 parametrised:1 kt:3 bregman:1 integral:1 necessary:1 institut:1 
stoltz:1 re:1 theoretical:3 korda:4 cover:1 leslie:1 uniform:4 too:1 characterize:1 kn:1 chooses:1 st:1 density:1 deduced:1 explores:1 international:2 sequel:3 lee:1 off:1 together:2 parametrisation:1 na:13 again:1 cesa:1 choose:2 possibly:2 yau:1 derivative:1 leading:1 bold:1 coefficient:2 explicitly:2 depends:1 later:1 root:1 view:1 closed:2 optimistic:4 sup:1 competitive:1 bayes:2 reparametrization:1 contribution:1 square:1 nathaniel:2 kaufmann:3 gathered:1 bayesian:4 ago:1 history:1 emilie:1 whenever:2 associated:1 proof:23 sampled:1 proved:1 popular:1 recall:1 knowledge:4 lim:2 maillard:1 carefully:1 ea:8 auer:1 improved:1 generality:1 furthermore:2 stage:1 until:1 continuity:2 name:1 building:1 true:3 hence:2 boucheron:1 leibler:6 deal:1 round:3 numerator:3 maintained:1 outline:1 complete:1 recently:1 common:1 pseudocode:2 conditioning:1 extend:2 belong:5 tail:2 discussed:1 multiarmed:1 cambridge:1 mathematics:1 europe:2 impressive:1 longer:1 similarity:1 deduce:1 pu:1 something:1 posterior:25 own:2 recent:2 inf:2 inequality:7 yi:2 postponed:1 der:1 integrable:1 seen:2 maximize:1 exceeds:1 lai:2 y:7 controlled:2 denominator:4 expectation:1 arxiv:2 sometimes:1 normalization:1 achieved:2 c1:7 whereas:2 want:2 addition:1 interval:2 crucial:1 rest:1 unlike:1 strict:1 massart:1 integer:2 extracting:1 exceed:2 enough:4 suboptimal:1 reduce:1 idea:2 simplifies:1 tradeoff:1 expression:1 proceed:1 action:3 remark:1 dramatically:1 useful:1 cornerstone:1 covered:2 induces:1 concentrated:1 exist:1 canonical:6 write:1 shall:1 controled:1 drawn:2 garivier:2 ht:2 asymptotically:5 year:2 uncertainty:1 place:1 family:31 draw:3 appendix:4 bound:32 played:2 yielded:1 adapted:2 sake:1 argument:7 optimality:3 min:2 relatively:1 according:2 ball:5 conjugate:1 slightly:2 jeffreys:27 explained:1 invariant:3 heart:1 taken:1 ln:12 remains:1 randomised:2 describing:1 turn:1 needed:2 know:1 tractable:1 end:4 available:4 alternative:1 encounter:1 denotes:1 publishing:1 exploit:1 build:1 
prof:2 society:1 objective:1 already:4 quantity:3 parametric:6 concentration:18 ys0:19 said:1 che:1 parametrized:2 index:1 providing:1 statement:1 potentially:1 expense:1 nord:2 design:1 implementation:1 proper:3 unknown:4 bianchi:1 upper:7 observation:11 finite:6 t:4 payoff:1 incorporated:1 team:2 y1:3 introduced:1 complement:1 namely:2 timal:1 kl:12 extensive:2 connection:1 beyond:2 able:1 below:1 xm:4 tb:7 built:1 including:3 max:2 royal:1 event:6 natural:1 difficulty:1 examination:1 stricly:1 arm:28 concludes:1 prior:53 literature:3 asymptotic:5 lecture:1 bear:1 takemura:1 proportional:2 allocation:2 agent:2 sufficient:6 s0:4 pareto:3 playing:1 heavy:3 course:2 last:2 allow:1 face:1 munos:4 van:2 depth:1 forward:1 adaptive:1 simplified:1 avoided:1 far:1 polynomially:1 compact:2 derivable:1 kullback:6 sat:2 generalising:3 b1:10 continuous:3 tailed:3 nature:1 complex:1 aistats:1 pk:1 main:3 linearly:1 big:1 bounding:4 whole:1 n2:4 telecom:3 n:2 explicit:1 exponential:30 lie:1 theorem:11 xt:4 specific:1 showing:1 x:3 evidence:1 exists:5 sequential:1 nat:2 te:1 conditioned:1 remi:2 univariate:1 covx:1 likely:2 bubeck:1 expressed:1 u2:11 springer:2 futher:1 satisfies:1 conditional:1 goal:1 sized:1 loosing:1 careful:1 towards:1 fisher:5 paristech:2 infinite:1 lemma:11 called:5 ya:9 ucb:4 support:1 evaluate:1 ex:2 |
4,544 | 5,111 | Bayesian Mixture Modeling and Inference based
Thompson Sampling in Monte-Carlo Tree Search
Aijun Bai
Univ. of Sci. & Tech. of China
[email protected]
Feng Wu
University of Southampton
[email protected]
Xiaoping Chen
Univ. of Sci. & Tech. of China
[email protected]
Abstract
Monte-Carlo tree search (MCTS) has been drawing great interest in recent years
for planning and learning under uncertainty. One of the key challenges is the
trade-off between exploration and exploitation. To address this, we present a
novel approach for MCTS using Bayesian mixture modeling and inference based
Thompson sampling and apply it to the problem of online planning in MDPs.
Our algorithm, named Dirichlet-NormalGamma MCTS (DNG-MCTS), models
the uncertainty of the accumulated reward for actions in the search tree as a mixture of Normal distributions. We perform inferences on the mixture in Bayesian
settings by choosing conjugate priors in the form of combinations of Dirichlet
and NormalGamma distributions and select the best action at each decision node
using Thompson sampling. Experimental results confirm that our algorithm advances the state-of-the-art UCT approach with better values on several benchmark
problems.
1
Introduction
Markov decision processes (MDPs) provide a general framework for planning and learning under
uncertainty. We consider the problem of online planning in MDPs without prior knowledge on the
underlying transition probabilities. Monte-Carlo tree search (MCTS) can find near-optimal policies
in our domains by combining tree search methods with sampling techniques. The key idea is to iteratively evaluate each state in a best-first search tree by the mean outcome of simulation samples. It is
model-free and requires only a black-box simulator (generative model) of the underlying problems.
To date, great success has been achieved by MCTS in variety of domains, such as game play [1, 2],
planning under uncertainty [3, 4, 5], and Bayesian reinforcement learning [6, 7].
When applying MCTS, one of the fundamental challenges is the so-called exploration versus exploitation dilemma: an agent must not only exploit by selecting the best action based on the current information, but should also keep exploring other actions for possible higher future payoffs. Thompson sampling is one of the earliest heuristics to address this dilemma in multi-armed bandit problems (MABs), following the principle of randomized probability matching [8]. The basic idea is to select actions stochastically, based on their probabilities of being optimal. It has recently been shown to perform very well in MABs both empirically [9] and theoretically [10]: Thompson sampling achieves logarithmic expected regret, which is asymptotically optimal for MABs. Compared to the UCB1 heuristic [3], the main advantage of Thompson sampling is that it allows more robust convergence under a wide range of problem settings.
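To make the randomized probability matching idea concrete, the following sketch (an illustrative example of ours, not part of this paper's algorithm) runs Thompson sampling with Beta posteriors on a two-armed Bernoulli bandit; the arm probabilities and pull budget are arbitrary.

```python
import random

def thompson_bandit(probs, n_pulls, seed=0):
    """Thompson sampling for Bernoulli arms with Beta(1, 1) priors.

    Each round: sample a mean from every arm's Beta posterior, pull the
    arm with the highest sample, then update that arm's posterior.
    """
    rng = random.Random(seed)
    alpha = [1.0] * len(probs)  # successes + 1
    beta = [1.0] * len(probs)   # failures + 1
    counts = [0] * len(probs)
    for _ in range(n_pulls):
        samples = [rng.betavariate(alpha[i], beta[i]) for i in range(len(probs))]
        arm = max(range(len(probs)), key=lambda i: samples[i])
        reward = 1 if rng.random() < probs[arm] else 0
        alpha[arm] += reward
        beta[arm] += 1 - reward
        counts[arm] += 1
    return counts

# With a large gap between the arms, nearly all pulls go to the better arm.
counts = thompson_bandit([0.9, 0.2], 2000)
```

Because arms are selected in proportion to their posterior probability of being optimal, exploration of the worse arm decays automatically as evidence accumulates.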
In this paper, we borrow the idea of Thompson sampling and propose the Dirichlet-NormalGamma MCTS (DNG-MCTS) algorithm: a novel Bayesian mixture modeling and inference based Thompson sampling approach for online planning in MDPs. In this algorithm, we use a mixture of Normal distributions to model the unknown distribution of the accumulated reward of performing a particular action in the MCTS search tree. In the setting of online planning for MDPs, a conjugate prior
exists in the form of a combination of Dirichlet and NormalGamma distributions. By choosing the conjugate prior, it is relatively simple to compute the posterior distribution after each accumulated reward is observed by simulation in the search tree. Thompson sampling is then used to select the action to be performed by simulation at each decision node. We have tested our DNG-MCTS algorithm and compared it with the popular UCT algorithm on several benchmark problems. Experimental results show that our proposed algorithm outperforms the state of the art for online planning in general MDPs. Furthermore, we show the convergence of our algorithm, confirming its technical soundness.
The remainder of this paper is organized as follows. In Section 2, we briefly introduce the necessary background. Section 3 presents our main results: the DNG-MCTS algorithm. We show experimental results on several benchmark problems in Section 4. Finally, Section 5 concludes the paper with a summary of our contributions and future work.
2 Background
In this section, we briefly review the MDP model, the MAB problem, the MCTS framework, and
the UCT algorithm as the basis of our algorithm. Some related work is also presented.
2.1 MDPs and MABs
Formally, an MDP is defined as a tuple ⟨S, A, T, R⟩, where S is the state space, A is the action space, T(s'|s, a) is the probability of reaching state s' if action a is applied in state s, and R(s, a) is the reward received by the agent. A policy is a decision rule mapping states to actions, specifying which action should be taken in each state. The aim of solving an MDP is to find the optimal policy π* that maximizes the expected reward, defined as

    V_π(s) = E[Σ_{t=0}^{H} γ^t R(s_t, π(s_t))],

where H is the planning horizon, γ ∈ (0, 1] is the discount factor, s_t is the state at time step t, and π(s_t) is the action selected by policy π in state s_t.
Intuitively, an MAB can be seen as an MDP with only one state s and a stochastic reward function R(s, a) := X_a, where X_a is a random variable following an unknown distribution f_{X_a}(x). At each time step t, one action a_t must be chosen and executed, and a stochastic reward X_{a_t} is then received accordingly. The goal is to find a sequence of actions that minimizes the cumulative regret, defined as

    R_T = E[Σ_{t=1}^{T} (X_{a*} − X_{a_t})],

where a* is the true best action.
2.2 MCTS and UCT
To solve MDPs, MCTS iteratively evaluates a state by: (1) selecting an action based on a given action selection strategy; (2) performing the selected action by Monte-Carlo simulation; (3) recursively evaluating the resulting state if it is already in the search tree, or otherwise inserting it into the search tree and running a rollout policy by simulation. This process is applied to descend through the search tree until some termination condition is reached. The simulation result is then back-propagated through the selected nodes to update their statistics.
The UCT algorithm is a popular approach based on MCTS for planning under uncertainty [3]. It treats each state of the search tree as an MAB, and selects the action that maximizes the UCB1 heuristic

    Q̄(s, a) + c √(log N(s) / N(s, a)),

where Q̄(s, a) is the mean return of action a in state s from all previous simulations, N(s, a) is the visitation count of action a in state s, N(s) = Σ_{a∈A} N(s, a) is the overall count, and c is the exploration constant that determines the relative ratio of exploration to exploitation. It is proved that with an appropriate choice of c the probability of selecting the optimal action converges to 1 as the number of samples grows to infinity.
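As a concrete reference point for the UCB1 rule above, here is a minimal sketch of UCB1 action selection on a bandit; the deterministic per-arm rewards and the constant c are illustrative choices of ours, not values used in the paper.

```python
import math

def ucb1_select(q_mean, n_action, n_total, c=1.0):
    """Pick the action maximizing Q(s,a) + c * sqrt(log N(s) / N(s,a)).

    Untried actions (count 0) are selected first, as is standard.
    """
    for a, n in enumerate(n_action):
        if n == 0:
            return a
    scores = [q_mean[a] + c * math.sqrt(math.log(n_total) / n_action[a])
              for a in range(len(q_mean))]
    return max(range(len(scores)), key=lambda a: scores[a])

def run_ucb1(rewards, steps, c=1.0):
    """Run UCB1 on a bandit with deterministic per-arm rewards."""
    k = len(rewards)
    q = [0.0] * k
    n = [0] * k
    for t in range(steps):
        a = ucb1_select(q, n, max(t, 1), c)
        n[a] += 1
        q[a] += (rewards[a] - q[a]) / n[a]  # incremental mean
    return n

pulls = run_ucb1([1.0, 0.0], 100)
```

The exploration bonus shrinks as an arm's count grows, so the clearly worse arm is still revisited occasionally but receives far fewer pulls.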
2.3 Related Work
The fundamental assumption of our algorithm is to model the unknown distribution of the accumulated reward for each state-action pair in the search tree as a mixture of Normal distributions. A similar assumption has been made in [11], where the rewards were assumed to follow a single Normal distribution. Compared to their approach, as we will show in Section 3, our Normal mixture assumption is more realistic for our problems. Tesauro et al. [12] developed a Bayesian UCT approach to MCTS
using Gaussian approximation. Specifically, their method propagates probability distributions of rewards from the leaf nodes up to the root node by applying a MAX (or MIN) extremum distribution operator at the interior nodes. It then uses modified UCB1 heuristics to select actions on the basis of the interior distributions. However, the extremum distribution operation at decision nodes is very time-consuming because it must consider all the child nodes. In contrast, we treat each decision node in the search tree as an MAB, maintain a posterior distribution over the accumulated reward for each applicable action separately, and then select the best action using Thompson sampling.
3 The DNG-MCTS Algorithm

This section presents our main results: a Bayesian mixture modeling and inference based Thompson sampling approach for MCTS (DNG-MCTS).
3.1 The Assumptions
For a given MDP policy π, let X_{s,π} be a random variable denoting the accumulated reward of following policy π starting from state s, and let X_{s,a,π} denote the accumulated reward of first performing action a in state s and then following policy π thereafter. Our assumptions are: (1) X_{s,π} is sampled from a Normal distribution, and (2) X_{s,a,π} can be modeled as a mixture of Normal distributions. These are realistic approximations for our problems, for the following reasons.
Given policy π, an MDP reduces to a Markov chain {s_t} with finite state space S and transition function T(s'|s, π(s)). Suppose that the resulting chain {s_t} is ergodic; that is, it is possible to go from every state to every other state (not necessarily in one move). Let w denote the stationary distribution of {s_t}. According to the central limit theorem on Markov chains [13, 14], for any bounded function f on the state space S, we have:

    (1/√n) (Σ_{t=0}^{n} f(s_t) − nμ) → N(0, σ²)  as n → ∞,    (1)

where μ = E_w[f] and σ is a constant depending only on f and w. This indicates that the sum of f(s_t) follows N(nμ, nσ²) as n grows to infinity. It is then natural to approximate the distribution of Σ_{t=0}^{n} f(s_t) as a Normal distribution if n is sufficiently large.
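The Markov chain CLT can be checked empirically. The sketch below (with illustrative parameters of our own choosing) simulates a two-state chain with P(0→1) = 0.3 and P(1→0) = 0.6, whose stationary distribution is w = (2/3, 1/3); with f(s) = s, the sum of f(s_t) over n steps should concentrate around nμ with μ = 1/3.

```python
import random

def chain_sum(n, seed):
    """Sum of f(s_t) = s_t over n steps of a two-state Markov chain."""
    rng = random.Random(seed)
    s, total = 0, 0
    for _ in range(n):
        if s == 0:
            s = 1 if rng.random() < 0.3 else 0
        else:
            s = 0 if rng.random() < 0.6 else 1
        total += s
    return total

n, runs = 2000, 300
sums = [chain_sum(n, seed) for seed in range(runs)]
mean_per_step = sum(sums) / (runs * n)
# mean_per_step should be close to the stationary expectation 1/3
```

A histogram of `sums` would also look approximately Gaussian around nμ, in line with Eq. (1).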
Considering finite-horizon MDPs with horizon H, if γ = 1, then X_{s_0,π} = Σ_{t=0}^{H} R(s_t, π(s_t)) is a sum of f(s_t) = R(s_t, π(s_t)). Thus, X_{s_0,π} is approximately normally distributed for each s_0 ∈ S if H is sufficiently large. On the other hand, if γ ≠ 1, X_{s_0,π} = Σ_{t=0}^{H} γ^t R(s_t, π(s_t)) can be rewritten as a linear combination of Σ_{t=0}^{n} f(s_t) for n = 0 to H as follows:

    X_{s_0,π} = (1 − γ) Σ_{n=0}^{H−1} γ^n Σ_{t=0}^{n} f(s_t) + γ^H Σ_{t=0}^{H} f(s_t).    (2)
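The decomposition in Eq. (2) is an exact algebraic identity (the coefficient of each f(s_t) on the right-hand side telescopes back to γ^t), which the following quick numerical check, with arbitrary values of our choosing, confirms:

```python
def discounted_sum(f, gamma):
    """Left-hand side: sum of gamma^t * f(s_t)."""
    return sum(gamma ** t * x for t, x in enumerate(f))

def decomposition(f, gamma):
    """Right-hand side of Eq. (2): a linear combination of prefix sums."""
    H = len(f) - 1
    prefix = [sum(f[: n + 1]) for n in range(H + 1)]
    return ((1 - gamma) * sum(gamma ** n * prefix[n] for n in range(H))
            + gamma ** H * prefix[H])

f = [0.5, -1.2, 3.0, 0.7, 2.2]   # arbitrary per-step values f(s_t)
gamma = 0.9
lhs = discounted_sum(f, gamma)
rhs = decomposition(f, gamma)
# lhs and rhs agree up to floating-point error
```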
Notice that a linear combination of independent or correlated normally distributed random variables is still normally distributed. If H is sufficiently large and γ is close to 1, it is therefore reasonable to approximate X_{s_0,π} as a Normal distribution. We thus assume that X_{s,π} is normally distributed in both cases.
If the policy π is not fixed and may change over time (e.g., the derived policy of an online algorithm before it converges), the real distribution of X_{s,π} is actually unknown and could be very complex. However, if the algorithm is guaranteed to converge in the limit (as explained in Section 3.5, this holds for our DNG-MCTS algorithm), it is convenient and reasonable to approximate X_{s,π} as a Normal distribution.
Now consider the accumulated reward of first performing action a in s and following policy π thereafter. By definition, X_{s,a,π} = R(s, a) + γ X_{s',π}, where s' is the next state, distributed according to T(s'|s, a). Let Y_{s,a,π} be a random variable defined as Y_{s,a,π} = (X_{s,a,π} − R(s, a))/γ. We can see that the pdf of Y_{s,a,π} is a convex combination of the pdfs of X_{s',π} for each s' ∈ S. Specifically, we have

    f_{Y_{s,a,π}}(y) = Σ_{s'∈S} T(s'|s, a) f_{X_{s',π}}(y).

Hence it is straightforward to model the distribution of Y_{s,a,π} as a mixture of Normal distributions if X_{s',π} is assumed to be normally distributed for each s' ∈ S. Since X_{s,a,π} is a linear function of Y_{s,a,π}, X_{s,a,π} is also a mixture of Normal distributions under our assumptions.
3.2 The Modeling and Inference Methods
In Bayesian settings, the unknown distribution of a random variable X can be modeled as a parametric likelihood function L(x|θ) depending on the parameters θ. Given a prior distribution P(θ) and a set of past observations Z = {x_1, x_2, ...}, the posterior distribution of θ can then be obtained using Bayes' rule: P(θ|Z) ∝ Π_i L(x_i|θ) P(θ).
Assumption (1) implies that it suffices to model the distribution of X_{s,π} as a Normal likelihood N(μ_s, 1/τ_s) with unknown mean μ_s and precision τ_s. The precision is defined as the reciprocal of the variance, τ = 1/σ². This parameterization is chosen for the mathematical convenience of introducing the NormalGamma distribution as a conjugate prior. A NormalGamma distribution is defined by the hyper-parameters ⟨μ_0, λ, α, β⟩ with λ > 0, α ≥ 1 and β ≥ 0. It is said that (μ, τ) follows a NormalGamma distribution NormalGamma(μ_0, λ, α, β) if the pdf of (μ, τ) has the form

    f(μ, τ | μ_0, λ, α, β) = (β^α √λ) / (Γ(α) √(2π)) · τ^{α−1/2} e^{−βτ} e^{−λτ(μ−μ_0)²/2}.    (3)

By definition, the marginal distribution over τ is a Gamma distribution, τ ∼ Gamma(α, β), and the conditional distribution over μ given τ is a Normal distribution, μ ∼ N(μ_0, 1/(λτ)).
Let us briefly recall the posterior of (μ, τ). Suppose X is normally distributed with unknown mean μ and precision τ, x ∼ N(μ, 1/τ), and that the prior distribution of (μ, τ) is a NormalGamma distribution, (μ, τ) ∼ NormalGamma(μ_0, λ_0, α_0, β_0). After observing n independent samples of X, denoted {x_1, x_2, ..., x_n}, according to Bayes' theorem, the posterior distribution of (μ, τ) is also a NormalGamma distribution, (μ, τ) ∼ NormalGamma(μ_n, λ_n, α_n, β_n), where

    μ_n = (λ_0 μ_0 + n x̄)/(λ_0 + n),  λ_n = λ_0 + n,  α_n = α_0 + n/2,
    β_n = β_0 + (n s + λ_0 n (x̄ − μ_0)²/(λ_0 + n))/2,

where x̄ = Σ_{i=1}^{n} x_i / n is the sample mean and s = Σ_{i=1}^{n} (x_i − x̄)²/n is the sample variance.
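These closed-form updates are simple to implement. The helper below (our own illustrative code, not the authors' implementation) maps a NormalGamma prior and a batch of observations to the posterior hyper-parameters:

```python
def normalgamma_posterior(mu0, lam0, alpha0, beta0, xs):
    """Batch NormalGamma update from n i.i.d. Normal observations xs."""
    n = len(xs)
    xbar = sum(xs) / n
    s = sum((x - xbar) ** 2 for x in xs) / n  # biased sample variance
    mu_n = (lam0 * mu0 + n * xbar) / (lam0 + n)
    lam_n = lam0 + n
    alpha_n = alpha0 + n / 2
    beta_n = beta0 + (n * s + lam0 * n * (xbar - mu0) ** 2 / (lam0 + n)) / 2
    return mu_n, lam_n, alpha_n, beta_n

# e.g. prior (0, 1, 1, 1) and observations [1.0, 3.0]:
post = normalgamma_posterior(0.0, 1.0, 1.0, 1.0, [1.0, 3.0])
```

With prior (0, 1, 1, 1) and data {1, 3}, the formulas give x̄ = 2, s = 1, and posterior (4/3, 3, 2, 10/3), which is what the function returns.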
Based on Assumption (2), the distribution of Y_{s,a,π} can be modeled as a mixture of Normal distributions,

    Y_{s,a,π} = (X_{s,a,π} − R(s, a))/γ ∼ Σ_{s'∈S} w_{s,a,s'} N(μ_{s'}, 1/τ_{s'}),

where the mixture weights w_{s,a,s'} = T(s'|s, a) satisfy w_{s,a,s'} ≥ 0 and Σ_{s'∈S} w_{s,a,s'} = 1, and are previously unknown in Monte-Carlo settings. A natural representation of these unknown weights is via Dirichlet distributions, since the Dirichlet distribution is the conjugate prior of a general discrete probability distribution. For state s and action a, a Dirichlet distribution, denoted Dir(ρ_{s,a}) where ρ_{s,a} = (ρ_{s,a,s_1}, ρ_{s,a,s_2}, ...), gives the posterior distribution of T(s'|s, a) for each s' ∈ S if the transition to s' has been observed ρ_{s,a,s'} − 1 times. After observing a transition (s, a) → s', the posterior distribution is also Dirichlet and can simply be updated as ρ_{s,a,s'} ← ρ_{s,a,s'} + 1.
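The Dirichlet bookkeeping amounts to a table of pseudo-counts. A minimal sketch (class and method names are our own, hypothetical choices):

```python
from collections import defaultdict

class TransitionPosterior:
    """Dirichlet posterior over T(s'|s,a), stored as pseudo-counts rho."""

    def __init__(self, eps=0.01):
        self.eps = eps  # small prior pseudo-count per next state
        self.rho = defaultdict(lambda: defaultdict(lambda: eps))

    def observe(self, s, a, s_next):
        # posterior update: rho[s,a,s'] <- rho[s,a,s'] + 1
        self.rho[(s, a)][s_next] += 1

    def expected_weights(self, s, a):
        counts = self.rho[(s, a)]
        total = sum(counts.values())
        return {sp: c / total for sp, c in counts.items()}

tp = TransitionPosterior()
for s_next in ["s1", "s1", "s2"]:
    tp.observe("s0", "a0", s_next)
w = tp.expected_weights("s0", "a0")
```

After two observed transitions to s1 and one to s2, the expected weights are (2.01, 1.01)/3.02, i.e. the normalized pseudo-counts.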
Therefore, to model the distributions of X_{s,π} and X_{s,a,π} we only need to maintain a set of hyper-parameters ⟨μ_{s,0}, λ_s, α_s, β_s⟩ and ρ_{s,a} for each state s and action a encountered in the MCTS search tree, and update them using Bayes' rule.
Now we turn to the question of how to choose the priors by initializing hyper-parameters. While the
impact of the prior tends to be negligible in the limit, its choice is important especially when only
a small amount of data has been observed. In general, priors should reflect available knowledge of
the hidden model.
In the absence of any knowledge, uninformative priors may be preferred. According to the principle of indifference, uninformative priors assign equal probabilities to all possibilities. For NormalGamma priors, we want the sampled distribution of μ given τ, i.e., N(μ_0, 1/(λτ)), to be as flat as possible. This implies an infinite variance, 1/(λτ) → ∞, so that λτ → 0. Recall that τ follows a Gamma distribution Gamma(α, β) with expectation E[τ] = α/β, so in expectation we need λα/β → 0. Considering the parameter space (λ > 0, α ≥ 1, β ≥ 0), we can choose λ small enough, α = 1 and β sufficiently large to approximate this condition. Second, we want the sampled distribution to be centered on the axis, so μ_0 = 0 seems to be a good selection. It is worth noting that β should intuitively not be set too large, or the convergence process may be very slow. For Dirichlet priors, it is common to set ρ_{s,a,s'} = ε, where ε is a small enough positive number, for each s ∈ S, a ∈ A and s' ∈ S encountered in the search tree.
On the other hand, if some prior knowledge is available, informative priors may be preferred. By exploiting domain knowledge, a state node can be initialized with informative priors indicating its priority over other states. In DNG-MCTS, this is done by setting the hyper-parameters based on subjective estimates for states. According to the interpretation of the hyper-parameters of the NormalGamma distribution in terms of pseudo-observations, if one has a prior mean of μ_0 from λ samples and a prior precision of α/β from 2α samples, the prior distribution over μ and τ is NormalGamma(μ_0, λ, α, β), providing a straightforward way to initialize the hyper-parameters if some prior knowledge (such as historical data of past observations) is available. Specifying detailed priors for particular domains is beyond the scope of this paper. The ability to include prior information provides important flexibility and can be considered an advantage of the approach.
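Under this pseudo-observation reading, converting domain knowledge into hyper-parameters is mechanical. A small illustrative helper (the function name and example numbers are ours):

```python
def informative_prior(prior_mean, mean_samples, prior_precision, precision_samples):
    """NormalGamma hyper-parameters from pseudo-observations.

    A prior mean mu0 backed by lam samples, and a prior precision alpha/beta
    backed by 2*alpha samples, give NormalGamma(mu0, lam, alpha, beta).
    """
    mu0 = prior_mean
    lam = mean_samples
    alpha = precision_samples / 2.0
    beta = alpha / prior_precision
    return mu0, lam, alpha, beta

# e.g. a believed mean of 5.0 "worth 10 samples" and precision 2.0 "worth 20":
prior = informative_prior(5.0, 10, 2.0, 20)
```

Here α = 20/2 = 10 and β = α / 2.0 = 5, so the prior is NormalGamma(5.0, 10, 10, 5).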
3.3 The Action Selection Strategy
In DNG-MCTS, the action selection strategy is derived using Thompson sampling. Specifically, in general Bayesian settings, action a is chosen with probability

    P(a) = ∫ 1[a = argmax_{a'} E[X_{a'} | θ_{a'}]] Π_{a'} P_{a'}(θ_{a'} | Z) dθ,    (4)

where 1[·] is the indicator function, θ_a is the hidden parameter prescribing the underlying distribution of the reward of applying a, E[X_a | θ_a] = ∫ x L_a(x | θ_a) dx is the expectation of X_a given θ_a, and θ = (θ_{a_1}, θ_{a_2}, ...) is the vector of parameters for all actions. Fortunately, this can be implemented efficiently by sampling. To this end, a set of parameters θ_a is sampled according to the posterior distributions P_a(θ_a | Z) for each a ∈ A, and the action a* = argmax_a E[X_a | θ_a] with the highest expectation is selected.
In our implementation, at each decision node s of the search tree, we sample the mean μ_{s'} and mixture weights w_{s,a,s'} according to NormalGamma(μ_{s',0}, λ_{s'}, α_{s'}, β_{s'}) and Dir(ρ_{s,a}) respectively, for each possible next state s' ∈ S. The expectation of X_{s,a,π} is then computed as R(s, a) + γ Σ_{s'∈S} w_{s,a,s'} μ_{s'}. The action with the highest expectation is then selected to be performed in simulation.
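One Thompson-sampling decision at a node therefore needs a NormalGamma draw per candidate next state and a Dirichlet draw per action. A self-contained sketch using Python's standard samplers (the example posteriors are contrived by us so that the better action is obvious; they are not values from the paper):

```python
import math
import random

rng = random.Random(1)

def sample_normalgamma(mu0, lam, alpha, beta):
    """Draw (mu, tau) from NormalGamma(mu0, lam, alpha, beta)."""
    tau = rng.gammavariate(alpha, 1.0 / beta)      # Gamma with rate beta
    mu = rng.gauss(mu0, 1.0 / math.sqrt(lam * tau))
    return mu, tau

def sample_dirichlet(alphas):
    draws = [rng.gammavariate(a, 1.0) for a in alphas]
    total = sum(draws)
    return [d / total for d in draws]

def thompson_q(reward, gamma, next_posteriors, dir_alphas):
    """Sampled Q = R(s,a) + gamma * sum_s' w_s' * mu_s'."""
    ws = sample_dirichlet(dir_alphas)
    mus = [sample_normalgamma(*p)[0] for p in next_posteriors]
    return reward + gamma * sum(w * m for w, m in zip(ws, mus))

# Two actions, each leading to one next state with a sharply concentrated
# posterior (huge lam), so the sampled means are essentially the mu0's.
q_a = thompson_q(0.0, 0.95, [(1.0, 1e12, 100.0, 100.0)], [10.0])
q_b = thompson_q(0.0, 0.95, [(0.0, 1e12, 100.0, 100.0)], [10.0])
best = "a" if q_a > q_b else "b"
```

With broad posteriors the sampled Q values fluctuate and induce exploration; as the posteriors concentrate, the draws collapse onto the expected values and the selection becomes greedy.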
3.4 The Main Algorithm
The main process of DNG-MCTS is outlined in Figure 1. It is worth noting that the function ThompsonSampling has a boolean parameter sampling. If sampling is true, the Thompson sampling method is used to select the best action as explained in Section 3.3; otherwise, a greedy action is returned with respect to the current expected transition probabilities and accumulated rewards of next states, which are E[w_{s,a,s'}] = ρ_{s,a,s'} / Σ_{x∈S} ρ_{s,a,x} and E[X_{s,π}] = μ_{s,0} respectively.
At each iteration, the function DNG-MCTS uses Thompson sampling to recursively select actions
to be executed by simulation from the root node to leaf nodes through the existing search tree T .
It inserts each newly visited node into the tree, plays a default rollout policy from the new node,
and propagates the simulated outcome to update the hyper-parameters for visited states and actions.
Noting that the rollout policy is only played once for each new node at each iteration, the set of past
observations Z in the algorithm has size n = 1.
The function OnlinePlanning is the overall procedure interacting with the real environment. It is called with the current state s, a search tree T that is initially empty, and the maximal horizon H. It repeatedly calls the function DNG-MCTS until some resource budget is reached (e.g., the computation times out or the maximal number of iterations is reached), at which point a greedy action to be performed in the environment is returned to the agent.
3.5 The Convergence Property
For Thompson sampling in stationary MABs (i.e., where the underlying reward function does not change), it has been proved that: (1) the probability of selecting any suboptimal action a at the current step is bounded by a linear function of the probability of selecting the optimal action; and (2) the coefficient in this linear function decreases exponentially fast with the number of selections of the optimal action [15]. Thus, the probability of selecting the optimal action in an MAB is guaranteed to converge to 1 in the limit using Thompson sampling.
OnlinePlanning(s: state, T: tree, H: max horizon)
    Initialize (μ_{s,0}, λ_s, α_s, β_s) for each s ∈ S
    Initialize ρ_{s,a} for each s ∈ S and a ∈ A
    repeat
        DNG-MCTS(s, T, H)
    until resource budgets reached
    return ThompsonSampling(s, H, False)

DNG-MCTS(s: state, T: tree, h: horizon)
    if h = 0 or s is terminal then
        return 0
    else if node ⟨s, h⟩ is not in tree T then
        Add node ⟨s, h⟩ to T
        Play rollout policy by simulation for h steps
        Observe the outcome r
        return r
    else
        a ← ThompsonSampling(s, h, True)
        Execute a by simulation
        Observe next state s' and reward R(s, a)
        r ← R(s, a) + γ · DNG-MCTS(s', T, h − 1)
        α_s ← α_s + 0.5
        β_s ← β_s + (λ_s (r − μ_{s,0})² / (λ_s + 1)) / 2
        μ_{s,0} ← (λ_s μ_{s,0} + r) / (λ_s + 1)
        λ_s ← λ_s + 1
        ρ_{s,a,s'} ← ρ_{s,a,s'} + 1
        return r

ThompsonSampling(s: state, h: horizon, sampling: boolean)
    foreach a ∈ A do
        q_a ← QValue(s, a, h, sampling)
    return argmax_a q_a

QValue(s: state, a: action, h: horizon, sampling: boolean)
    r ← 0
    foreach s' ∈ S do
        if sampling = True then
            Sample w_{s'} according to Dir(ρ_{s,a})
        else
            w_{s'} ← ρ_{s,a,s'} / Σ_{n∈S} ρ_{s,a,n}
        r ← r + w_{s'} · Value(s', h − 1, sampling)
    return R(s, a) + γ · r

Value(s: state, h: horizon, sampling: boolean)
    if h = 0 or s is terminal then
        return 0
    else
        if sampling = True then
            Sample (μ, τ) according to NormalGamma(μ_{s,0}, λ_s, α_s, β_s)
            return μ
        else
            return μ_{s,0}

Figure 1: Dirichlet-NormalGamma based Monte-Carlo Tree Search
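To make Figure 1 concrete, below is a compact Python transcription of the same control flow on a toy three-state MDP of our own construction (at the root, action 0 leads to a state whose single action pays 1, action 1 to a state paying 0). It is a hedged sketch, not the authors' implementation; the hyper-parameter initialization follows the uninformative choices discussed above, and the Dirichlet pseudo-count prior is omitted for brevity.

```python
import math
import random
from collections import defaultdict

rng = random.Random(0)
GAMMA = 0.95
H = 3

# Toy MDP (illustrative): None denotes the terminal state.
ACTIONS = {0: [0, 1], 1: [0], 2: [0]}
REWARD = {(0, 0): 0.0, (0, 1): 0.0, (1, 0): 1.0, (2, 0): 0.0}

def step(s, a):
    """Black-box simulator: returns (next_state, reward)."""
    if s == 0:
        return (1, 0.0) if a == 0 else (2, 0.0)
    return (None, REWARD[(s, a)])

ng = defaultdict(lambda: [0.0, 0.01, 1.0, 100.0])  # s -> [mu0, lam, alpha, beta]
rho = defaultdict(lambda: defaultdict(float))      # (s, a) -> {s': count}
tree = set()

def value(s, h, sampling):
    if h == 0 or s is None:
        return 0.0
    mu0, lam, alpha, beta = ng[s]
    if not sampling:
        return mu0
    tau = rng.gammavariate(alpha, 1.0 / beta)
    return rng.gauss(mu0, 1.0 / math.sqrt(lam * tau))

def qvalue(s, a, h, sampling):
    counts = rho[(s, a)]
    if not counts:                     # no transitions observed yet
        return REWARD[(s, a)]
    if sampling:                       # Dirichlet draw via normalized Gammas
        ws = {sp: rng.gammavariate(c, 1.0) for sp, c in counts.items()}
    else:
        ws = dict(counts)
    total = sum(ws.values())
    r = sum(w / total * value(sp, h - 1, sampling) for sp, w in ws.items())
    return REWARD[(s, a)] + GAMMA * r

def thompson(s, h, sampling):
    acts = ACTIONS[s]
    qs = [qvalue(s, a, h, sampling) for a in acts]
    return acts[max(range(len(acts)), key=lambda i: qs[i])]

def rollout(s, h):
    ret, disc = 0.0, 1.0
    while s is not None and h > 0:
        s, r0 = step(s, rng.choice(ACTIONS[s]))
        ret += disc * r0
        disc *= GAMMA
        h -= 1
    return ret

def dng_mcts(s, h):
    if h == 0 or s is None:
        return 0.0
    if (s, h) not in tree:
        tree.add((s, h))
        return rollout(s, h)
    a = thompson(s, h, sampling=True)
    s2, r0 = step(s, a)
    ret = r0 + GAMMA * dng_mcts(s2, h - 1)
    mu0, lam, alpha, beta = ng[s]      # update with the observed return
    ng[s] = [(lam * mu0 + ret) / (lam + 1), lam + 1,
             alpha + 0.5, beta + lam * (ret - mu0) ** 2 / (2 * (lam + 1))]
    rho[(s, a)][s2] += 1
    return ret

for _ in range(200):
    dng_mcts(0, H)
best = thompson(0, H, sampling=False)  # greedy action at the root
```

After a couple of hundred iterations, the posterior mean for the rewarding successor state concentrates near 1 and the greedy root action is action 0, mirroring what OnlinePlanning in Figure 1 would return.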
The distribution of X_{s,π} is determined by the transition function and the Q values given the policy π. When the Q values converge, the distribution of X_{s,π} becomes stationary under the optimal policy. For the leaf nodes (level H) of the search tree, Thompson sampling will converge to the optimal actions with probability 1 in the limit, since the MABs are stationary. When all the leaf nodes converge, the distributions of the return values from them no longer change, so the MABs of the nodes at level H − 1 become stationary as well. Thus, Thompson sampling will also converge to the optimal actions for nodes at level H − 1. Recursively, this holds for all upper-level nodes. Therefore, we conclude that DNG-MCTS can find the optimal policy for the root node if unbounded computational resources are given.
4 Experiments
We have tested our DNG-MCTS algorithm and compared the results with UCT in three common MDP benchmark domains: the Canadian traveler problem, racetrack, and sailing. These problems are modeled as cost-based MDPs; that is, a cost function c(s, a) is used instead of the reward function R(s, a), and the min operator is used in the Bellman equation instead of the max operator. Accordingly, the objective of solving a cost-based MDP is to find an optimal policy that minimizes the expected accumulated cost for each state. Notice that algorithms developed for reward-based MDPs can be straightforwardly transformed and applied to cost-based MDPs by simply using the min operator instead of max in the Bellman update routines; the min operator is therefore used in the function ThompsonSampling of our transformed DNG-MCTS algorithm. We implemented our code and conducted the experiments on the basis of MDP-engine, an open source software package with a collection of problem instances and base algorithms for MDPs (publicly available at https://code.google.com/p/mdp-engine/).
Table 1: CTP problems with 20 nodes. The second column indicates the belief size of the transformed MDP for each problem instance. UCTB and UCTO are the two domain-specific UCT implementations [18]. DNG-MCTS and UCT were run for 10,000 iterations. In the original typeset table, boldface marked the best value in the whole table and gray cells the best domain-independent implementation in each group. The data for UCTB, UCTO and UCT are taken from [16].

                        domain-specific UCT    random rollout policy    optimistic rollout policy
prob.    belief         UCTB       UCTO        UCT        DNG           UCT        DNG
20-1     20 x 3^49      210.7±7    169.0±6     216.4±3    223.9±4       180.7±3    177.1±3
20-2     20 x 3^49      176.4±4    148.9±3     178.5±2    178.1±2       160.8±2    155.2±2
20-3     20 x 3^51      150.7±7    132.5±6     169.7±4    159.5±4       144.3±3    140.1±3
20-4     20 x 3^49      264.8±9    235.2±7     264.1±4    266.8±4       238.3±3    242.7±4
20-5     20 x 3^52      123.2±7    111.3±5     139.8±4    133.4±4       123.9±3    122.1±3
20-6     20 x 3^49      165.4±6    133.1±3     178.0±3    169.8±3       167.8±2    141.9±2
20-7     20 x 3^50      191.6±6    148.2±4     211.8±3    214.9±4       174.1±2    166.1±3
20-8     20 x 3^51      160.1±7    134.5±5     218.5±4    202.3±4       152.3±3    151.4±3
20-9     20 x 3^50      235.2±6    173.9±4     251.9±3    246.0±3       185.2±2    180.4±2
20-10    20 x 3^49      180.8±7    167.0±5     185.7±3    188.9±4       178.5±3    170.5±3
total                   1858.9     1553.6      2014.4     1983.68       1705.9     1647.4
In each benchmark problem, we (1) ran the transformed algorithms for a number of iterations from the current state, (2) applied the best action based on the resulting action-values, (3) repeated this loop until a terminating condition was met (e.g., a goal state is reached or the maximal number of running steps is exceeded), and (4) reported the total discounted cost. The performance of the algorithms is evaluated by the average total discounted cost over 1,000 independent runs. In all experiments, (μ_{s,0}, λ_s, α_s, β_s) is initialized to (0, 0.01, 1, 100), and ρ_{s,a,s'} is initialized to 0.01 for all s ∈ S, a ∈ A and s' ∈ S. For fair comparison, we also use the same settings as in [16]: for each decision node, (1) only applicable actions are selected, (2) applicable actions are forced to be selected once before any of them is selected twice or more, and (3) the exploration constant for the UCT algorithm is set to the current mean action-value Q(s, a, d).
The Canadian traveler problem (CTP) is a path-finding problem with imperfect information over a graph whose edges may be blocked with given prior probabilities [17]. A CTP can be modeled as a deterministic POMDP, i.e., the only source of uncertainty is the initial belief. When transformed to an MDP, the size of the belief space is n × 3^m, where n is the number of nodes and m is the number of edges. This problem has a discount factor γ = 1. The aim is to navigate to the goal state as quickly as possible. It has recently been addressed by an anytime variation of AO*, named AOT [16], and by two domain-specific implementations of UCT, named UCTB and UCTO, which take advantage of the specific MDP structure of the CTP and use a more informed base policy [18]. In this experiment, we used the same 10 problem instances with 20 nodes as in those papers.
When running DNG-MCTS and UCT on these CTP instances, the number of iterations for each decision was set to 10,000, identical to [16]. Two types of default rollout policy were tested: the random policy, which selects actions with equal probabilities, and the optimistic policy, which assumes traversability for unknown edges and selects actions according to estimated cost. The results are shown in Table 1. Following [16], we include the results of UCTB and UCTO as a reference. From the table, we can see that DNG-MCTS outperformed the domain-independent version of UCT with the random rollout policy in several instances, and performed particularly well compared to UCT with the optimistic rollout policy. Although DNG-MCTS is not as good as the domain-specific UCTO, it is competitive with the general UCT algorithm in this domain.
The racetrack problem simulates a car race [19], where a car starts in a set of initial states and moves towards the goal. At each time step, the car can choose to accelerate in one of eight directions. When moving, the car succeeds in its acceleration with probability 0.9 and fails with probability 0.1. We tested DNG-MCTS and UCT with the random rollout policy and horizon H = 100 on the barto-big instance, which has a state space of size |S| = 22534. The discount factor is γ = 0.95 and the optimal cost is known to be 21.38. We report the curve of the average cost as a function
of the number of iterations in Figure 2a.

[Figure 2: Performance curves for Racetrack and Sailing. (a) Racetrack-barto-big with the random rollout policy; (b) Sailing 100 × 100 with the random rollout policy. Both panels plot the average accumulated cost against the number of iterations for UCT and DNG-MCTS.]

Each data point in the figure was averaged over 1,000 runs, each of which was allowed to run for at most 100 steps. It can be seen from the figure that
DNG-MCTS converged faster than UCT in terms of sample complexity in this domain.
The sailing domain is adopted from [3]. In this domain, a sailboat navigates to a destination on an 8-connected grid. The direction of the wind changes over time according to prior transition probabilities. The goal is to reach the destination as quickly as possible by choosing, at each grid location, a neighbouring location to move to. The discount factor in this domain is γ = 0.95 and the maximum horizon is set to H = 100. We ran DNG-MCTS and UCT with the random rollout policy on a 100 × 100 instance of this domain, which has 80000 states and an optimal cost of 26.08. The performance curve is shown in Figure 2b. A trend similar to the racetrack problem can be observed: DNG-MCTS converged faster than UCT in terms of sample complexity.
Regarding computational complexity, although the total computation time of our algorithm is linear in the total sample size, which is at most width × depth (where width is the number of iterations and depth is the maximal horizon), our approach does require more computation than simple UCT methods. Specifically, we observed that most of the computation time of DNG-MCTS is spent sampling from distributions in Thompson sampling. Thus, DNG-MCTS usually consumes more time than UCT per iteration. In our experiments on the benchmark problems, DNG-MCTS typically needs about 2 to 4 times as much computation time per iteration as UCT, depending on the problem and the stage of the algorithm. However, if the simulations are expensive (e.g., computational physics in a 3D environment, where the cost of executing the simulation steps greatly exceeds the time needed by the action-selection steps of MCTS), DNG-MCTS can obtain much better performance than UCT in terms of overall computational cost, because DNG-MCTS is expected to have lower sample complexity.
5 Conclusion
In this paper, we proposed the DNG-MCTS algorithm: a novel Bayesian modeling and inference based Thompson sampling approach using MCTS for MDP online planning. The basic assumption of DNG-MCTS is to model the uncertainty of the accumulated reward for each state-action pair as a mixture of Normal distributions. We presented the overall Bayesian framework for representing, updating, decision-making, and propagating probability distributions over rewards in the MCTS search tree. Our experimental results confirmed that, compared to the general UCT algorithm, DNG-MCTS produces competitive results in the CTP domain and converges faster in the racetrack and sailing domains with respect to sample complexity. In future work, we plan to extend our basic assumption to more complex distributions and to test our algorithm on real-world applications.
Acknowledgements
This work is supported in part by the National Hi-Tech Project of China under grant 2008AA01Z150 and the Natural Science Foundation of China under grants 60745002 and 61175057. Feng Wu is supported in part by the ORCHID project (http://www.orchid.ac.uk). We are grateful to the anonymous reviewers for their constructive comments and suggestions.
References
[1] S. Gelly and D. Silver. Monte-Carlo tree search and rapid action value estimation in computer Go. Artificial Intelligence, 175(11):1856-1875, 2011.
[2] Mark H. M. Winands, Yngvi Bjornsson, and J. Saito. Monte Carlo tree search in lines of action. IEEE Transactions on Computational Intelligence and AI in Games, 2(4):239-250, 2010.
[3] L. Kocsis and C. Szepesvári. Bandit based Monte-Carlo planning. In European Conference on Machine Learning, pages 282-293, 2006.
[4] D. Silver and J. Veness. Monte-Carlo planning in large POMDPs. In Advances in Neural Information Processing Systems, pages 2164-2172, 2010.
[5] Feng Wu, Shlomo Zilberstein, and Xiaoping Chen. Online planning for ad hoc autonomous agent teams. In International Joint Conference on Artificial Intelligence, pages 439-445, 2011.
[6] Arthur Guez, David Silver, and Peter Dayan. Efficient Bayes-adaptive reinforcement learning using sample-based search. In Advances in Neural Information Processing Systems, pages 1034-1042, 2012.
[7] John Asmuth and Michael L. Littman. Learning is planning: near Bayes-optimal reinforcement learning via Monte-Carlo tree search. In Uncertainty in Artificial Intelligence, pages 19-26, 2011.
[8] William R. Thompson. On the likelihood that one unknown probability exceeds another in view of the evidence of two samples. Biometrika, 25:285-294, 1933.
[9] Olivier Chapelle and Lihong Li. An empirical evaluation of Thompson sampling. In Advances in Neural Information Processing Systems, pages 2249-2257, 2011.
[10] Emilie Kaufmann, Nathaniel Korda, and Rémi Munos. Thompson sampling: an asymptotically optimal finite-time analysis. In Algorithmic Learning Theory, pages 199-213, 2012.
[11] Richard Dearden, Nir Friedman, and Stuart Russell. Bayesian Q-learning. In AAAI Conference on Artificial Intelligence, pages 761-768, 1998.
[12] Gerald Tesauro, V. T. Rajan, and Richard Segal. Bayesian inference in Monte-Carlo tree search. In Uncertainty in Artificial Intelligence, pages 580-588, 2010.
[13] Galin L. Jones. On the Markov chain central limit theorem. Probability Surveys, 1:299-320, 2004.
[14] Anirban DasGupta. Asymptotic Theory of Statistics and Probability. Springer, 2008.
[15] Shipra Agrawal and Navin Goyal. Further optimal regret bounds for Thompson sampling. In Artificial Intelligence and Statistics, pages 99-107, 2013.
[16] Blai Bonet and Hector Geffner. Action selection for MDPs: anytime AO* vs. UCT. In AAAI Conference on Artificial Intelligence, pages 1749-1755, 2012.
[17] Christos H. Papadimitriou and Mihalis Yannakakis. Shortest paths without a map. Theoretical Computer Science, 84(1):127-150, 1991.
[18] Patrick Eyerich, Thomas Keller, and Malte Helmert. High-quality policies for the Canadian traveler's problem. In AAAI Conference on Artificial Intelligence, pages 51-58, 2010.
[19] A. G. Barto, S. J. Bradtke, and S. P. Singh. Learning to act using real-time dynamic programming. Artificial Intelligence, 72(1-2):81-138, 1995.
engine:3 qa:2 address:2 beyond:1 usually:1 challenge:2 max:4 belief:4 dearden:1 malte:1 natural:3 indicator:1 representing:1 mdps:15 mcts:54 axis:1 galin:1 hm:1 nir:1 prior:28 review:1 acknowledgement:1 relative:1 asymptotic:1 suggestion:1 versus:1 foundation:1 aijun:1 agent:4 s0:45 propagates:2 principle:2 summary:1 repeat:1 supported:2 free:1 wide:1 munos:1 distributed:7 curve:3 default:2 xn:1 transition:7 cumulative:1 evaluating:1 depth:2 world:1 domainspecific:1 made:1 reinforcement:3 collection:1 avg:2 adaptive:1 historical:1 ec:1 transaction:1 approximate:4 preferred:2 keep:1 confirm:1 assumed:2 conclude:1 consuming:1 xi:3 search:27 zq:1 table:4 ctp:6 terminate:1 robust:1 necessarily:1 complex:2 european:1 domain:18 rue:3 sp:1 main:5 s2:1 whole:1 hyperparameters:1 big:2 child:1 repeated:1 fair:1 allowed:1 x1:2 slow:1 n:1 precision:4 christos:1 qvalue:2 theorem:3 specific:4 navigate:1 x:19 evidence:1 exists:1 budget:2 horizon:12 chen:2 ucb1:3 logarithmic:1 simply:2 indifference:1 springer:1 determines:1 succeed:1 conditional:1 goal:5 acceleration:1 towards:1 absence:1 change:4 included:1 specifically:4 infinite:1 determined:1 called:2 total:5 experimental:5 ew:1 indicating:1 select:7 formally:1 mark:1 ustc:2 constructive:1 evaluate:1 tested:4 correlated:1 |
Density estimation from unweighted k-nearest neighbor graphs: a roadmap
Ulrike von Luxburg
and
Morteza Alamgir
Department of Computer Science
University of Hamburg, Germany
{luxburg,alamgir}@informatik.uni-hamburg.de
Abstract
Consider an unweighted k-nearest neighbor graph on n points that have been sampled i.i.d. from some unknown density p on R^d. We prove how one can estimate
the density p just from the unweighted adjacency matrix of the graph, without
knowing the points themselves or any distance or similarity scores. The key insights are that local differences in link numbers can be used to estimate a local
function of the gradient of p, and that integrating this function along shortest paths
leads to an estimate of the underlying density.
1 Introduction
The problem. Consider an unweighted k-nearest neighbor graph that has been built on a random sample X_1, ..., X_n from some unknown density p on R^d. Assume we are given the adjacency matrix of the graph, but we do not know the point locations X_1, ..., X_n or any distance or similarity scores between the points. Is it then possible to estimate the underlying density p, just from the adjacency matrix of the unweighted graph?
Why is this problem interesting for machine learning? Machine learning algorithms on graphs are abundant, ranging from graph clustering methods such as spectral clustering, through label propagation methods for semi-supervised learning, to dimensionality reduction methods and manifold algorithms. In the majority of applications, the graphs that are used as input are similarity graphs: given a set of abstract "objects" X_1, ..., X_n, we first compute pairwise similarities s(X_i, X_j) according to some suitable similarity function and then build a k-nearest neighbor graph (kNN graph for short) based on this similarity function. The intuition is that the edges in the graph encode the local
information given by the similarity function, whereas the graph as a whole reveals global properties
of the data distribution such as cluster properties, high- and low-density regions, or manifold structure. From a computational point of view, kNN graphs are convenient because they lead to a sparse
representation of the data, even more so when the graph is unweighted. From a statistical point of
view the key question is whether this sparse representation still contains all the relevant information
about the original data, in particular the information about the underlying data distribution. It is easy
to see that for suitably weighted kNN graphs this is the case: the original density can be estimated
from the degrees in the graph. However, it is completely unclear whether the same holds true for
unweighted kNN graphs.
Why is the problem difficult? The naive attempt to estimate the density from vertex degrees
obviously has to fail in unweighted kNN graphs because all vertex degrees are (about) k. Moreover,
unweighted kNN graphs are invariant with respect to rescaling of the underlying distribution by
a constant factor (e.g., the unweighted kNN graph on a sample from the uniform distribution on [0, 1]² is indistinguishable from a kNN graph on a sample from the uniform distribution on [0, 2]²). So all we can hope for is an estimate of the density up to some multiplicative constant that cannot
be determined from the kNN graph alone. The main difficulty, however, is that a kNN graph "looks the same" in every small neighborhood. To see this, consider the case where the underlying density
is continuous, hence approximately constant in small neighborhoods. Then, if n is large and k/n is
small, local neighborhoods in the kNN graph are all going to look like kNN graphs from a uniform
distribution. This intuition raises an important issue. It is impossible to estimate the density in
an unweighted kNN graph by local quantities alone. We somehow have to make use of global
properties if we want to be successful. This makes the problem very different and much harder than
more standard density estimation problems.
Our solution. We show that it is indeed possible to estimate the underlying density from an
unweighted kNN graph. The construction is fairly involved. In a first step we estimate a pointwise
function of the gradient of the density, and in a second step we integrate these estimates along
shortest paths in the graph to end up with an approximation of the log-density. Our estimate works
as long as the kNN graph is reasonably dense (k^{d+2}/(n² log^d n) → ∞). However, it fails in the more important sparser regime (e.g., k ∼ log n). Currently we do not know whether this is due to a suboptimal proof or whether density estimation is generally impossible in the sparse regime.
2 Notation and assumptions

Underlying space. Let X ⊂ R^d be a compact subset of R^d. Denote by ∂X the topological boundary of X. For ε > 0 define the ε-interior X_ε := {x ∈ X | d(x, ∂X) ≥ ε}. We assume that X is "full dimensional" in the sense that there exists some ε_0 > 0 such that X_{ε_0} is non-empty and connected. By η_d we denote the volume of a d-dimensional unit ball, and by v_d the volume of the intersection of two d-dimensional unit balls whose centers have distance 1.

Density. Let p be a continuously differentiable density on X. We assume that there exist constants p_min and p_max such that 0 < p_min ≤ p(x) ≤ p_max < ∞ for all x ∈ X.
Graph. Given an i.i.d. sample X_n := {X_1, ..., X_n} from p, we build a graph G_n = (V_n, E_n) with V_n = X_n. We connect X_i by a directed edge to X_j if X_j is among the k nearest neighbors of X_i. The resulting graph is called the directed, unweighted kNN graph (in the following, we will often drop the words "directed" and "unweighted"). By r(x) := r_{n,k}(x) we denote the Euclidean distance of a point x to its kth nearest neighbor. For any vertex x ∈ V we define the sets

In(x) := In_{n,k}(x) := {y ∈ X_n | (y, x) ∈ E_n}   (source points of in-links to x)
Out(x) := Out_{n,k}(x) := {y ∈ X_n | (x, y) ∈ E_n}   (target points of out-links from x).

To increase readability we often omit the indices n and k. For a finite set S we denote by |S| its number of elements.
Paths. For a rectifiable path γ : [0, 1] → X we define its p-weighted length as

ℓ_p(γ) := ∫_γ p^{1/d}(x) ds := ∫_0^1 p^{1/d}(γ(t)) |γ'(t)| dt

(recall the notational convention of writing "ds" in a line integral). For two points x, y ∈ X we define their p-weighted distance as D_p(x, y) = inf_γ ℓ_p(γ), where the infimum is taken over all rectifiable paths γ that connect x to y. As a consequence of the compactness of X, a minimizing path that realizes D_p always exists (cf. Burago et al., 2001, Section 2.5.2). We call such a path a D_p-shortest path. Under the given assumptions on p, the D_p-shortest path between any two points x, y ∈ X_{ε_0} is smooth.
In an unweighted graph, define the length of a path as its number of edges. For two vertices x, y denote by D_sp(x, y) their shortest path distance in the graph. It has been proved in Alamgir and von Luxburg (2012) that for unweighted, undirected kNN graphs, (k/(n η_d))^{1/d} D_sp(x, y) → D_p(x, y) almost surely as n → ∞ and k → ∞ appropriately slowly. The proofs extend directly to the case of directed kNN graphs.
3 Warmup: the 1-dimensional case

To gain some intuition about the problem and its solution, let us consider the 1-dimensional case X ⊂ R. For any given point x ∈ X_n we define the following sets:

Left_1(x) := |{y ∈ Out(x) | y < x}|   and   Right_1(x) := |{y ∈ Out(x) | y > x}|.
Figure 1: Geometric argument (left: 1-dimensional case; right: 2-dimensional case). The difference Right − Left is approximately proportional to the volume of the grey-shaded area.
The intuition to estimate the density from the directed kNN graph is the following. Consider a point x in a region where the density has positive slope. The set Out(x) is approximately symmetric around x, that is, it has the form Out(x) = X_n ∩ [x − R, x + R] for some R > 0. When the density has an increasing slope at x, there tend to be fewer sample points in [x − R, x] than in [x, x + R], so the set Right_1(x) tends to contain more sample points than the set Left_1(x). This is the effect we want to exploit. The difference Right_1(x) − Left_1(x) can be approximated by n · (P([x, x + R]) − P([x − R, x])), and by a simple geometric argument one can see that the latter probability is approximately R² p'(x). See Figure 1 (left side) for an illustration. By standard concentration arguments one can see that if n is large enough and k chosen appropriately, then R ≈ k/(2np(x)). Plugging these two things together shows that Right_1(x) − Left_1(x) ≈ (k²/(4n)) · p'(x)/p²(x), hence gives an estimate of p'(x)/p²(x). But we are not there yet: it is impossible to directly turn an estimate of p'(x)/p²(x) into an estimate of p(x). This is in accordance with the intuition we mentioned above: one cannot estimate the density by just looking at a local neighborhood of x in the kNN graph.

Here is now the key trick to introduce a global component to the estimate. We fix one data point X_0 that is going to play the role of an anchor point. To estimate the density at a particular data point X_s, we now sum the estimates p'(x)/p²(x) over all data points x that sit between X_0 and X_s. This corresponds to integrating the function p'(x)/p²(x) over the interval [X_0, X_s] with respect to the underlying density p, which in turn corresponds to integrating the function p'(x)/p(x) over the interval [X_0, X_s] with respect to the standard Lebesgue measure. This latter integral is well known: its primitive is log p(x). Hence, for each data point X_s we get an estimate of log p(X_s) − log p(X_0). Then we exponentiate and arrive at an estimate of c · p(x), where c = 1/p(X_0) plays the role of an unknown constant.
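This one-dimensional recipe is easy to prototype. The sketch below is our own toy illustration, not the authors' implementation: the function name is ours, and the constant 4/k² follows from our reading of the argument above (Right_1 − Left_1 ≈ (k²/(4n)) p'(x)/p²(x), summed over the roughly n·p(x)·dx sample points per interval).

```python
import numpy as np

def estimate_log_density_1d(X, k, x0_idx):
    """Toy estimate of log p(x) - log p(X[x0_idx]) for a 1-d sample,
    using only the directed unweighted kNN graph (sketch only)."""
    n = len(X)
    order = np.argsort(X)
    Xs = X[order]
    # Out(x): indices of the k nearest neighbors (directed kNN graph)
    dists = np.abs(Xs[:, None] - Xs[None, :])
    np.fill_diagonal(dists, np.inf)
    knn = np.argsort(dists, axis=1)[:, :k]
    # Right1(x) - Left1(x): out-neighbors above x minus out-neighbors below x
    diff = np.sign(Xs[knn] - Xs[:, None]).sum(axis=1)
    # Summing (4/k^2)(Right1 - Left1) over the sample points between X0 and x
    # approximates the integral of p'/p, i.e. log p(x) - log p(X0).
    csum = np.cumsum((4.0 / k**2) * diff)
    anchor = int(np.searchsorted(Xs, X[x0_idx]))
    out = np.empty(n)
    out[order] = csum - csum[anchor]
    return out
```

Exponentiating the output then gives the estimate of p(x)/p(X_0) described above.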
4 A hypothetical estimate in the d-dimensional case

We now generalize our approach to the d-dimensional setting. There are two main challenges: First, we need to replace the integral over all sample points between X_0 and X_s by something more general in R^d. Our idea is to consider an integral along a path between X_0 and X_s, specifically along a path that corresponds to a shortest path in the graph G_n. Second, we need a generalization of the concept of what are "left" and "right" out-links. Our idea is to use the shortest path as reference. For a point x on the shortest path between X_0 and X_s, the "left points" of x should be the ones that are on or close to the subpath from X_0 to x, and the "right points" the ones on or close to the subpath from x to X_s.
4.1 Gradient estimates based on link differences

Fix a point x on a simple, continuously differentiable path γ and let T(x) be its tangent vector. Consider h(y) = ⟨w, y⟩ + b with normal vector w := T(x), where the offset b has been chosen such that the hyperplane H := {y ∈ R^d | h(y) = 0} goes through x. Define

Left_d(x) := Left_{d,n,k}(x) := |{y ∈ Out(x) | h(y) ≤ 0}|
Right_d(x) := Right_{d,n,k}(x) := |{y ∈ Out(x) | h(y) > 0}|.
Figure 2: Definitions of "left" and "right" in the d-dimensional case.
See Figure 2 (left side) for an illustration. This definition is a direct generalization of the definition of Left_1 and Right_1 in the 1-dimensional case. It is not yet the end of the story, as the quantities Left_d and Right_d cannot be evaluated based on the kNN graph alone, but it is a good starting point to develop the necessary proof concepts. In this section we prove the consistency of a density estimate based on Left_d and Right_d. In Section 5 we will further generalize the definition to our final estimate.
Theorem 1 (Estimate related to the gradient) Let X and p satisfy the assumptions in Section 2. Let γ be a differentiable, regular, simple path in X_{ε_0} and x a sample point on this path. Let T be the tangent direction of γ at x and p'_T(x) the directional derivative of the density p in direction T at point x. Then, if n → ∞, k → ∞, k/n → 0, k^{d+2}/n² → ∞,

(2 n^{1/d} η_d^{1/d} / k^{(d+1)/d}) · (Right_{d,n,k}(x) − Left_{d,n,k}(x)) → p'_T(x) / p(x)^{(d+1)/d}   a.s.

If k^{d+2}/(n² log^d n) → ∞, the convergence even holds uniformly over all sample points x ∈ X_n.
Proof sketch. The key problem in the proof is that the difference Right_d − Left_d is of a much smaller order of magnitude than Right_d and Left_d themselves, so controlling the deviations of Right_d − Left_d is somewhat tricky. Conditioned on r_out(x) =: r, Right_d ∼ Bin(k, π_r) and Left_d ∼ Bin(k, π_l), where π_r = P(right half ball)/P(ball) and π_l analogously (cf. Figure 2). By Hoeffding's inequality, Right_d − Left_d − E(Right_d − Left_d) = O(√k) with high probability. Note that π_l and π_r tend to be close to 1/2, thus Hoeffding's inequality is reasonably tight. A simple geometric argument shows that if the density in a neighborhood of x is linear, then E(Right_d − Left_d) = n · (η_d r^d / 2) · r p'_T(x) (n times the probability mass of the grey area in Figure 1). A similar argument holds approximately if the density is just differentiable at x. A standard concentration argument for the out-radius shows that with high probability, r_out(x) can be approximated by (k/(n η_d p(x)))^{1/d}. Combining all results we obtain that with high probability,

(2 n^{1/d} η_d^{1/d} / k^{(d+1)/d}) · (Right_d − Left_d) = p'_T(x)/p(x)^{(d+1)/d} ± O(n^{1/d} / k^{1/2 + 1/d}).

Convergence takes place if the noise term on the right hand side goes to 0 and the "high probability" converges to 1, which happens under the conditions on n and k stated in the theorem. □
4.2 Integrating the gradient estimates along the shortest path
To deal with the integration part, let us recap some standard results about line integrals.
Proposition 2 (Line integral) Let γ : [0, 1] → R^d be a simple, continuously differentiable path from x_0 = γ(0) to x_1 = γ(1), parameterized by arc length. For a point x = γ(t) on the path, denote by T(x) the tangent vector to γ at x, and by p'_T(x) the directional derivative of p in the tangent direction T. Then

∫_γ (p'_T(x) / p(x)) ds = log(p(x_1)) − log(p(x_0)).
Proof. We define the vector field

F : R^d → R^d,  x ↦ p'(x)/p(x) = (1/p(x)) · (∂p/∂x_1, ..., ∂p/∂x_d)^T.

Observe that F is a continuous gradient field with primitive V : R^d → R, x ↦ log(p(x)). Now consider the line integral of F along γ:

∫_γ F(x) dx := ∫_0^1 ⟨F(γ(t)), γ'(t)⟩ dt = ∫_0^1 (1/p(γ(t))) ⟨p'(γ(t)), γ'(t)⟩ dt.   (1)

Note that γ'(t) is the tangent vector T(x) of the path γ at point x = γ(t). Hence, the scalar product ⟨p'(γ(t)), γ'(t)⟩ coincides with the directional derivative of p in direction T, so the right hand side of Equation (1) coincides with the left hand side of the equation in the proposition. On the other hand, it is well known that the line integral over a gradient field only depends on the starting and end point of γ and is given by

∫_γ F(x) dx = V(x_1) − V(x_0).

This coincides with the right hand side of the equation in the proposition. □
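Proposition 2 is straightforward to verify numerically. The snippet below is our own sanity check with a toy anisotropic Gaussian (nothing here is from the paper): it integrates ⟨p'/p, γ'⟩ along a straight segment with the midpoint rule and compares the result with the log-density difference of the endpoints.

```python
import numpy as np

def log_p(x):
    # unnormalized log-density of a toy anisotropic Gaussian; the
    # normalization constant cancels in the difference log p(x1) - log p(x0)
    return -0.5 * (x[0] ** 2 + 2.0 * x[1] ** 2)

def grad_log_p(pts):
    # gradient of log p, i.e. the vector field F = p'/p, evaluated row-wise
    return np.stack([-pts[:, 0], -2.0 * pts[:, 1]], axis=1)

def line_integral(x0, x1, steps=1000):
    """Midpoint-rule approximation of the line integral of p'/p along
    the straight segment from x0 to x1."""
    t = (np.arange(steps) + 0.5) / steps
    pts = x0[None, :] + t[:, None] * (x1 - x0)[None, :]
    return float(np.sum(grad_log_p(pts) @ (x1 - x0)) / steps)
```

For this quadratic log-density the integrand is linear in t, so the midpoint rule is exact up to floating point error.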
Now we consider the finite sample case. The goal is to approximate the integral along the continuous path γ by a sum along a path γ_n in the kNN graph G_n. To achieve this, we need to construct a sequence of paths γ_n in G_n such that γ_n converges to some well-defined path γ in the underlying space and the lengths of γ_n in G_n converge to ℓ_p(γ). To this end, we are going to consider paths γ_n which are shortest paths in the graph.
Adapting the proof of the convergence of shortest paths in unweighted kNN graphs (Alamgir and
von Luxburg, 2012) we can derive the following statement for integrals along shortest paths.
Proposition 3 (Integrating a function along a shortest path) Let X and p satisfy the assumptions in Section 2. Fix two sample points in X_{ε_0}, say X_0 and X_s, and let γ_n be a shortest path between X_0 and X_s in the kNN graph G_n. Let γ ⊂ X be a path that realizes D_p(X_0, X_s). Assume that it is unique and is completely contained in X_{ε_0}. Let g : X → R be a continuous function. Then, as n → ∞, k^{1+α}/n → 0 (for some small α > 0), k/log n → ∞,

(k/(n η_d))^{1/d} · Σ_{x ∈ γ_n} g(x) → ∫_γ g(x) p(x)^{1/d} ds   a.s.

Note that if g(x) p^{1/d}(x) can be written in the form ⟨F(γ(t)), γ'(t)⟩, then the same statement even holds if the shortest D_p-path is not unique, because the path integral then only depends on start and end point. This is the case for our particular function of interest, g(x) = p'_T(x)/p^{1+1/d}(x).
4.3 Combining everything to obtain a density estimate

Theorem 4 (Density estimate) Let X and p satisfy the assumptions in Section 2, and let X_0 ∈ X_{ε_0} be any fixed sample point. For another sample point X_s, let γ_n be a shortest path between X_0 and X_s in the kNN graph G_n. Assume that there exists a path γ that realizes D_p(X_0, X_s) and that is completely contained in X_{ε_0}. Then, as n → ∞, k → ∞, k/n → 0, k^{d+2}/(n² log^d n) → ∞,

(2/k) · Σ_{x ∈ γ_n} (Right_{d,n,k}(x) − Left_{d,n,k}(x)) → log p(X_s) − log p(X_0)   a.s.
Proof sketch. By Proposition 2,

log(p(X_s)) − log(p(X_0)) = ∫_γ (p'_T(x)/p(x)) ds = ∫_γ (p'_T(x)/p(x)^{(d+1)/d}) · p(x)^{1/d} ds.

According to Proposition 3, the latter can be approximated by

(k/(n η_d))^{1/d} · Σ_{x ∈ γ_n} p'_T(x)/p(x)^{(d+1)/d},

where γ_n is a shortest path between X_0 and X_s in the kNN graph. Theorem 1 shows that this quantity gets estimated by

(k/(n η_d))^{1/d} · Σ_{x ∈ γ_n} (2 n^{1/d} η_d^{1/d} / k^{(d+1)/d}) (Right_d(x) − Left_d(x)) = (2/k) · Σ_{x ∈ γ_n} (Right_d(x) − Left_d(x)). □
5 The final d-dimensional density estimate

In this section, we finally introduce an estimate that solely uses quantities available from the kNN graph. Let x be a vertex on a shortest path γ_{n,k} in the kNN graph G_n. Let x_l and x_r be the predecessor and successor vertices of x on this path (in particular, x_l and x_r are sample points as well). Define

Left~_{n,k}(x) := |Out(x) ∩ In(x_l)|   and   Right~_{n,k}(x) := |Out(x) ∩ In(x_r)|.

See Figure 2 (right side) for an illustration. At first glance, these sets look quite different from Left_d and Right_d. But the intuition is that whenever we find two sets on the left and right side of x that have approximately the same volume, then the difference Right~_{n,k} − Left~_{n,k} should be a function of p'_T(x). For a second intuition, consider the special case d = 1 and recall the definition of R of Section 3. One can show that in expectation, [x − R, x] coincides with Out(x) ∩ In(x_l) and [x, x + R] with Out(x) ∩ In(x_r), so in case d = 1 the definitions coincide in expectation with the ones in Section 3. Another insight is that the set Left~_{n,k}(x) counts the number of directed paths of length 2 from x to x_l, and Right~_{n,k}(x) analogously.

We conjecture that the difference Right~_{n,k} − Left~_{n,k} can be used as before to construct a density estimate. Specifically, if γ_{n,k} is a shortest path from the anchor point X_0 to X_s, we believe that under similar conditions on k and n as before,

(η_d / (k v_d)) · Σ_{x ∈ γ_n} (Right~_{n,k}(x) − Left~_{n,k}(x))   (∗)

is a consistent estimator of the quantity log p(X_s) − log p(X_0). Our simulations in Section 6 show that the estimate works, even surprisingly well. So far we do not have a formal proof yet, due to two technical difficulties. The first problem is that the set In(x_l) is not a ball, but an "egg-shaped" set. As n → ∞, one can sandwich In(x) between two concentric balls that converge to each other, but this approximation is too weak to carry the proof. To compute the expected value E(Right~_{n,k}(x) − Left~_{n,k}(x)) we would have to integrate the intersection of the "egg" In(x_l) with the ball Out(x), and so far we have no closed form solution. The second difficulty is related to the shortest path in the graph. While it is clear that "most edges" in this path have approximately the maximal length (that is, (k/(n η_d p(x)))^{1/d} for an edge in the neighborhood of x), this is not true for all edges. Intuitively it is clear that the contribution of the few violating edges will be washed out in the integral along the shortest path, but we don't have a formal proof yet.

What we can prove is the following weaker version. Consider a D_p-shortest path γ ⊂ R^d and a point x on this path with out-radius r_out(x). Define the points x_l and x_r as the two points where the path γ enters and leaves the ball B(x, r_out(x)), respectively, and define the sets L_{n,k} := Out(x) ∩ B(x_l, r_out(x)) and R_{n,k} := Out(x) ∩ B(x_r, r_out(x)). Then it can be proved that (η_d/(k v_d)) Σ_{x ∈ γ_n} (|R_{n,k}(x)| − |L_{n,k}(x)|) → log p(X_s) − log p(X_0). The proof is similar to the one in Section 4. It circumvents the problems mentioned above by using well-defined balls instead of In-sets and the continuous path γ rather than the finite sample shortest path γ_n, but the quantities cannot be estimated from the kNN graph alone.
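Because Left~_{n,k}(x) and Right~_{n,k}(x) are just counts of directed 2-paths, they are cheap to read off the adjacency structure. A minimal sketch (our own helper names; out_nbrs and in_nbrs map each vertex to its sets of out- and in-neighbors):

```python
def invert_adjacency(out_nbrs):
    """Compute In(x) from Out(x): invert a directed adjacency structure."""
    in_nbrs = {v: set() for v in out_nbrs}
    for v, targets in out_nbrs.items():
        for w in targets:
            in_nbrs[w].add(v)
    return in_nbrs

def left_right_tilde(out_nbrs, in_nbrs, path):
    """For every interior vertex x of a shortest path, return the pair
    (Left~(x), Right~(x)) = (|Out(x) & In(x_l)|, |Out(x) & In(x_r)|),
    i.e. the number of directed 2-paths from x to its predecessor x_l
    and to its successor x_r on the path."""
    return [(len(out_nbrs[x] & in_nbrs[xl]), len(out_nbrs[x] & in_nbrs[xr]))
            for xl, x, xr in zip(path, path[1:], path[2:])]
```

Summing right − left over the path and rescaling by the constant of (∗) then yields the conjectured log-density estimate.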
6 Simulations

As a proof of concept, we ran simple experiments to evaluate the behavior of estimator (∗). We draw n = 2000 points according to a couple of simple densities on R, R² and R^10, then we build the directed, unweighted kNN graph with k = 50. We fix a random point as anchor point X_0, compute the quantities Right~_{n,k} and Left~_{n,k} for all sample points, and then sum the differences Right~_{n,k} − Left~_{n,k} along shortest paths to X_0. Rescaling by the constant η_d/(k v_d) and exponentiating then leads to our estimate for p(x)/p(X_0). In order to nicely plot our results, we multiply the resulting estimate by p(X_0) to get rid of the scaling constant (this step would not be possible in applications, but it merely serves for illustration purposes). The results are shown in Figure 3. It is obvious from these figures that our estimate "works", even surprisingly well (note that the sample size is not very large and we did not perform any parameter tuning). Even in the case of a step function the estimate recovers the structure of the density. Note that this is a particularly difficult case in our setting, because within the constant parts of the two steps, the kNN graphs of the left and right step are indistinguishable. It is only in a small strip around the boundary between the two steps that the kNN graph will reveal non-uniform behavior. The simulations show that this is already enough to reveal the overall structure of the step function.
7 Extensions
We have seen how to estimate the density in an unweighted, directed kNN graph. It is even possible
to extend this result to more general cases. Here is a sketch of the main ideas.
Estimating the dimension from the graph. The current density estimate requires that we know the dimension d of the underlying space because we need to be able to compute the constants η_d (volume of the unit ball) and v_d (intersection of two unit balls). The dimension can be estimated from the directed, unweighted kNN graph as follows. Denote by r the distance of x to its kth nearest neighbor, and by K the number of vertices that can be reached from x by a directed shortest path of length 2. Then k/n ≈ P(B(x, r)) and K/n ≈ P(B(x, 2r)). If n is large enough and k small enough, the density on these balls is approximately constant, which implies K/k ≈ 2^d, where d is the dimension of the underlying space.
Recovering the directed graph from the undirected one. The current estimate is based on the directed kNN graph, but many applications use undirected kNN graphs. However, it is possible to recover the directed, unweighted kNN graph from the undirected, unweighted graph. Denote by N(x) the vertices that have an undirected edge to x. If n is large and k small, then for any two vertices x and y we can approximate |N(x) ∩ N(y)|/n ≈ P(B(x, r) ∩ B(y, r)). The latter is monotonically decreasing in ‖x − y‖. To estimate the set Out(x) in order to recover the directed kNN graph, we rank all points y ∈ N(x) according to |N(x) ∩ N(y)| and choose Out(x) as the first k vertices in this ranking.
Point embedding. In this paper we focus on estimating the density from the unweighted kNN graph. Another interesting problem is to recover an embedding of the vertices in R^d such that the kNN graph based on the embedded vertices corresponds to the given kNN graph. This problem is closely related to a classic problem in statistics, namely non-metric multidimensional scaling (Shepard, 1966; Borg and Groenen, 2005), and more specifically to learning distances and embeddings from ranking and comparison data (Schultz and Joachims, 2004; Agarwal et al., 2007; Ouyang and Gray, 2008; McFee and Lanckriet, 2009; Shaw and Jebara, 2009; Shaw et al., 2011; Jamieson and Nowak, 2011) as well as to ordinal (monotone) embeddings (Bilu and Linial, 2005; Alon et al., 2008; Bădoiu et al., 2008; Gutin et al., 2009). However, we are not aware of any approach in the literature that can faithfully embed unweighted kNN graphs and comes with performance guarantees. Based on our density estimate, such an embedding can now easily be constructed. Given the unweighted kNN graph, we assign edge weights w(X_i, X_j) = (p̂^{−1/d}(X_i) + p̂^{−1/d}(X_j))/2, where p̂ is the estimate of the underlying density. Then the shortest paths in this weighted kNN graph converge to the Euclidean distances in the underlying space, and standard metric multidimensional scaling can be used to construct an appropriate embedding. In the limit of n → ∞, this approach is going to recover the original point embedding up to similarity transformations (translation, rotation, or rescaling).
Figure 3: Densities and their estimates. Density model in the first row: the first dimension is sampled from a mixture of Gaussians, the other dimensions from a uniform distribution. The figures plot the first dimension of the data points versus the true (black) and estimated (green) density values. From left to right, they show the case of 1, 2, and 10 dimensions, respectively (n = 2000, k = 50). Second and third rows: 2-dimensional densities. The left plots show the true log-density (a Gaussian and a step function), the middle plots show the estimated log-density. The right figures plot the first coordinate of the data points against the true (black) and estimated (green) density values. The black star in the left plot depicts the anchor point X_0 of the integration step.
8 Conclusions

In this paper we show how a density can be estimated from the adjacency matrix of an unweighted, directed kNN graph, provided the graph is dense enough (k^{d+2}/(n² log^d n) → ∞). In this case, the information about the underlying density is implicitly contained in unweighted kNN graphs, and, at least in principle, accessible by machine learning algorithms. However, in most applications, k is chosen much, much smaller, typically on the order k ∼ log(n). For such sparse graphs, our density estimate fails because it is dominated by sampling noise that does not disappear as n → ∞. This raises the question whether this failure is just an artifact of our particular construction or of our proof, or whether a similar phenomenon is true more generally. If yes, then machine learning algorithms on sparse unweighted kNN graphs would be highly problematic: if the information about the underlying density is not present in the graph, it is hard to imagine how machine learning algorithms (for example, spectral clustering) could still be statistically consistent. General lower bounds proving or disproving these speculations are an interesting open problem.
Acknowledgements
We would like to thank Gabor Lugosi for help with the proof of Theorem 1. This research was partly supported by the German Research Foundation (grant LU1718/1-1 and Research Unit 1735 "Structural Inference in Statistics: Adaptation and Efficiency").
Sketching Structured Matrices for
Faster Nonlinear Regression
Haim Avron
Vikas Sindhwani
IBM T.J. Watson Research Center
Yorktown Heights, NY 10598
{haimav,vsindhw}@us.ibm.com

David P. Woodruff
IBM Almaden Research Center
San Jose, CA 95120
[email protected]
Abstract
Motivated by the desire to extend fast randomized techniques to nonlinear lp regression, we consider a class of structured regression problems. These problems
involve Vandermonde matrices which arise naturally in various statistical modeling settings, including classical polynomial fitting problems, additive models and
approximations to recently developed randomized techniques for scalable kernel
methods. We show that this structure can be exploited to further accelerate the
solution of the regression problem, achieving running times that are faster than
?input sparsity?. We present empirical results confirming both the practical value
of our modeling framework, as well as speedup benefits of randomized regression.
1 Introduction
Recent literature has advocated the use of randomization as a key algorithmic device with which to dramatically accelerate statistical learning with lp regression or low-rank matrix approximation techniques [12, 6, 8, 10]. Consider the following class of regression problems,

    arg min_{x ∈ C} ‖Zx − b‖_p , where p = 1, 2,    (1)

where C is a convex constraint set, Z ∈ R^{n×k} is a sample-by-feature design matrix, and b ∈ R^n is the target vector. We assume henceforth that the number of samples is large relative to data dimensionality (n ≫ k). The setting p = 2 corresponds to classical least squares regression, while p = 1 leads to a least absolute deviations fit, which is of significant interest due to its robustness properties. The constraint set C can incorporate regularization. When C = R^k and p = 2, an ε-optimal solution can be obtained in time O(nk log k) + poly(k ε^{−1}) using randomization [6, 19], which is much faster than an O(nk²) deterministic solver when ε is not too small (the dependence on ε can be improved to O(log(1/ε)) if higher accuracy is needed [17]). Similarly, a randomized solver for l1 regression runs in time O(nk log n) + poly(k ε^{−1}) [5].

In many settings, what makes such acceleration possible is the existence of a suitable oblivious subspace embedding (OSE). An OSE can be thought of as a data-independent random "sketching" matrix S ∈ R^{t×n} whose approximate isometry properties over a subspace (e.g., over the column space of [Z, b]) imply that

    ‖S(Zx − b)‖_p ≈ ‖Zx − b‖_p for all x ∈ C,

which in turn allows x to be optimized over a "sketched" dataset of much smaller size without losing solution quality. Sketching matrices include Gaussian random matrices, structured random matrices which admit fast matrix multiplication via FFT-like operations, and others.

This paper is motivated by two questions which in our context turn out to be complementary:
• Can additional structure in Z be non-trivially exploited to further accelerate runtime? Clarkson and Woodruff have recently shown that when Z is sparse and has nnz(Z) ≪ nk non-zeros, it is possible to achieve much faster "input-sparsity" runtime using hashing-based sketching matrices [7]. Is it possible to further beat this time in the presence of additional structure on Z?
• Can faster and more accurate sketching techniques be designed for nonlinear and nonparametric regression? To see that this is intertwined with the previous question, consider the basic problem of fitting a polynomial model, b = Σ_{i=1}^q β_i z^i, to a set of samples (z_i, b_i) ∈ R × R, i = 1, . . . , n. Then, the design matrix Z has Vandermonde structure which can potentially be exploited in a regression solver. It is particularly appealing to estimate non-parametric models on large datasets. Sketching algorithms have recently been explored in the context of kernel methods for nonparametric function estimation [16, 11].
To be able to precisely describe the structure on Z that we consider in this paper, and outline our contributions, we need the following definitions.

Definition 1 (Vandermonde Matrix) Let x_0, x_1, . . . , x_{n−1} be real numbers. The Vandermonde matrix, denoted V_{q,n}(x_0, x_1, . . . , x_{n−1}), has the form:

    V_{q,n}(x_0, x_1, . . . , x_{n−1}) =
    [ 1            1            . . .   1
      x_0          x_1          . . .   x_{n−1}
      . . .        . . .        . . .   . . .
      x_0^{q−1}    x_1^{q−1}    . . .   x_{n−1}^{q−1} ]
Vandermonde matrices of dimension q × n require only O(n) implicit storage and admit O((n + q) log² q) matrix-vector multiplication time. We also define the following matrix operator T_q, which maps a matrix A to a block-Vandermonde structured matrix.

Definition 2 (Matrix Operator) Given a matrix A ∈ R^{n×d}, we define the following matrix:

    T_q(A) = [ V_{q,n}(A_{1,1}, . . . , A_{n,1})^T   V_{q,n}(A_{1,2}, . . . , A_{n,2})^T   · · ·   V_{q,n}(A_{1,d}, . . . , A_{n,d})^T ]
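To make the operator concrete, T_q(A) can be materialized explicitly on a toy input (a minimal numpy sketch for illustration only; the algorithms below are designed precisely to avoid ever forming T_q(A)):

```python
import numpy as np

def T_q(A, q):
    """Materialize T_q(A): for each column j of A, append the monomial
    powers A[:, j]**0, ..., A[:, j]**(q-1) as a contiguous block of q
    columns (each block is V_{q,n}(A_{1,j}, ..., A_{n,j})^T)."""
    n, d = A.shape
    return np.hstack([A[:, [j]] ** np.arange(q) for j in range(d)])

A = np.array([[2.0, 3.0],
              [4.0, 5.0]])
Z = T_q(A, q=3)   # shape (2, 6): [1, a, a^2] per feature, per row
```

Note that nnz(T_q(A)) is q times larger than nnz(A), which is why sketching T_q(A) without building it is the interesting algorithmic question.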
In this paper, we consider regression problems, Eqn. 1, where Z can be written as

    Z = T_q(A)    (2)

for an n × d matrix A, so that k = dq. The operator T_q expands each feature (column) of the original dataset A to q columns of Z by applying monomial transformations up to degree q − 1. This lends a block-Vandermonde structure to Z. Such structure naturally arises in polynomial regression problems, but also applies more broadly to non-parametric additive models and kernel methods, as we discuss below. With this setup, the goal is to solve the following problem:

Structured Regression: Given A and b, with constant probability output a vector x′ ∈ C for which

    ‖T_q(A)x′ − b‖_p ≤ (1 + ε)‖T_q(A)x* − b‖_p ,

for an accuracy parameter ε > 0, where x* = arg min_{x∈C} ‖T_q(A)x − b‖_p.
Our contributions in this paper are as follows:
• For p = 2, we provide an algorithm that solves the structured regression problem above in time O(nnz(A) log² q) + poly(dq ε^{−1}). By combining our sketching methods with preconditioned iterative solvers, we can also obtain logarithmic dependence on ε. For p = 1, we provide an algorithm with runtime O(nnz(A) log n log² q) + poly(dq ε^{−1} log n). This implies that moving from linear (i.e., Z = A) to nonlinear regression (Z = T_q(A)) incurs only a mild additional log² q runtime cost, while requiring no extra storage! Since nnz(T_q(A)) = q·nnz(A), this provides, to our knowledge, the first sketching approach that operates faster than "input-sparsity" time, i.e., we sketch T_q(A) in time faster than nnz(T_q(A)).
• Our algorithms apply to a broad class of nonlinear models for both least squares regression and their robust l1 regression counterparts. While polynomial regression and additive models with monomial basis functions are immediately covered by our methods, we also show that under a suitable choice of the constraint set C, the structured regression problem with Z = T_q(AG) for a Gaussian random matrix G approximates non-parametric regression using the Gaussian kernel. We argue that our approach provides a more flexible modeling framework when compared to randomized Fourier maps for kernel methods [16, 11].
• Empirical results confirm both the practical value of our modeling framework, as well as the speedup benefits of sketching.
2 Polynomial Fitting, Additive Models and Random Fourier Maps
Our primary goal in this section is to motivate sketching approaches for a versatile class of block-Vandermonde structured regression problems by showing that these problems arise naturally in various statistical modeling settings.

The most basic application is one-dimensional (d = 1) polynomial regression.

In multivariate additive regression models, a continuous target variable y ∈ R and input variables z ∈ R^d are related through the model y = α + Σ_{i=1}^d f_i(z_i) + ε, where α is an intercept term, ε are zero-mean Gaussian error terms, and the f_i are smooth univariate functions. The basic idea is to expand each function as f_i(·) = Σ_{t=1}^q β_{i,t} h_{i,t}(·) using basis functions h_{i,t}(·) and estimate the unknown parameter vector x = [β_{11} . . . β_{1q} . . . β_{dq}]^T, typically by a constrained or penalized least squares model, argmin_{x∈C} ‖Zx − b‖₂², where b = (y_1 . . . y_n)^T and Z = [H_1 . . . H_q] ∈ R^{n×dq} for (H_i)_{j,t} = h_{i,t}(z_j), on a training sample (z_i, y_i), i = 1 . . . n. The constraint set C typically imposes smoothing, sparsity or group sparsity constraints [2]. It is easy to see that choosing a monomial basis h_{i,s}(u) = u^s immediately maps the design matrix Z to the structured regression form of Eqn. 2. For p = 1, our algorithms also provide fast solvers for robust polynomial additive models.

Additive models impose a restricted form of univariate nonlinearity which ignores interactions between covariates. Let us denote an interaction term as z^α = z_1^{α_1} . . . z_d^{α_d}, α = (α_1 . . . α_d), where Σ_i α_i ≤ q, α_i ∈ {0 . . . q}. A degree-q multivariate polynomial function space P_q is spanned by {z^α, α ∈ {0, . . . , q}^d, Σ_i α_i ≤ q}. P_q admits all possible degree-q interactions but has dimensionality d^q, which is computationally infeasible to work with explicitly except for low degrees and low-dimensional or sparse datasets [3]. Kernel methods with polynomial kernels k(z, z′) = (z^T z′)^q = Σ_α z^α z′^α provide an implicit mechanism to compute inner products in the feature space associated with P_q. However, they require O(n³) computation for solving associated kernelized (ridge) regression problems and O(n²) storage of dense n × n Gram matrices K (given by K_{ij} = k(z_i, z_j)), and therefore do not scale well.

For a d × D matrix G, let S_G be the subspace spanned by

    { (Σ_{i=1}^d G_{ij} z_i)^t , t = 1 . . . q, j = 1 . . . D }.

Assuming D = d^q and that G is a random matrix of i.i.d. Gaussian variables, then almost surely we have S_G = P_q. An intuitively appealing explicit scalable approach is then to use D ≪ d^q. In that case S_G essentially spans a random subspace of P_q. The design matrix for solving the multivariate polynomial regression restricted to S_G has the form Z = T_q(AG) where A = [z_1^T . . . z_n^T]^T.

This scheme can in fact be related to the idea of random Fourier features introduced by Rahimi and Recht [16] in the context of approximating shift-invariant kernel functions, with the Gaussian kernel k(z, z′) = exp(−‖z − z′‖₂²/2σ²) as the primary example. By appealing to Bochner's Theorem [18], it is shown that the Gaussian kernel is the Fourier transform of a zero-mean multivariate Gaussian distribution with covariance matrix σ^{−1} I_d, where I_d denotes the d-dimensional identity matrix,

    k(z, z′) = exp(−‖z − z′‖₂²/2σ²) = E_{ω∼N(0_d, σ^{−1} I_d)}[φ_ω(z) φ_ω(z′)*],

where φ_ω(z) = e^{iω′z}. An empirical approximation to this expectation can be obtained by sampling D frequencies ω ∼ N(0_d, σ^{−1} I_d) and setting k̂(z, z′) = (1/D) Σ_{i=1}^D φ_{ω_i}(z) φ_{ω_i}(z′)*. This implies that the Gram matrix of the Gaussian kernel, K_{ij} = exp(−‖z_i − z_j‖₂²/2σ²), may be approximated with high concentration as K ≈ RR^T, where R = [cos(AG) sin(AG)] ∈ R^{n×2D} (sine and cosine are applied elementwise as scalar functions). This randomized explicit feature mapping for the Gaussian kernel implies that standard linear regression, with R as the design matrix, can then be used to obtain a solution in time O(nD²). By taking the Maclaurin series expansion of sine and cosine up to degree q, we can see that a restricted structured regression problem of the form

    argmin_{x ∈ range(Q)} ‖T_q(AG)x − b‖_p ,

where the matrix Q ∈ R^{2Dq×2D} contains appropriate coefficients of the Maclaurin series, will closely approximate the randomized Fourier features construction of [16]. By dropping or modifying the constraint set x ∈ range(Q), the setup above can, in principle, define a richer class of models. A full error analysis of this approach is the subject of a separate paper.
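The random-feature approximation K ≈ RR^T described above is easy to verify numerically. Below is an illustrative numpy sketch (the sizes are arbitrary; σ = 1 and the frequency scaling G/σ follow the standard random Fourier feature recipe, not code from this paper):

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, D, sigma = 200, 5, 4000, 1.0

A = rng.standard_normal((n, d))
G = rng.standard_normal((d, D)) / sigma      # columns are frequencies omega
AG = A @ G
R = np.hstack([np.cos(AG), np.sin(AG)]) / np.sqrt(D)

# Exact Gaussian kernel Gram matrix for comparison.
sq_dists = ((A[:, None, :] - A[None, :, :]) ** 2).sum(axis=-1)
K = np.exp(-sq_dists / (2 * sigma ** 2))
max_err = np.abs(R @ R.T - K).max()          # shrinks like 1/sqrt(D)
```

Since cos(u)cos(v) + sin(u)sin(v) = cos(u − v), each entry of RR^T is an empirical average of cos(ω·(z_i − z_j)) over the D sampled frequencies, which concentrates around the Gaussian kernel value.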
3 Fast Structured Regression with Sketching
We now develop our randomized solvers for block-Vandermonde structured lp regression problems. In the theoretical developments below, we consider unconstrained regression, though our results generalize straightforwardly to convex constraint sets C. For simplicity, we state all our results for constant failure probability. One can always repeat the regression procedure O(log(1/δ)) times, each time with independent randomness, and choose the best solution found. This reduces the failure probability to δ.

3.1 Background

We begin by giving some notation and then provide the necessary technical background.

Given a matrix M ∈ R^{n×d}, let M_1, . . . , M_d be the columns of M, and M^1, . . . , M^n be the rows of M. Define ‖M‖₁ to be the element-wise ℓ1 norm of M. That is, ‖M‖₁ = Σ_{i∈[d]} ‖M_i‖₁. Let ‖M‖_F = (Σ_{i∈[n], j∈[d]} M_{i,j}²)^{1/2} be the Frobenius norm of M. Let [n] = {1, . . . , n}.

3.1.1 Well-Conditioning and Sampling of a Matrix
Definition 3 ((α, β, 1)-well-conditioning [8]) Given a matrix M ∈ R^{n×d}, we say M is (α, β, 1)-well-conditioned if (1) ‖x‖_∞ ≤ β‖Mx‖₁ for any x ∈ R^d, and (2) ‖M‖₁ ≤ α.

Lemma 4 (Implicit in [20]) Suppose S is an r × n matrix so that for all x ∈ R^d,

    ‖Mx‖₁ ≤ ‖SMx‖₁ ≤ κ‖Mx‖₁.

Let Q·R be a QR-decomposition of SM, so that QR = SM and Q has orthonormal columns. Then MR^{−1} is (d√r, κ, 1)-well-conditioned.

Theorem 5 (Theorem 3.2 of [8]) Suppose U is an (α, β, 1)-well-conditioned basis of an n × d matrix A. For each i ∈ [n], let p_i ≥ min(1, t‖U^i‖₁/‖U‖₁), where t ≥ 32αβ(d ln(12/ε) + ln(2/δ))/ε². Suppose we independently sample each row with probability p_i, and create a diagonal matrix S where S_{i,i} = 0 if i is not sampled, and S_{i,i} = 1/p_i if i is sampled. Then with probability at least 1 − δ, simultaneously for all x ∈ R^d we have:

    | ‖SAx‖₁ − ‖Ax‖₁ | ≤ ε‖Ax‖₁.

We also need the following method of quickly obtaining approximations to the p_i's in Theorem 5, which was originally given in Mahoney et al. [13].

Theorem 6 Let U ∈ R^{n×d} be an (α, β, 1)-well-conditioned basis of an n × d matrix A. Suppose G is a d × O(log n) matrix of i.i.d. Gaussians. Let p_i = min(1, t‖U^i G‖₁/(2√d ‖UG‖₁)) for all i, where t is as in Theorem 5. Then with probability 1 − 1/n over the choice of G, the following occurs. If we sample each row with probability p_i, and create S as in Theorem 5, then with probability at least 1 − δ over our choice of sampled rows, simultaneously for all x ∈ R^d we have:

    | ‖SAx‖₁ − ‖Ax‖₁ | ≤ ε‖Ax‖₁.
3.1.2 Oblivious Subspace Embeddings

Let A ∈ R^{n×d}. We assume that n > d. Let nnz(A) denote the number of non-zero entries of A. We can assume nnz(A) ≥ n and that there are no all-zero rows or columns in A.
ℓ2 Norm. The following family of matrices is due to Charikar et al. [4] (see also [9]): For a parameter t, define a random linear map ΦD : R^n → R^t as follows:
• h : [n] → [t] is a random map so that for each i ∈ [n], h(i) = t′ for t′ ∈ [t] with probability 1/t.
• Φ ∈ {0, 1}^{t×n} is a t × n binary matrix with Φ_{h(i),i} = 1, and all remaining entries 0.
• D is an n × n random diagonal matrix, with each diagonal entry independently chosen to be +1 or −1 with equal probability.

We will refer to Π = ΦD as a sparse embedding matrix.

For certain t, it was recently shown [7] that with probability at least .99 over the choice of Φ and D, for any fixed A ∈ R^{n×d}, we have simultaneously for all x ∈ R^d,

    (1 − ε) · ‖Ax‖₂ ≤ ‖ΠAx‖₂ ≤ (1 + ε) · ‖Ax‖₂,

that is, the entire column space of A is preserved [7]. The best known value of t is t = O(d²/ε²) [14, 15].

We will also use an oblivious subspace embedding known as the subsampled randomized Hadamard transform, or SRHT. See Boutsidis and Gittens's recent article for a state-of-the-art analysis [1].

Theorem 7 (Lemma 6 in [1]) There is a distribution over linear maps Π′ such that with probability .99 over the choice of Π′, for any fixed A ∈ R^{n×d}, we have simultaneously for all x ∈ R^d,

    (1 − ε) · ‖Ax‖₂ ≤ ‖Π′Ax‖₂ ≤ (1 + ε) · ‖Ax‖₂,

where the number of rows of Π′ is t′ = O(ε^{−2}(log d)(√d + log n)²), and the time to compute Π′A is O(nd log t′).
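The sparse embedding Π = ΦD never needs to be stored as a matrix; applying it is a single O(nnz(A)) pass over the rows of A. The numpy sketch below illustrates the construction and checks the norm preservation on one vector (an illustration, not the paper's code; bucket count t and sizes are arbitrary):

```python
import numpy as np

def sparse_embed(A, t, rng):
    """Apply Pi = Phi*D: each row of A gets a random sign and is added
    into one of t hash buckets. Equivalent to forming the t x n matrix
    Phi*D and multiplying, but runs in O(nnz(A)) time."""
    n = A.shape[0]
    h = rng.integers(0, t, size=n)        # hash bucket per row
    s = rng.choice([-1.0, 1.0], size=n)   # random sign per row
    SA = np.zeros((t, A.shape[1]))
    np.add.at(SA, h, s[:, None] * A)      # accumulate signed rows
    return SA

rng = np.random.default_rng(1)
n, d = 5000, 4
A = rng.standard_normal((n, d))
SA = sparse_embed(A, t=2000, rng=rng)
x = rng.standard_normal(d)
ratio = np.linalg.norm(SA @ x) / np.linalg.norm(A @ x)  # close to 1
```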
ℓ1 Norm. The results can be generalized to subspace embeddings with respect to the ℓ1-norm [7, 14, 21]. The best known bounds are due to Woodruff and Zhang [21], so we use their family of embedding matrices in what follows. Here the goal is to design a distribution over matrices Π, so that with probability at least .99, for any fixed A ∈ R^{n×d}, simultaneously for all x ∈ R^d,

    ‖Ax‖₁ ≤ ‖ΠAx‖₁ ≤ κ‖Ax‖₁,

where κ > 1 is a distortion parameter. The best known value of κ, independent of n, for which ΠA can be computed in O(nnz(A)) time is κ = O(d² log² d) [21]. Their family of matrices Π is chosen to be of the form ΦD · E, where ΦD is as above with parameter t = d^{1+γ} for an arbitrarily small constant γ > 0, and E is a diagonal matrix with E_{i,i} = 1/u_i, where u_1, . . . , u_n are independent standard exponentially distributed random variables.

Recall that an exponential distribution has support x ∈ [0, ∞), probability density function (PDF) f(x) = e^{−x} and cumulative distribution function (CDF) F(x) = 1 − e^{−x}. We say a random variable X is exponential if X is chosen from the exponential distribution.
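A quick empirical sanity check of the exponential scaling matrix E (illustrative only: it samples the u_i, applies E to a random A row-wise, and verifies the stated CDF at x = 1; the sizes are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 100000, 3

u = rng.exponential(scale=1.0, size=n)   # standard exponentials u_1..u_n
A = rng.standard_normal((n, d))
EA = A / u[:, None]                      # E*A with E_ii = 1/u_i, applied row-wise

# Empirical CDF at x = 1 should be close to F(1) = 1 - e^{-1} ~ 0.632.
emp_cdf = np.mean(u < 1.0)
target = 1.0 - np.exp(-1.0)
```

The heavy tail of 1/u_i is what makes the ℓ1 guarantee one-sided: norms are never contracted, but can be dilated by the κ factor.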
3.1.3 Fast Vandermonde Multiplication

Lemma 8 Let x_0, . . . , x_{n−1} ∈ R and V = V_{q,n}(x_0, . . . , x_{n−1}). For any y ∈ R^n and z ∈ R^q, the matrix-vector products V y and V^T z can be computed in O((n + q) log² q) time.
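For intuition, both products have obvious naive O(nq) implementations, which Lemma 8 replaces with FFT-based polynomial arithmetic. The numpy sketch below is the naive reference version (useful for checking correctness on small inputs, not the fast algorithm):

```python
import numpy as np

def vand(x, q):
    """Explicit q x n Vandermonde matrix V_{q,n}: entry (r, i) is x_i^r."""
    x = np.asarray(x, dtype=float)
    return x[None, :] ** np.arange(q)[:, None]

x = np.array([1.0, 2.0, 3.0])
V = vand(x, q=3)                 # rows: [1,1,1], [1,2,3], [1,4,9]
y = np.array([1.0, 1.0, 1.0])
z = np.array([1.0, 2.0, 3.0])    # coefficients of z_0 + z_1*t + z_2*t^2

Vy = V @ y                       # (V y)_r = sum_i x_i^r y_i
# V^T z is polynomial evaluation at each x_i (np.polyval wants the
# highest-degree coefficient first, hence the reversal):
Vtz = np.polyval(z[::-1], x)
```

The polynomial-evaluation view of V^T z is exactly what lets FFT-based multipoint evaluation achieve the O((n + q) log² q) bound.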
3.2 Main Lemmas

We handle ℓ2 and ℓ1 separately. Our algorithms use the subroutines given by the next lemmas.

Lemma 9 (Efficient Multiplication of a Sparse Sketch and T_q(A)) Let A ∈ R^{n×d}. Let Π = ΦD be a sparse embedding matrix for the ℓ2 norm with associated hash function h : [n] → [t] for an arbitrary value of t, and let E be any diagonal matrix. There is a deterministic algorithm to compute the product Φ · D · E · T_q(A) in O((nnz(A) + dtq) log² q) time.

Proof: By definition of T_q(A), it suffices to prove this when d = 1. Indeed, if we can prove for a column vector a that the product Φ · D · E · T_q(a) can be computed in O((nnz(a) + tq) log² q) time, then by linearity it will follow that the product Φ · D · E · T_q(A) can be computed in O((nnz(A) +
Algorithm 1 StructRegression-2
1: Input: An n × d matrix A with nnz(A) non-zero entries, an n × 1 vector b, an integer degree q, and an accuracy parameter ε > 0.
2: Output: With probability at least .98, a vector x′ ∈ R^{dq} for which ‖T_q(A)x′ − b‖₂ ≤ (1 + ε) min_x ‖T_q(A)x − b‖₂.
3: Let Π = ΦD be a sparse embedding matrix for the ℓ2 norm with t = O((dq)²/ε²).
4: Compute ΠT_q(A) using the efficient algorithm of Lemma 9 with E set to the identity matrix.
5: Compute Πb.
6: Compute Π′(ΠT_q(A)) and Π′Πb, where Π′ is a subsampled randomized Hadamard transform of Theorem 7 with t′ = O(ε^{−2}(log(dq))(√(dq) + log t)²) rows.
7: Output the minimizer x′ of ‖Π′ΠT_q(A)x′ − Π′Πb‖₂.
=
dtq) log2 q) time for general d. Hence, in what follows, we assume that d = 1 and our matrix A is a
column vector a. Notice that if a is just a column vector, then Tq (A) is equal to Vq,n (a1 , . . . , an )T .
For each k ? [t], define the ordered list Lk = i such that ai 6= 0 and h(i) = k. Let `k = |Lk |.
We define an `k -dimensional vector ? k as follows. If pk (i) is the i-th element of Lk , we set ?ik =
Dpk (i),pk (i) ? Epk (i),pk (i) . Let V k be the submatrix of Vq,n (a1 , . . . , an )T whose rows are in the set
Lk . Notice that V k is itself the transpose of a Vandermonde matrix, where the number of rows of
V k is `k . By Lemma 8, the product ? k V k can be computed in O((`k + q) log2 q) time. Notice that
? k V k is equal to the k-th row of the product ?DETq (a). Therefore, the entire product ?DETq (a)
P
2
2
can be computed in O
k `k log q = O((nnz(a) + tq) log q) time.
Lemma 10 (Efficient Multiplication of Tq (A) on the Right) Let A ? Rn?d . For any vector z,
there is a deterministic algorithm to compute the matrix vector product Tq (A) ? z in O((nnz(A) +
dq) log2 q) time.
The proof is provided in the supplementary material.
Lemma 11 (Efficient Multiplication of Tq (A) on the Left) Let A ? Rn?d . For any vector z,
there is a deterministic algorithm to compute the matrix vector product z ? Tq (A) in O((nnz(A) +
dq) log2 q) time.
The proof is provided in the supplementary material.
3.3 Fast ℓ2-regression

We start by considering the structured regression problem in the case p = 2. We give an algorithm for this problem in Algorithm 1.

Theorem 12 Algorithm StructRegression-2 solves w.h.p. the structured regression problem with p = 2 in time

    O(nnz(A) log² q) + poly(dq/ε).

Proof: By the properties of a sparse embedding matrix (see Section 3.1.2), with probability at least .99, for t = O((dq)²/ε²), we have simultaneously for all y in the span of the columns of T_q(A) adjoined with b,

    (1 − ε)‖y‖₂ ≤ ‖Πy‖₂ ≤ (1 + ε)‖y‖₂,

since the span of this space has dimension at most dq + 1. By Theorem 7, we further have that with probability .99, for all vectors z in the span of the columns of Π(T_q(A) ∘ b),

    (1 − ε)‖z‖₂ ≤ ‖Π′z‖₂ ≤ (1 + ε)‖z‖₂.

It follows that for all vectors x ∈ R^{dq},

    (1 − O(ε))‖T_q(A)x − b‖₂ ≤ ‖Π′Π(T_q(A)x − b)‖₂ ≤ (1 + O(ε))‖T_q(A)x − b‖₂.

It follows by a union bound that with probability at least .98, the output of StructRegression-2 is a (1 + ε)-approximation.

For the time complexity, ΠT_q(A) can be computed in O((nnz(A) + dtq) log² q) time by Lemma 9, while Πb can be computed in O(n) time. The remaining steps can be performed in poly(dq/ε) time, and therefore the overall time is O(nnz(A) log² q) + poly(dq/ε).
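The sketch-and-solve pattern behind Theorem 12 is easy to demonstrate end-to-end. The numpy sketch below is a simplified illustration: T_q(A) is formed explicitly and a dense Gaussian sketch stands in for the sparse-embedding/SRHT composition, so it shows the statistical behavior of sketched least squares, not the nnz(A)-time algorithm itself:

```python
import numpy as np

rng = np.random.default_rng(2)
n, d, q, t = 5000, 3, 4, 1000

A = rng.standard_normal((n, d))
# Explicit T_q(A): q monomial powers of each of the d columns.
Z = np.hstack([A[:, [j]] ** np.arange(q) for j in range(d)])   # n x dq
x_true = rng.standard_normal(d * q)
b = Z @ x_true + 0.1 * rng.standard_normal(n)

x_star, *_ = np.linalg.lstsq(Z, b, rcond=None)      # exact least squares
S = rng.standard_normal((t, n)) / np.sqrt(t)        # Gaussian OSE stand-in
x_sk, *_ = np.linalg.lstsq(S @ Z, S @ b, rcond=None)

res_star = np.linalg.norm(Z @ x_star - b)
res_sk = np.linalg.norm(Z @ x_sk - b)  # within a (1+eps) factor of res_star
```

The sketched problem has only t rows instead of n, yet its minimizer achieves nearly the optimal residual on the full data, which is exactly the guarantee the theorem formalizes.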
Algorithm 2 StructRegression-1
1: Input: An n × d matrix A with nnz(A) non-zero entries, an n × 1 vector b, an integer degree q, and an accuracy parameter ε > 0.
2: Output: With probability at least .98, a vector x′ ∈ R^{dq} for which ‖T_q(A)x′ − b‖₁ ≤ (1 + ε) min_x ‖T_q(A)x − b‖₁.
3: Let Π = ΦDE be a subspace embedding matrix for the ℓ1 norm with t = (dq + 1)^{1+γ} for an arbitrarily small constant γ > 0.
4: Compute ΠT_q(A) = ΦDET_q(A) using the efficient algorithm of Lemma 9.
5: Compute Πb = ΦDEb.
6: Compute a QR-decomposition of Π(T_q(A) ∘ b), where ∘ denotes the adjoining of the column vector b to T_q(A).
7: Let G be a (dq + 1) × O(log n) matrix of i.i.d. Gaussians.
8: Compute R^{−1} · G.
9: Compute (T_q(A) ∘ b) · (R^{−1}G) using the efficient algorithm of Lemma 10 applied to each of the columns of R^{−1}G.
10: Let S be the diagonal matrix of Theorem 6 formed by sampling Õ(q^{1+γ/2} d^{4+γ/2} ε^{−2}) rows of T_q(A) and corresponding entries of b using the scheme of Theorem 6.
11: Output the minimizer x′ of ‖ST_q(A)x′ − Sb‖₁.
3.3.1
Logarithmic Dependence on 1/?
The StructRegression-2 algorithm can be modified to obtain a running time with a logarithmic
dependence on 1/ε by combining sketching-based methods with iterative ones.
Theorem 13 There is an algorithm which solves the structured regression problem with p = 2 in
time O((nnz(A) + dq) log(1/ε)) + poly(dq) w.h.p.
Due to space limitations, the proof is provided in the supplementary material.
3.4 Fast ℓ₁-Regression
We now consider structured regression in the case p = 1. The algorithm in this case is more
complicated than that for p = 2, and is given in Algorithm 2.
Theorem 14 Algorithm StructRegression-1 solves w.h.p. the structured regression problem
with p = 1 in time O(nnz(A) log n log² q) + poly(dq ε⁻¹ log n).
The proof is provided in the supplementary material.
We note that when there is a convex constraint set C, the only change in the above algorithms is to
optimize over x′ ∈ C.
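For intuition about the p = 1 case, ℓ₁ regression can be solved as a linear program, and a row-sampling step in the spirit of the scheme of Theorem 6 can be tried numerically. The sketch below substitutes plain uniform row sampling with 1/p reweighting for the well-conditioned-basis sampling the theorem actually requires, so it carries no worst-case guarantee; the problem sizes and the 1.3 tolerance are illustrative choices of ours.

```python
import numpy as np
from scipy.optimize import linprog

def l1_regression(A, b):
    """Solve min_x ||Ax - b||_1 as the LP: min sum(t) s.t. -t <= Ax - b <= t."""
    n, d = A.shape
    c = np.concatenate([np.zeros(d), np.ones(n)])
    A_ub = np.block([[A, -np.eye(n)], [-A, -np.eye(n)]])
    b_ub = np.concatenate([b, -b])
    # Variables must be free: linprog's default bounds are non-negative.
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(None, None)] * (d + n))
    return res.x[:d]

rng = np.random.default_rng(0)
n, d, p = 400, 4, 0.5
A = rng.normal(size=(n, d))
b = A @ rng.normal(size=d) + rng.laplace(scale=0.2, size=n)

x_opt = l1_regression(A, b)
keep = rng.random(n) < p                         # uniform row sampling ...
x_sub = l1_regression(A[keep] / p, b[keep] / p)  # ... with 1/p reweighting

obj_opt = np.abs(A @ x_opt - b).sum()
obj_sub = np.abs(A @ x_sub - b).sum()  # close to obj_opt on this benign input
```

On well-conditioned Gaussian inputs uniform sampling behaves well; the point of the leverage-style scheme in step 10 is to make such a guarantee hold for arbitrary inputs.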
4 Experiments
We report two sets of experiments on classification and regression datasets. The first set of experiments compares the generalization performance of our structured nonlinear least squares regression models against standard linear regression and against nonlinear regression with random Fourier features [16]. The second set of experiments focuses on the scalability benefits of sketching. We used
Regularized Least Squares Classification (RLSC) for classification.
Generalization performance is reported in Table 1. As expected, ordinary ℓ₂ linear regression is
very fast, especially if the matrix is sparse, but it delivers only mediocre results. The results
improve somewhat with additive polynomial regression, which maintains the sparsity structure and
can therefore exploit fast sparse solvers. Once we introduce random features, thereby introducing
interaction terms, results improve considerably. For the same number of random features D, additive
polynomial regression with random features gets better results than regression with random Fourier
features. If the number of random features is not the same, say D_Fourier = D_Poly · q (where
D_Fourier is the number of Fourier features and D_Poly is the number of random features in the
additive polynomial regression), then regression with random Fourier features seems to outperform
additive polynomial regression with random features. However, computing the random features is
one of the most expensive steps, so computing better approximations with fewer random features is
desirable.
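To make the additive polynomial feature map concrete, here is a minimal sketch, under the assumption that Tq(A) concatenates the elementwise powers A, A², . . . , A^q column-wise, fit by ordinary ℓ₂ regression (with a tiny ridge term for numerical stability). The helper names and the synthetic target are our own illustration; the random-feature and sketching stages are omitted.

```python
import numpy as np

def additive_poly_features(A, q):
    """T_q(A) = [A, A**2, ..., A**q]: elementwise powers of each column,
    concatenated into an n x (d*q) design matrix."""
    return np.hstack([A ** k for k in range(1, q + 1)])

def ridge_fit(Z, y, reg=1e-8):
    """Ordinary l2 regression via normal equations (tiny ridge for stability)."""
    return np.linalg.solve(Z.T @ Z + reg * np.eye(Z.shape[1]), Z.T @ y)

rng = np.random.default_rng(0)
A = rng.uniform(-1, 1, size=(500, 3))
# An additive degree-3 target: no cross terms, so T_3(A) can fit it exactly.
y = 2 * A[:, 0] - A[:, 1] ** 2 + 0.5 * A[:, 2] ** 3

Z = additive_poly_features(A, q=3)
x = ridge_fit(Z, y)
err = np.max(np.abs(Z @ x - y))  # essentially zero for this additive target
```

Because the map contains no cross terms, it preserves sparsity, which is why adding random features (and hence interactions) is what closes the gap to kernel-style methods.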
Figure 1 reports the benefit of sketching in terms of running times, and the trade-off in terms of
accuracy. In this experiment we use a larger sample of the MNIST dataset with 300,000 examples.
Table 1: Comparison of testing error and training time of the different methods, for four methods (ordinary ℓ₂ regression "Ord. Reg.", Ord. Reg. with Fourier features, additive polynomial ℓ₂ regression "Add. Poly. Reg.", and Add. Poly. Reg. with random features) on five datasets: MNIST (classification, n = 60,000, d = 784, k = 10,000), CPU (regression, n = 6,554, d = 21, k = 819), ADULT (classification, n = 32,561, d = 123, k = 16,281), CENSUS (regression, n = 18,186, d = 119, k = 2,273), and FOREST COVER (classification, n = 522,910, d = 54, k = 58,102). In the table, n is the number of training instances, d is the number of features per instance, and k is the number of instances in the test set. For classification tasks, the percent of testing points incorrectly predicted is reported. For regression tasks, we report ‖y_p − y‖₂/‖y‖, where y_p is the predicted values and y is the ground truth.

Figure 1: Examining the performance of sketching. (a) Speedup of sketching and of row sampling over the exact solver, as a function of sketch size (% of examples). (b) Suboptimality of the residual as a function of sketch size. (c) Classification error on the test set (%) as a function of sketch size.
We compute 1,500 random features, and then solve the corresponding additive polynomial regression problem with q = 4, both exactly and with sketching to different numbers of rows. We also
tested a sampling-based approach which simply randomly samples a subset of the rows (no sketching). Figure 1 (a) plots the speedup of the sketched method relative to the exact solution. In these
experiments we use a non-optimized, straightforward implementation that does not exploit fast Vandermonde multiplication or parallel processing. Therefore, running times were measured using a
sequential execution. We measured only the time required to solve the regression problem. For
this experiment we use a machine with two quad-core Intel E5410 @ 2.33GHz, and 32GB DDR2
800MHz RAM. Figure 1 (b) explores the sub-optimality in solving the regression problem. More
specifically, we plot (‖Y_p − Y‖_F − ‖Y_p* − Y‖_F) / ‖Y_p* − Y‖_F, where Y is the labels matrix, Y_p* is
the best approximation (exact solution), and Y_p is the sketched solution. We see that indeed the error
decreases as the size of the sampled matrix grows, and that with a sketch size that is not too big we
get to about a 10% larger objective. In Figure 1 (c) we see that this translates to an increase in error
rate. Encouragingly, a sketch as small as 15% of the number of examples is enough to have a very
small increase in error rate, while still solving the regression problem more than 5 times faster (the
speedup is expected to grow for larger datasets).
Acknowledgements
The authors acknowledge the support from XDATA program of the Defense Advanced Research
Projects Agency (DARPA), administered through Air Force Research Laboratory contract FA8750-12-C-0323.
References
[1] C. Boutsidis and A. Gittens. Improved matrix algorithms via the Subsampled Randomized Hadamard Transform. ArXiv e-prints, Mar. 2012. To appear in the SIAM Journal on Matrix Analysis and Applications.
[2] P. Bühlmann and S. van de Geer. Statistics for High-Dimensional Data. Springer, 2011.
[3] Y. Chang, C. Hsieh, K. Chang, M. Ringgaard, and C. Lin. Low-degree polynomial mapping of data for SVM. JMLR, 11, 2010.
[4] M. Charikar, K. Chen, and M. Farach-Colton. Finding frequent items in data streams. Theoretical Computer Science, 312(1):3-15, 2004.
[5] K. L. Clarkson, P. Drineas, M. Magdon-Ismail, M. W. Mahoney, X. Meng, and D. P. Woodruff. The Fast Cauchy Transform and faster robust regression. CoRR, abs/1207.4684, 2012. Also in SODA 2013.
[6] K. L. Clarkson and D. P. Woodruff. Numerical linear algebra in the streaming model. In Proceedings of the 41st Annual ACM Symposium on Theory of Computing, STOC '09, pages 205-214, New York, NY, USA, 2009. ACM.
[7] K. L. Clarkson and D. P. Woodruff. Low rank approximation and regression in input sparsity time. In Proceedings of the 45th Annual ACM Symposium on Theory of Computing, STOC '13, pages 81-90, New York, NY, USA, 2013. ACM.
[8] A. Dasgupta, P. Drineas, B. Harb, R. Kumar, and M. Mahoney. Sampling algorithms and coresets for ℓp regression. SIAM Journal on Computing, 38(5):2060-2078, 2009.
[9] A. Gilbert and P. Indyk. Sparse recovery using sparse matrices. Proceedings of the IEEE, 98(6):937-947, 2010.
[10] N. Halko, P. G. Martinsson, and J. Tropp. Finding structure with randomness: Probabilistic algorithms for constructing approximate matrix decompositions. SIAM Review, 53(2):217-288, 2011.
[11] Q. Le, T. Sarlós, and A. Smola. Fastfood: computing Hilbert space expansions in loglinear time. In Proceedings of the International Conference on Machine Learning, ICML '13, 2013.
[12] M. W. Mahoney. Randomized algorithms for matrices and data. Foundations and Trends in Machine Learning, 3(2):123-224, 2011.
[13] M. W. Mahoney, P. Drineas, M. Magdon-Ismail, and D. P. Woodruff. Fast approximation of matrix coherence and statistical leverage. In Proceedings of the 29th International Conference on Machine Learning, ICML '12, 2012.
[14] X. Meng and M. W. Mahoney. Low-distortion subspace embeddings in input-sparsity time and applications to robust linear regression. In Proceedings of the 45th Annual ACM Symposium on Theory of Computing, STOC '13, pages 91-100, New York, NY, USA, 2013. ACM.
[15] J. Nelson and H. L. Nguyen. OSNAP: Faster numerical linear algebra algorithms via sparser subspace embeddings. CoRR, abs/1211.1002, 2012.
[16] A. Rahimi and B. Recht. Random features for large-scale kernel machines. In Proceedings of Neural Information Processing Systems, NIPS '07, 2007.
[17] V. Rokhlin and M. Tygert. A fast randomized algorithm for overdetermined linear least-squares regression. Proceedings of the National Academy of Sciences, 105(36):13212, 2008.
[18] W. Rudin. Fourier Analysis on Groups. Wiley Classics Library. Wiley-Interscience, New York, 1994.
[19] T. Sarlós. Improved approximation algorithms for large matrices via random projections. In Proceedings of the IEEE Symposium on Foundations of Computer Science, FOCS '06, pages 143-152, 2006.
[20] C. Sohler and D. P. Woodruff. Subspace embeddings for the ℓ₁-norm with applications. In Proceedings of the 43rd Annual ACM Symposium on Theory of Computing, STOC '11, pages 755-764, 2011.
[21] D. P. Woodruff and Q. Zhang. Subspace embeddings and ℓp regression using exponential random variables. In COLT, 2013.
Trading Computation for Communication:
Distributed Stochastic Dual Coordinate Ascent
Tianbao Yang
NEC Labs America, Cupertino, CA 95014
[email protected]
Abstract
We present and study a distributed optimization algorithm based on stochastic dual coordinate ascent. Stochastic dual coordinate ascent methods enjoy strong theoretical guarantees and often perform better than stochastic gradient descent methods in optimizing regularized loss minimization problems, yet little effort has been devoted to studying them in a distributed framework. We
make progress along this line by presenting a distributed stochastic dual coordinate ascent algorithm in a star network, together with an analysis of the tradeoff between computation and communication. We verify our analysis by experiments
on real data sets. Moreover, we compare the proposed algorithm with distributed
stochastic gradient descent methods and distributed alternating direction methods
of multipliers for optimizing SVMs in the same distributed framework, and observe competitive performances.
1 Introduction
In recent years, the size of data in machine learning applications has grown at an unprecedented rate. In order to efficiently solve large-scale machine learning problems with millions of
and even billions of data points, it has become popular to take advantage of the computational power
of multi-cores in a single machine or multi-machines on a cluster to optimize the problems in a parallel fashion or a distributed fashion [2].
In this paper, we consider the following generic optimization problem arising ubiquitously in supervised machine learning applications:

    min_{w ∈ R^d} P(w),  where  P(w) = (1/n) Σ_{i=1}^n φ(wᵀxᵢ; yᵢ) + λ g(w),    (1)

where w ∈ R^d denotes the linear predictor to be optimized, (xᵢ, yᵢ), xᵢ ∈ R^d, i = 1, . . . , n denote
the instance-label pairs of a set of data points, φ(z; y) denotes a loss function and g(w) denotes a
regularization on the linear predictor. Throughout the paper, we assume the loss function φ(z; y) is
convex w.r.t. the first argument, and we refer to the problem in (1) as the Regularized Loss Minimization
(RLM) problem.
The RLM problem has been studied extensively in machine learning, and many efficient sequential
algorithms have been developed in the past decades [8, 16, 10]. In this work, we aim to solve
the problem in a distributed framework by leveraging the capabilities of tens to hundreds of CPU
cores. In contrast to previous works of distributed optimization that are based on either (stochastic)
gradient descent (GD and SGD) methods [21, 11] or alternating direction methods of multipliers
(ADMM) [2, 23], we motivate our research from the recent advances on (stochastic) dual coordinate
ascent (DCA and SDCA) algorithms [8, 16]. It has been observed that DCA and SDCA algorithms
can have comparable and sometimes even better convergence speed than GD and SGD methods.
However, little effort has been made to study them in a distributed fashion and to compare them with SGD-based and ADMM-based distributed algorithms.
In this work, we bridge the gap by developing a Distributed Stochastic Dual Coordinate Ascent
(DisDCA) algorithm for solving the RLM problem. We summarize the proposed algorithm and our
contributions as follows:
? The presented DisDCA algorithm possesses two key characteristics: (i) parallel computation over K machines (or cores); (ii) sequential updating of m dual variables per iteration
on individual machines, followed by a "reduce" step for communication among processes.
It enjoys strong convergence rate guarantees for smooth or non-smooth loss functions.
? We analyze the tradeoff between computation and communication of DisDCA invoked by
m and K. Intuitively, increasing the number m of dual variables per iteration aims at
reducing the number of iterations for convergence and therefore mitigating the pressure
caused by communication. Theoretically, our analysis reveals the effective region of m, K
versus the regularization path of λ.
? We present a practical variant of DisDCA and make a comparison with distributed ADMM.
We verify our analysis by experiments and demonstrate the effectiveness of DisDCA by
comparing with SGD-based and ADMM-based distributed optimization algorithms running in the same distributed framework.
2 Related Work
Recent years have seen the great emergence of distributed algorithms for solving machine learning
related problems [2, 9]. In this section, we focus our review on distributed optimization techniques.
Many of them are based on stochastic gradient descent methods or alternating direction methods of
multipliers.
Distributed SGD methods utilize the computing resources of multiple machines to handle a large
number of examples simultaneously, which to some extent alleviates the high computational load
per iteration of GD methods and also improves the performance of sequential SGD methods. The
simplest implementation of a distributed SGD method is to calculate the stochastic gradients on
multiple machines, and to collect these stochastic gradients for updating the solution on a master
machine. This idea has been implemented in a MapReduce framework [13, 4] and an MPI framework [21, 11]. Many variants of GD methods have been deployed in a similar style [1]. ADMM
has been employed for solving machine learning problems in a distributed fashion [2, 23], due to
its superior convergence and performance [5, 23]. The original ADMM [7] was proposed for solving equality-constrained minimization problems. The algorithms that adopt ADMM for solving
the RLM problems in a distributed framework are based on the idea of global variable consensus.
Recently, several works [19, 14] have made efforts to extend ADMM to its online or stochastic
versions. However, these suffer from relatively slow convergence rates.
The advances in DCA and SDCA algorithms [12, 8, 16] motivate the present work. These studies
have shown that in some regimes (e.g., when a relatively high-accuracy solution is needed), SDCA
can outperform SGD methods. In particular, S. Shalev-Shwartz and T. Zhang [16] have derived
new bounds on the duality gap, which have been shown to be superior to earlier results. However,
little effort has been made to extend these types of methods to a distributed fashion and to compare
them with SGD-based algorithms and ADMM-based distributed algorithms. We bridge this gap by
presenting and studying a distributed stochastic dual coordinate ascent algorithm. It has been brought to our
attention that M. Takáč et al. [20] have recently published a paper to study the parallel speedup of
mini-batch primal and dual methods for SVM with hinge loss and establish the convergence bounds
of mini-batch Pegasos and SDCA depending on the size of the mini-batch. This work differs from theirs in that (i) we explicitly take into account the tradeoff between computation
and communication; (ii) we present a more practical variant and make a comparison between the
proposed algorithm and ADMM in view of solving the subproblems, and (iii) we conduct empirical
studies for comparison with these algorithms. Other related but different work includes [3], which
presents Shotgun, a parallel coordinate descent algorithm for solving `1 regularized minimization
problems.
There are other unique issues arising in distributed optimization, e.g., synchronous vs. asynchronous updates, and star networks vs. arbitrary networks. All these issues are related to the tradeoff between
communication and computation [22, 24]. Research in these directions is beyond the scope of this
work and can be considered as future work.
3 Distributed Stochastic Dual Coordinate Ascent
In this section, we present a distributed stochastic dual coordinate ascent (DisDCA) algorithm and
its convergence bound, and analyze the tradeoff between computation and communication. We also
present a practical variant of DisDCA and make a comparison with ADMM. We first present some
notations and preliminaries.
For simplicity of presentation, we let φᵢ(wᵀxᵢ) = φ(wᵀxᵢ; yᵢ). Let φᵢ*(α) and g*(v) be the convex
conjugates of φᵢ(z) and g(w), respectively. We assume g*(v) is continuously differentiable. It is easy
to show that the problem in (1) has the dual problem given below:

    max_{α ∈ R^n} D(α),  where  D(α) = (1/n) Σ_{i=1}^n −φᵢ*(−αᵢ) − λ g*((1/(λn)) Σ_{i=1}^n αᵢxᵢ).    (2)

Let w* be the optimal solution to the primal problem in (1) and α* be the optimal solution to the
dual problem in (2). If we define v(α) = (1/(λn)) Σ_{i=1}^n αᵢxᵢ and w(α) = ∇g*(v), it can be verified
that w(α*) = w* and P(w(α*)) = D(α*). In this paper, we aim to optimize the dual problem (2)
in a distributed environment where the data are distributed evenly across K machines. Let
(x_{k,i}, y_{k,i}), i = 1, . . . , n_k denote the training examples on machine k. For ease of analysis, we
assume n_k = n/K. We denote by α_{k,i} the dual variable associated with x_{k,i}, and by φ_{k,i}(·), φ*_{k,i}(·)
the corresponding loss function and its convex conjugate. To simplify the analysis of our algorithm,
and without loss of generality, we make the following assumptions about the problem:
• φᵢ(z) is either a (1/γ)-smooth function or an L-Lipschitz continuous function (c.f. the
  definitions given below). Exemplar smooth loss functions include, e.g., the L2 hinge loss
  φᵢ(z) = max(0, 1 − yᵢz)² and the logistic loss φᵢ(z) = log(1 + exp(−yᵢz)). Commonly used
  Lipschitz continuous functions are the L1 hinge loss φᵢ(z) = max(0, 1 − yᵢz) and the absolute
  loss φᵢ(z) = |yᵢ − z|.
• g(w) is a 1-strongly convex function w.r.t. ‖·‖₂. Examples include the squared ℓ₂ norm
  (1/2)‖w‖₂² and the elastic net (1/2)‖w‖₂² + ρ‖w‖₁.
• For all i, ‖xᵢ‖₂ ≤ 1, φᵢ(z) ≥ 0 and φᵢ(0) ≤ 1.
Definition 1. A function φ(z) : R → R is an L-Lipschitz continuous function if, for all a, b ∈ R,
|φ(a) − φ(b)| ≤ L|a − b|. A function φ(z) : R → R is (1/γ)-smooth if it is differentiable and its
gradient ∇φ(z) is (1/γ)-Lipschitz continuous, or, for all a, b ∈ R, we have φ(a) ≤ φ(b) + (a −
b)∇φ(b) + (1/(2γ))(a − b)². A convex function g(w) : R^d → R is β-strongly convex w.r.t. a norm ‖·‖
if, for any s ∈ [0, 1] and w₁, w₂ ∈ R^d, g(sw₁ + (1 − s)w₂) ≤ s g(w₁) + (1 − s) g(w₂) − (1/2) s(1 −
s) β ‖w₁ − w₂‖².
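As a numeric sanity check on the primal-dual relations above, both problems have closed forms when φᵢ(z) = (1/2)(z − yᵢ)² (the square loss, which is 1-smooth) and g(w) = (1/2)‖w‖₂²: then φᵢ*(u) = uyᵢ + u²/2, ∇g*(v) = v, and maximizing the quadratic D(α) gives α* = (I + XXᵀ/(λn))⁻¹y. The instance below is an illustrative construction of ours, not from the paper; it verifies w(α*) = w* and P(w(α*)) = D(α*).

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, lam = 50, 8, 0.3
X = rng.normal(size=(n, d))
X /= np.maximum(1.0, np.linalg.norm(X, axis=1, keepdims=True))  # ||x_i||_2 <= 1
y = rng.normal(size=n)

# Primal ridge solution: w* = (X^T X + lam*n*I)^{-1} X^T y.
w_star = np.linalg.solve(X.T @ X + lam * n * np.eye(d), X.T @ y)

# Dual solution: D(a) = (1/n) sum(a_i y_i - a_i^2/2) - (lam/2)||v||^2 with
# v = X^T a / (lam*n); setting grad D = 0 gives a* = (I + X X^T/(lam*n))^{-1} y.
a_star = np.linalg.solve(np.eye(n) + X @ X.T / (lam * n), y)
w_from_dual = X.T @ a_star / (lam * n)  # w(a*) = grad g*(v(a*)) = v(a*)

P = np.sum((X @ w_star - y) ** 2) / (2 * n) + lam / 2 * (w_star @ w_star)
D = np.mean(a_star * y - a_star ** 2 / 2) - lam / 2 * (w_from_dual @ w_from_dual)
```

The two solves agree through the matrix identity (XᵀX + λnI)⁻¹Xᵀ = Xᵀ(XXᵀ + λnI)⁻¹, which is exactly the w(α*) = w* correspondence for this loss.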
3.1 DisDCA Algorithm: The Basic Variant
The detailed steps of the basic variant of the DisDCA algorithm are described by the pseudocode in
Figure 1. The algorithm deploys K processes running simultaneously on K machines (or cores)1 ,
each of which only accesses its associated training examples. Each machine calls the same procedure SDCA-mR, where mR manifests two unique characteristics of SDCA-mR compared to SDCA.
(i) At each iteration of the outer loop, m examples instead of one are randomly sampled for updating
their dual variables. This is implemented by an inner loop that costs the most computation at each
outer iteration. (ii) After updating the m randomly selected dual variables, it invokes a function
Reduce to collect the updated information from all K machines that accommodates naturally to the
distributed environment. The Reduce function acts exactly like MPI::AllReduce if one wants to
implement the algorithm in an MPI framework. It essentially sends Δv_k = (1/(λn)) Σ_{j=1}^m Δα_{k,i_j} x_{k,i_j} to a
process, adds all of them to v^{t−1}, and then broadcasts the updated v^t to all K processes. It is this step
that involves the communication among the K machines. Intuitively, smaller m yields less computation and slower convergence, and therefore more communication, and vice versa. In the next subsection,
we give a rigorous analysis of the convergence, computation and communication.
Remark: The goal of the updates is to increase the dual objective. The particular options presented
in routine IncDual maximize lower bounds of the dual objective. More options are provided
¹ We use process and machine interchangeably.
DisDCA Algorithm (The Basic Variant)
Start K processes by calling the following procedure SDCA-mR with input m and T

Procedure SDCA-mR
Input: number of iterations T, number of samples m at each iteration
Let: α_k^0 = 0, v^0 = 0, w^0 = ∇g*(0)
Read Data: (x_{k,i}, y_{k,i}), i = 1, · · · , n_k
Iterate: for t = 1, . . . , T
    Iterate: for j = 1, . . . , m
        Randomly pick i ∈ {1, · · · , n_k} and let i_j = i
        Find Δα_{k,i} by calling routine IncDual(w = w^{t−1}, scl = mK)
        Set α_{k,i}^t = α_{k,i}^{t−1} + Δα_{k,i}
    Reduce: v^t := v^{t−1} + (1/(λn)) Σ_{j=1}^m Δα_{k,i_j} x_{k,i_j}
    Update: w^t = ∇g*(v^t)

Routine IncDual(w, scl)
Option I:
    Let Δα_{k,i} = arg max_{Δα} { −φ*_{k,i}(−(α_{k,i}^{t−1} + Δα)) − Δα · x_{k,i}ᵀw − (scl/(2λn)) (Δα)² ‖x_{k,i}‖₂² }
Option II:
    Let z_{k,i}^{t−1} = −∇φ_{k,i}(x_{k,i}ᵀw) − α_{k,i}^{t−1}
    Let Δα_{k,i} = s_{k,i} z_{k,i}^{t−1}, where s_{k,i} ∈ [0, 1] maximizes
        s ( φ*_{k,i}(−α_{k,i}^{t−1}) + φ_{k,i}(x_{k,i}ᵀw) + z_{k,i}^{t−1} x_{k,i}ᵀw ) + (γ s(1 − s)/2) (z_{k,i}^{t−1})² − (scl/(2λn)) s² (z_{k,i}^{t−1})² ‖x_{k,i}‖₂²

Figure 1: The Basic Variant of the DisDCA Algorithm
in the supplementary materials. The solutions to Option I have closed forms for several loss functions
(e.g., L1 and L2 hinge losses, square loss and absolute loss) [16]. Note that, different from the options
presented in [16], the ones in IncDual use a slightly different scalar factor mK in the quadratic term
to adapt for the number of updated dual variables.
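For concreteness, the basic variant can be simulated in a single process by looping over the K "machines" against a stale w^{t−1} and applying the synchronous Reduce at the end of each outer iteration. The sketch below works out the Option I closed form for the L2 (squared) hinge loss ourselves (φ(z) = max(0, 1 − yz)², whose conjugate satisfies φ*(−a) = −ay + a²/4 for ay ≥ 0), with g(w) = (1/2)‖w‖₂² so that ∇g*(v) = v; the data and parameter values are illustrative, and this is a sketch of the algorithm's logic, not the paper's implementation.

```python
import numpy as np

def disdca_l2svm(X, y, K=4, m=5, lam=0.1, T=800, seed=0):
    """Single-process simulation of the basic DisDCA variant for the squared
    hinge loss.  Option I reduces to the closed form
        da = (y_i - x_i^T w - alpha_i/2) / (1/2 + scl*||x_i||^2 / (lam*n)),
    clipped so that (alpha_i + da) * y_i >= 0 keeps the dual feasible."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    parts = np.array_split(np.arange(n), K)   # data split evenly over K machines
    alpha = np.zeros(n)
    v = np.zeros(d)
    scl = m * K
    for _ in range(T):
        w = v                                 # w^{t-1} = grad g*(v^{t-1}), stale
        dv = np.zeros(d)                      # accumulates the Reduce messages
        for idx in parts:                     # each "machine", using the stale w
            for _ in range(m):
                i = rng.choice(idx)
                q = scl * (X[i] @ X[i]) / (lam * n)
                da = (y[i] - X[i] @ w - alpha[i] / 2.0) / (0.5 + q)
                da = y[i] * max(0.0, (alpha[i] + da) * y[i]) - alpha[i]  # clip
                alpha[i] += da
                dv += da * X[i] / (lam * n)
        v = v + dv                            # Reduce, then Update: w^t = v^t
    w = v
    primal = np.mean(np.maximum(0.0, 1 - y * (X @ w)) ** 2) + lam / 2 * (w @ w)
    dual = np.mean(alpha * y - alpha ** 2 / 4) - lam / 2 * (w @ w)
    return w, primal, dual
```

On a small synthetic problem this drives the duality gap P(w^T) − D(α^T) well below 10⁻², consistent with the linear-rate behavior expected for smooth losses.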
3.2 Convergence Analysis: Tradeoff between Computation and Communication
In this subsection, we present the convergence bound of the DisDCA algorithm and analyze the
tradeoff between computation, convergence and communication. The theorem below states the convergence rate of the DisDCA algorithm for smooth loss functions (the omitted proofs and other derivations can be found in the supplementary material).
Theorem 1. For a (1/γ)-smooth loss function φᵢ and a 1-strongly convex function g(w), to obtain
an ε_P duality gap E[P(w^T) − D(α^T)] ≤ ε_P, it suffices to have

    T ≥ ( n/(mK) + 1/(λγ) ) log( ( n/(mK) + 1/(λγ) ) · 1/ε_P ).
Remark: In [20], the authors established a convergence bound of mini-batch SDCA for L1-SVM
that depends on the spectral norm of the data. Applying their trick to our algorithmic framework is
equivalent to replacing the scalar mK in the DisDCA algorithm with σ_{mK}, which characterizes the spectral
norm of the sampled data across all machines, X_{mK} = (x_{11}, . . . , x_{1m}, . . . , x_{Km}). The resulting
convergence bound for (1/γ)-smooth loss functions is given by substituting the term 1/(λγ) with (σ_{mK}/(mK)) · (1/(λγ)).
The value of σ_{mK} is usually smaller than mK, and the authors in [20] have provided an expression
for computing σ_{mK} based on the spectral norm σ of the data matrix X/√n = (x₁, . . . , xₙ)/√n.
However, in practice the value of σ cannot be computed exactly. A safe upper bound of σ = 1,
assuming ‖xᵢ‖₂ ≤ 1, gives the value mK to σ_{mK}, which reduces to the scalar presented in Figure 1. The authors in [20] also presented an aggressive variant to adjust σ adaptively and observed
improvements. In Section 3.3 we develop a practical variant that enjoys more speed-up compared to
the basic variant and their aggressive variant.
Tradeoff between Computation and Communication We are now ready to discuss the tradeoff
between computation and communication based on the worst-case analysis as indicated by Theorem 1. For the analysis of the tradeoff between computation and communication invoked by the number
of samples m and the number of machines K, we fix the number of examples n and the number of
dimensions d. When we analyze the tradeoff involving m, we fix K, and vice versa. In the following analysis, we assume the size of the model to be communicated is a fixed d and is independent of m,
though in some cases (e.g., high-dimensional sparse data) one may communicate a smaller amount of
data that depends on m.
It is notable that the bound on the number of iterations contains a term 1/(λγ). To take this term
into account, we first consider an interesting region of λ for achieving a good generalization error.
Several works [17, 18, 6] have suggested that in order to obtain an optimal generalization
error, the optimal λ scales like Θ(n^{−1/(1+τ)}), where τ ∈ (0, 1]. For example, the analysis in [18]
suggests λ = Θ(n^{−1/2}) for SVM.
First, we consider the tradeoff involving the number of samples m by fixing the number of processes K. We note that the communication cost is proportional to the number of iterations
$$ T = \Theta\left(\frac{n}{mK} + \frac{n^{1/(1+\tau)}}{\gamma}\right), $$
while the computation cost per node is proportional to
$$ mT = \Theta\left(\frac{n}{K} + \frac{m\,n^{1/(1+\tau)}}{\gamma}\right) $$
due to the fact that each iteration involves m examples. When m ≤ Θ(γ n^{τ/(1+τ)}/K), the communication cost decreases as m increases, while the computation cost increases as m increases, though it is dominated by Θ(n/K). When the value of m is greater than Θ(γ n^{τ/(1+τ)}/K), the communication cost is dominated by Θ(n^{1/(1+τ)}/γ), so increasing the value of m becomes less influential in reducing the communication cost, while the computation cost blows up substantially.
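To make the crossover concrete, here is a small numerical sketch (our own illustrative constants; it only mimics the Θ(·) shapes from the worst-case analysis, not measured running times):

```python
def costs(m, K, n=10**6, tau=1.0, gamma=1.0):
    """Worst-case cost proxies with constants dropped (illustrative only).

    Returns (communication, computation), where
      communication ~ T  = n/(mK) + n^{1/(1+tau)}/gamma   (number of rounds)
      computation   ~ mT                                  (work per node)
    """
    T = n / (m * K) + n ** (1.0 / (1.0 + tau)) / gamma
    return T, m * T

K = 10
# Crossover gamma * n^{tau/(1+tau)} / K; equals 100 for n=1e6, tau=1, gamma=1.
m_star = 1.0 * (10**6) ** (1.0 / 2.0) / K

# Below the crossover, growing m slashes communication at modest extra work;
# above it, communication barely moves while per-node computation blows up.
for m in (1, 100, 10000):
    comm, comp = costs(m, K)
    print(m, round(comm), round(comp))
```

Reading the three printed rows confirms the analysis: the round count drops by more than an order of magnitude as m grows toward m*, then nearly plateaus, while the per-node work keeps growing linearly in m.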
Similarly, we can also understand how the number of nodes K affects the tradeoff between the communication cost, proportional to
$$ \Theta(KT) = \Theta\left(\frac{n}{m} + \frac{K\,n^{1/(1+\tau)}}{\gamma}\right), $$
and the computation cost, proportional to Θ(n/K + m n^{1/(1+τ)}/γ). When K ≤ Θ(γ n^{τ/(1+τ)}/m), as K increases the computation cost decreases and the communication cost increases. When it is greater than Θ(γ n^{τ/(1+τ)}/m), the computation cost is dominated by Θ(m n^{1/(1+τ)}/γ) and the effect of increasing K on reducing the computation cost diminishes.
According to the above analysis, we can conclude that when mK ≤ Θ(nλγ), which we refer to as the effective region of m and K, the communication cost can be reduced by increasing the number of samples m, and the computation cost can be reduced by increasing the number of nodes K. Meanwhile, increasing the number of samples m increases the computation cost, and similarly increasing the number of nodes K increases the communication cost. It is notable that the larger the value of γ, the wider the effective region of m and K, and vice versa. To verify the tradeoff between communication and computation, we present empirical studies in Section 4. Although the smooth loss functions are the most interesting case, the theorem below gives the convergence of DisDCA for Lipschitz continuous loss functions.
Theorem 2. For L-Lipschitz continuous loss functions φ_i and a 1-strongly convex function g(w), to obtain an εP duality gap E[P(w̄_T) − D(ᾱ_T)] ≤ εP, it suffices to have
$$ T \;\geq\; \frac{n}{mK} + T_0 + \frac{20L^2}{\lambda\epsilon_P}, \qquad T_0 \;\geq\; \max\left(0,\; \left\lceil \frac{n}{mK}\log\left(\frac{\lambda n}{2mKL^2}\right)\right\rceil\right) + \frac{n}{mK} + \frac{4L^2}{\lambda\epsilon_P}, $$
where $\bar{w}_T = \sum_{t=T_0}^{T-1} w_t/(T-T_0)$ and $\bar{\alpha}_T = \sum_{t=T_0}^{T-1} \alpha_t/(T-T_0)$.
Remark: In this case, the effective region of m and K is mK ≤ Θ(nλεP), which is narrower than that for smooth loss functions, especially when εP is small. Similarly, if one can obtain an accurate estimate of the spectral norm of all the data and use σ_mK in place of mK in Figure 1, the convergence bound can be improved, with 4L²σ_mK/(λεP mK) in place of 4L²/(λεP). Again, the practical variant presented in the next section yields more speed-up.
² We simply ignore the communication delay in our analysis.
Initialize: u_t^0 = w^{t−1}
Iterate: for j = 1, . . . , m
    Randomly pick i ∈ {1, . . . , n_k} and let i_j = i
    Find Δα_{k,i} by calling routine IncDual(w = u_t^{j−1}, scl = K)
    Update α_{k,i}^t = α_{k,i}^{t−1} + Δα_{k,i} and update u_t^j = u_t^{j−1} + (1/(λn_k)) Δα_{k,i} x_{k,i}

Figure 2: The practical updates at the t-th iteration of the practical variant of DisDCA.
3.3 A Practical Variant of DisDCA and a Comparison with ADMM
In this section, we first present a practical variant of DisDCA motivated by intuition, and then we make a comparison between DisDCA and ADMM, which provides more insight into the practical variant of DisDCA and the differences between the two algorithms. In what follows, we are particularly interested in ℓ2 norm regularization, where g(w) = ‖w‖₂²/2 and v = w.
A Practical Variant We note that in Algorithm 1, when updating the values of the sampled dual variables, the algorithm does not use the updated information but instead w^{t−1} from the last iteration. A potential improvement is therefore to leverage up-to-date information when updating the dual variables. To this end, we maintain a local copy of w_k in each machine. At the beginning of iteration t, all w_k^0, k = 1, . . . , K, are synchronized with the global w^{t−1}. Then, in individual machines, the j-th sampled dual variable is updated by IncDual(w_k^{j−1}, K), and the local copy w_k^j is also updated by w_k^j = w_k^{j−1} + (1/(λn_k)) Δα_{k,i_j} x_{k,i_j} for updating the next dual variable. At the end of the iteration, the local solutions are synchronized to the global variable w^t = w^{t−1} + (1/(λn)) Σ_{k=1}^K Σ_{j=1}^m Δα^t_{k,i_j} x_{k,i_j}. It is important to note that the scalar factor in IncDual is now K, because the dual variables are updated incrementally and there are K processes running in parallel. The detailed steps are presented in Figure 2, where we abuse the same notation u_t^j for the local variable at all processes. The experiments in Section 4 verify the improvements of the practical variant over the basic variant. It remains an open problem to us what the convergence bound of this practical variant is. However, next we establish a connection between DisDCA and ADMM that sheds light on the motivation behind the practical variant and the differences between the two algorithms.
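As a concrete single-process sketch of the updates in Figure 2, the code below mimics the practical variant for L1-SVM (hinge loss). The closed-form IncDual step is the standard SDCA hinge-loss increment with its quadratic term scaled by scl; the toy data, names, and the sequential for-loop over "machines" are our own simplifications of the MPI implementation, not the paper's code.

```python
import numpy as np

def inc_dual_hinge(w, alpha_i, x_i, y_i, lam, n, scl):
    """Closed-form maximizer of the hinge-loss dual increment, with the
    quadratic term scaled by `scl` (mK in the basic variant, K here)."""
    qii = scl * float(x_i @ x_i)
    proj = alpha_i * y_i + lam * n * (1.0 - y_i * float(x_i @ w)) / qii
    return y_i * min(1.0, max(0.0, proj)) - alpha_i

def disdca_practical(Xs, ys, lam, m, T, seed=0):
    """One-process mock of the practical variant: each 'machine' k keeps a
    local copy u of w, updates m sampled duals incrementally, then all local
    increments are folded into the global w (the synchronization step)."""
    rng = np.random.default_rng(seed)
    K, d = len(Xs), Xs[0].shape[1]
    n = sum(X.shape[0] for X in Xs)
    w = np.zeros(d)
    alphas = [np.zeros(X.shape[0]) for X in Xs]
    for _ in range(T):
        total = np.zeros(d)
        for k in range(K):
            u = w.copy()                         # local copy synced with global w
            nk = Xs[k].shape[0]
            for _ in range(m):
                i = int(rng.integers(nk))
                da = inc_dual_hinge(u, alphas[k][i], Xs[k][i], ys[k][i],
                                    lam, n, scl=K)
                alphas[k][i] += da
                u += da * Xs[k][i] / (lam * nk)  # up-to-date local primal
                total += da * Xs[k][i]
        w = w + total / (lam * n)                # synchronize to global variable
    return w

def primal(w, Xs, ys, lam):
    """Hinge-loss primal objective P(w) over all machines' data."""
    losses = np.concatenate([np.maximum(0.0, 1.0 - y * (X @ w))
                             for X, y in zip(Xs, ys)])
    return float(losses.mean() + 0.5 * lam * float(w @ w))
```

On a small separable toy set, the primal objective drops quickly from its value at w = 0 (which is exactly 1 for the hinge loss).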
A Comparison with ADMM First, we note that the goal of the updates at each iteration of DisDCA is to increase the dual objective by maximizing the following objective:
$$ \max_{\alpha}\; \frac{1}{n_k}\sum_{i=1}^{m} -\phi_i^*(-\alpha_i) \;-\; \frac{\lambda}{2}\left\| \hat{w}^{t-1} + \frac{1}{\lambda n_k}\sum_{i=1}^{m}\alpha_i x_i \right\|_2^2, \qquad (3) $$
where $\hat{w}^{t-1} = w^{t-1} - \frac{1}{\lambda n_k}\sum_{i=1}^m \alpha_i^{t-1} x_i$ and we suppress the subscript k associated with each
machine. The updates presented in Algorithm 1 are solutions to maximizing lower bounds of the above objective function obtained by decoupling the m dual variables. It is not difficult to derive that the dual problem in (3) has the following primal problem (a detailed derivation and others can be found in the supplementary materials):
$$ \text{DisDCA:}\quad \min_{w}\; \frac{1}{n_k}\sum_{i=1}^{m}\phi_i(x_i^\top w) + \frac{\lambda}{2}\left\| w - \left( w^{t-1} - \frac{1}{\lambda n_k}\sum_{i=1}^{m}\alpha_i^{t-1} x_i \right) \right\|_2^2. \qquad (4) $$
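The correspondence between (3) and (4) is the standard Fenchel-duality calculation; a brief sketch in our notation (matching the suppressed-k convention above):

```latex
% Write phi_i(z) = sup_{alpha_i} ( -alpha_i z - phi_i^*(-alpha_i) ) and swap min/max:
\min_w \frac{1}{n_k}\sum_{i=1}^m \phi_i(x_i^\top w)
     + \frac{\lambda}{2}\|w-\hat w^{t-1}\|_2^2
 = \max_{\alpha}\; \frac{1}{n_k}\sum_{i=1}^m -\phi_i^*(-\alpha_i)
     + \min_w \Big[ \frac{\lambda}{2}\|w-\hat w^{t-1}\|_2^2
       - \frac{1}{n_k}\sum_{i=1}^m \alpha_i x_i^\top w \Big].
% The inner minimum is attained at
%   w = \hat w^{t-1} + \frac{1}{\lambda n_k}\sum_{i=1}^m \alpha_i x_i,
% and plugging it back yields, up to the constant
% \frac{\lambda}{2}\|\hat w^{t-1}\|_2^2, exactly the dual objective (3).
```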
We refer to ŵ^{t−1} as the penalty solution. Second, let us recall the updating scheme of ADMM. The (deterministic) ADMM algorithm at iteration t solves the following problem in each machine:
$$ \text{ADMM:}\quad w_k^t = \arg\min_{w}\; \frac{1}{n_k}\sum_{i=1}^{n_k}\phi_i(x_i^\top w) + \frac{\rho K}{2}\left\| w - \underbrace{(w^{t-1} - u_k^{t-1})}_{\hat{w}^{t-1}} \right\|_2^2, \qquad (5) $$
where ρ is a penalty parameter and w^{t−1} is the global primal variable, updated by
$$ w^t = \frac{\rho K\,(\bar{w}^t + \bar{u}^{t-1})}{\rho K + \lambda}, \qquad\text{with}\quad \bar{w}^t = \frac{1}{K}\sum_{k=1}^{K} w_k^t, \quad \bar{u}^{t-1} = \frac{1}{K}\sum_{k=1}^{K} u_k^{t-1}, $$
and u_k^{t−1} is the local "dual" variable, updated by u_k^t = u_k^{t−1} + w_k^t − w^t. Comparing the subproblem (4) in DisDCA and the subproblem (5) in ADMM leads to the following observations. (1) Both aim at solving the same type of problem to increase the dual objective or decrease the primal objective; DisDCA uses only m randomly selected examples, while ADMM uses all examples. (2) However, the penalty solution ŵ^{t−1} and the penalty parameter are different. In DisDCA, ŵ^{t−1} is constructed by subtracting from the global solution the local solution defined by the dual variables α, while in ADMM it is constructed by subtracting from the global solution the local Lagrangian variables u. The penalty parameter in DisDCA is given by the regularization parameter λ, while in ADMM it is a parameter that needs to be specified by the user.
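For the ADMM side, the consensus step has a closed form for g(w) = λ‖w‖²/2. The helper below (our code, not the paper's) implements the global update w^t = ρK(w̄^t + ū^{t−1})/(ρK + λ) together with the dual updates u_k^t = u_k^{t−1} + w_k^t − w^t:

```python
import numpy as np

def admm_consensus_step(w_locals, u_locals, lam, rho):
    """Global/dual updates of consensus ADMM for g(w) = lam*||w||^2/2.

    w_locals, u_locals: lists of K vectors (local primal and dual variables).
    Returns the new global w and the list of updated dual variables u_k.
    The returned w minimizes lam/2*||w||^2 + rho/2*sum_k ||w_k + u_k - w||^2.
    """
    K = len(w_locals)
    w_bar = sum(w_locals) / K
    u_bar = sum(u_locals) / K
    w = rho * K * (w_bar + u_bar) / (rho * K + lam)   # closed-form minimizer
    u_new = [u_k + w_k - w for u_k, w_k in zip(u_locals, w_locals)]
    return w, u_new
```

Correctness is easy to check: the gradient λw − ρ Σ_k (w_k + u_k − w) vanishes at the returned w.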
Now let us explain the practical variant of DisDCA from the viewpoint of inexactly solving the subproblem (4). Note that if the optimal solution to (3) is denoted by α*_i, i = 1, . . . , m, then the optimal solution u* to (4) is given by u* = ŵ^{t−1} + (1/(λn_k)) Σ_{i=1}^m α*_i x_i. In fact, the updates at the t-th iteration of the practical variant of DisDCA optimize the subproblem (4) by the SDCA algorithm with only one pass over the sampled data points and an initialization of α_i^0 = α_i^{t−1}, i = 1, . . . , m. It means that the initial primal solution for solving the subproblem (3) is u^0 = ŵ^{t−1} + (1/(λn_k)) Σ_{i=1}^m α_i^{t−1} x_i = w^{t−1}. That explains the initialization step in Figure 2.
In a recent work [23] applying ADMM to solve the L2-SVM problem in the same distributed fashion, the authors exploited different strategies for solving the subproblem (5) associated with L2-SVM, among which the DCA algorithm with only one pass over all data points gives the best performance in terms of running time (e.g., it is better than DCA with several passes over all data points, and also better than a trust-region Newton method). From another point of view, this validates the practical variant of DisDCA. Finally, it is worth mentioning that unlike ADMM, whose performance is significantly affected by the value of the penalty parameter ρ, DisDCA is a parameter-free algorithm.
4 Experiments
In this section, we present experimental results to verify the theoretical results and the empirical performance of the proposed algorithms. We implement the algorithms in C++ with openMPI and run them on a cluster; on each machine, we launch only one process. The experiments are performed on two large data sets with different numbers of features, covtype and kdd. The covtype data set has a total of 581,012 examples and 54 features. The kdd data set, used in KDD Cup 2010, contains 19,264,097 training examples and 29,890,095 features. For the covtype data, we use 522,911 examples for training. We apply the algorithms to solve two SVM formulations, namely L2-SVM with squared hinge loss and L1-SVM with hinge loss, to demonstrate the capabilities of DisDCA in solving smooth loss functions and Lipschitz continuous loss functions. In the legends of the figures, we use DisDCA-b to denote the basic variant, DisDCA-p the practical variant, and DisDCA-a the aggressive variant of DisDCA [20].
Tradeoff between Communication and Computation To verify the convergence analysis, we show in Figures 3(a)-3(b) and 3(d)-3(e) the duality gap of the basic variant and the practical variant of the DisDCA algorithm versus the number of iterations, varying the number of samples m per iteration, the number of machines K, and the value of λ. The results verify the convergence bound in Theorem 1. When the values of m or K first increase, the performances improve. However, when their values exceed a certain number, the impact of increasing m or K diminishes. Additionally, the larger the value of λ, the wider the effective region of m and K. It is notable that the effective region of m and K of the practical variant is much larger than that of the basic variant. We also briefly report a running time result: to obtain an ε = 10⁻³ duality gap for optimizing L2-SVM on covtype data with λ = 10⁻³, the running times of DisDCA-p with m = 1, 10, 10², 10³ (fixing K = 10) are 30, 4, 0, 5 seconds³, respectively, and the running times with K = 1, 5, 10, 20 (fixing m = 100) are 3, 0, 0, 1 seconds, respectively. The speed-up gain on the kdd data from increasing m is even larger because the communication cost is much higher. In the supplement, we present more results visualizing the communication and computation tradeoff.
³ 0 seconds means less than 1 second. We exclude the time for computing the duality gap at each iteration.

Figure 3: (a,b): duality gap with varying m; (d,e): duality gap with varying K; (c,f): comparison of different algorithms for optimizing SVMs. More results can be found in the supplementary materials.

The Practical Variant vs the Basic Variant To further demonstrate the usefulness of the practical variant, we present a comparison between the practical variant and the basic variant for optimizing
the two SVM formulations in the supplementary material. We also include the performance of the aggressive variant proposed in [20], applying the aggressive updates on the m sampled examples in each machine without incurring additional communication cost. The results show that the practical variant converges much faster than both the basic variant and the aggressive variant.
Comparison with Other Baselines Lastly, we compare DisDCA with SGD-based and ADMM-based distributed algorithms running in the same distributed framework. For optimizing L2-SVM, we implement the stochastic average gradient (SAG) algorithm [15], which also enjoys linear convergence for smooth and strongly convex problems. We use the constant step size (1/L_s) suggested by the authors for obtaining good practical performance, where L_s denotes the smoothness parameter of the problem, set to 2R + λ given ‖x_i‖₂² ≤ R, ∀i. For optimizing L1-SVM, we compare to the stochastic Pegasos. For ADMM-based algorithms, we implement a stochastic ADMM [14] (ADMM-s) and a deterministic ADMM [23] (ADMM-dca) that employs the DCA algorithm for solving the subproblems. In the stochastic ADMM, there is a step size parameter η_t ∝ 1/√t; we choose the best initial step size in [10⁻³, 10³]. We run all algorithms on K = 10 machines and set m = 10⁴, λ = 10⁻⁶ for all stochastic algorithms. For the parameter ρ in ADMM, we find that ρ = 10⁻⁶ yields good performance after searching over a range of values. We compare DisDCA with SAG, Pegasos and ADMM-s in Figures 3(c) and 3(f)⁴, which clearly demonstrate that DisDCA is a strong competitor in optimizing SVMs. In the supplement, we compare DisDCA, setting m = n_k, against ADMM-dca with four different values of ρ = 10⁻⁶, 10⁻⁴, 10⁻², 1 on kdd. The results show that the performance deteriorates significantly if ρ is not appropriately set, while DisDCA produces comparable performance without additional effort in tuning the parameter.
5 Conclusions
We have presented a distributed stochastic dual coordinate ascent algorithm and its convergence rates, and analyzed the tradeoff between computation and communication. The practical variant has substantial improvements over the basic variant and other variants. We also made a comparison with other distributed algorithms and observed competitive performance.

⁴ The primal objective of Pegasos on covtype is above the display range.
References
[1] A. Agarwal and J. C. Duchi. Distributed delayed stochastic optimization. In CDC, pages 5451-5452, 2012.
[2] S. Boyd, N. Parikh, E. Chu, B. Peleato, and J. Eckstein. Distributed optimization and statistical learning via the alternating direction method of multipliers. Found. Trends Mach. Learn., 3:1-122, 2011.
[3] J. K. Bradley, A. Kyrola, D. Bickson, and C. Guestrin. Parallel coordinate descent for L1-regularized loss minimization. In ICML, 2011.
[4] C. T. Chu, S. K. Kim, Y. A. Lin, Y. Yu, G. R. Bradski, A. Y. Ng, and K. Olukotun. Map-Reduce for machine learning on multicore. In NIPS, pages 281-288, 2006.
[5] W. Deng and W. Yin. On the global and linear convergence of the generalized alternating direction method of multipliers. Technical report, 2012.
[6] M. Eberts and I. Steinwart. Optimal learning rates for least squares SVMs using Gaussian kernels. In NIPS, pages 1539-1547, 2011.
[7] D. Gabay and B. Mercier. A dual algorithm for the solution of nonlinear variational problems via finite element approximation. Comput. Math. Appl., 2:17-40, 1976.
[8] C.-J. Hsieh, K.-W. Chang, C.-J. Lin, S. S. Keerthi, and S. Sundararajan. A dual coordinate descent method for large-scale linear SVM. In ICML, pages 408-415, 2008.
[9] H. Daumé III, J. M. Phillips, A. Saha, and S. Venkatasubramanian. Protocols for learning classifiers on distributed data. JMLR Proceedings Track, 22:282-290, 2012.
[10] S. Lacoste-Julien, M. Jaggi, M. W. Schmidt, and P. Pletscher. Stochastic block-coordinate Frank-Wolfe optimization for structural SVMs. CoRR, abs/1207.4747, 2012.
[11] J. Langford, A. Smola, and M. Zinkevich. Slow learners are fast. In NIPS, pages 2331-2339, 2009.
[12] Z. Q. Luo and P. Tseng. On the convergence of the coordinate descent method for convex differentiable minimization. Journal of Optimization Theory and Applications, pages 7-35, 1992.
[13] G. Mann, R. McDonald, M. Mohri, N. Silberman, and D. Walker. Efficient large-scale distributed training of conditional maximum entropy models. In NIPS, pages 1231-1239, 2009.
[14] H. Ouyang, N. He, L. Tran, and A. G. Gray. Stochastic alternating direction method of multipliers. In ICML, pages 80-88, 2013.
[15] N. Le Roux, M. W. Schmidt, and F. Bach. A stochastic gradient method with an exponential convergence rate for finite training sets. In NIPS, pages 2672-2680, 2012.
[16] S. Shalev-Shwartz and T. Zhang. Stochastic dual coordinate ascent methods for regularized loss minimization. JMLR, 2013.
[17] S. Smale and D.-X. Zhou. Estimating the approximation error in learning theory. Anal. Appl. (Singap.), 1(1):17-41, 2003.
[18] K. Sridharan, S. Shalev-Shwartz, and N. Srebro. Fast rates for regularized objectives. In NIPS, pages 1545-1552, 2008.
[19] T. Suzuki. Dual averaging and proximal gradient descent for online alternating direction multiplier method. In ICML, pages 392-400, 2013.
[20] M. Takáč, A. S. Bijral, P. Richtárik, and N. Srebro. Mini-batch primal and dual methods for SVMs. In ICML, 2013.
[21] C. H. Teo, S. Vishwanathan, A. J. Smola, and Q. V. Le. Bundle methods for regularized risk minimization. JMLR, pages 311-365, 2010.
[22] K. I. Tsianos, S. Lawlor, and M. G. Rabbat. Communication/computation tradeoffs in consensus-based distributed optimization. In NIPS, pages 1952-1960, 2012.
[23] C. Zhang, H. Lee, and K. G. Shin. Efficient distributed linear classification algorithms via the alternating direction method of multipliers. In AISTATS, pages 1398-1406, 2012.
[24] M. Zinkevich, M. Weimer, A. Smola, and L. Li. Parallelized stochastic gradient descent. In NIPS, pages 2595-2603, 2010.
Locally Adaptive Bayesian Multivariate Time Series
Bruno Scarpa
Department of Statistical Sciences
University of Padua
Via Cesare Battisti 241, 35121, Padua, Italy
[email protected]
Daniele Durante
Department of Statistical Sciences
University of Padua
Via Cesare Battisti 241, 35121, Padua, Italy
[email protected]
David B. Dunson
Department of Statistical Science
Duke University
Durham, NC 27708-0251, USA
[email protected]
Abstract
In modeling multivariate time series, it is important to allow time-varying smoothness in the mean and covariance process. In particular, there may be certain time
intervals exhibiting rapid changes and others in which changes are slow. If such
locally adaptive smoothness is not accounted for, one can obtain misleading inferences and predictions, with over-smoothing across erratic time intervals and
under-smoothing across times exhibiting slow variation. This can lead to miscalibration of predictive intervals, which can be substantially too narrow or wide
depending on the time. We propose a continuous multivariate stochastic process
for time series having locally varying smoothness in both the mean and covariance matrix. This process is constructed utilizing latent dictionary functions in
time, which are given nested Gaussian process priors and linearly related to the
observed data through a sparse mapping. Using a differential equation representation, we bypass usual computational bottlenecks in obtaining MCMC and online
algorithms for approximate Bayesian inference. The performance is assessed in
simulations and illustrated in a financial application.
1
1.1
Introduction
Motivation and background
In analyzing multivariate time series data, collected in financial applications, monitoring of influenza
outbreaks and other fields, it is often of key importance to accurately characterize dynamic changes
over time in not only the mean of the different elements (e.g., assets, influenza levels at different
locations) but also the covariance. It is typical in many domains to cycle irregularly between periods of rapid and slow change; most statistical models are insufficiently flexible to capture such
locally varying smoothness in assuming a single bandwidth parameter. Inappropriately restricting
the smoothness to be constant can have a major impact on the quality of inferences and predictions,
with over-smoothing occurring during times of rapid change. This leads to an under-estimation of
uncertainty during such volatile times and an inability to accurately predict risk of extremal events.
There is a rich literature on modeling a p × 1 time-varying mean vector μ_t, covering multivariate generalizations of autoregressive models (VAR, e.g. [1]), Kalman filtering [2], nonparametric mean regression via Gaussian processes (GP) [3], polynomial splines [4], smoothing splines [5] and kernel smoothing methods [6]. Such approaches perform well for slowly-changing trajectories with
constant bandwidth parameters regulating, implicitly or explicitly, global smoothness; however, our interest is in allowing smoothness to vary locally in continuous time. Possible extensions for local adaptivity include free-knot splines (MARS) [7], which perform well in simulations, but the different strategies proposed to select the number and the locations of knots (stepwise knot selection [7], Bayesian knot selection [8], or via MCMC methods [9]) prove computationally intractable for moderately large p. Other flexible approaches include wavelet shrinkage [10], local polynomial fitting via variable bandwidth [11], and linear combinations of kernels with variable bandwidths [12].
Once μ_t has been estimated, the focus shifts to the p × p time-varying covariance matrix Σ_t. This is of particular interest in applications where volatilities and co-volatilities evolve through non-constant paths. Multivariate generalizations of GARCH models (DVEC [13], BEKK [14], DCC-GARCH [15]), exponential smoothing (EWMA, e.g. [1]) and approaches based on dimensionality reduction through a latent factor formulation (PC-GARCH [16] and O-GARCH [17]-[18]) represent common approaches in multivariate stochastic volatility modeling. Although widely used in practice, such approaches suffer from tractability issues arising from richly parameterized formulations (DVEC and BEKK), and lack of flexibility resulting from the adoption of single time-constant bandwidth parameters (EWMA), time-constant factor loadings and uncorrelated latent factors (PC-GARCH, O-GARCH), as well as the use of the same parameters to regulate the evolution of the time-varying conditional correlations (DCC-GARCH). Such models fall far short of our goal of allowing Σ_t to be fully flexible, with the dependence between Σ_t and Σ_{t+Δ} varying with not just the time-lag Δ but also with time. In addition, these models do not handle missing data easily and tend to require long series for accurate estimation [16]. Bayesian dynamic factor models for multivariate stochastic volatility [19] lead to apparently improved performance in portfolio allocation by allowing the dependence between the covariance matrices Σ_t and Σ_{t+Δ} to vary as a function of both t and Δ. However, the result is an extremely richly parameterized and computationally challenging model, with selection of the number of factors via cross-validation. Our aim is instead to develop continuous-time stochastic processes for μ(t) and Σ(t) with locally-varying smoothness.
Wilson and Ghahramani [20] join machine learning and econometrics efforts by proposing a model for both mean and covariance regression in multivariate time series, improving on previous work of Bru [21] on Wishart processes in terms of computational tractability and scalability, and allowing a more complex structure of dependence between Σ(t) and Σ(t + Δ). Specifically, they propose a continuous-time Generalised Wishart Process (GWP), which defines a collection of positive semi-definite random matrices Σ(t) with Wishart marginals. Nonparametric mean regression for μ(t) is also considered via GP priors; however, the trajectories of means and covariances inherit the smooth behavior of the underlying Gaussian processes, limiting the flexibility of the approach in times exhibiting sharp changes.
Fox and Dunson [22] propose an alternative Bayesian covariance regression (BCR) model, which
defines the covariance matrix of a vector of p variables at time ti as a regularized quadratic function
of time-varying loadings in a latent factor model, characterizing the latter as a sparse combination
of a collection of unknown Gaussian process dictionary functions. More specifically, given a set of
p × 1 vectors of observations yi ∼ Np(μ(ti), Σ(ti)), where i = 1, ..., T indexes time, they define

cov(yi | ti = t) = Σ(t) = Λ ξ(t) ξ(t)ᵀ Λᵀ + Σ0,   t ∈ T ⊂ ℜ+,   (1)
where Λ is a p × L matrix of coefficients, ξ(t) is a time-varying L × K matrix with unknown
continuous dictionary function entries ξlk : T → ℜ, and finally Σ0 is a positive definite diagonal
matrix. Model (1) can be induced by marginalizing out the latent factors ηi in

yi = Λ ξ(ti) ηi + εi,   (2)

with ηi ∼ NK(0, IK) and εi ∼ Np(0, Σ0). A generalization includes a nonparametric mean regression
by assuming ηi = ψ(ti) + νi, where νi ∼ NK(0, IK) and ψ(t) is a K × 1 matrix with unknown
continuous entries ψk : T → ℜ that can be modeled in a related manner to the dictionary elements
in ξ(t). The induced mean of yi conditionally on ti = t, marginalizing out ηi, is then

E(yi | ti = t) = μ(t) = Λ ξ(t) ψ(t).   (3)
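As a quick numerical check of the induced moments, model (2) can be simulated directly. The sketch below is our own illustration, not the authors' code; the dimensions p, L, K and the Monte Carlo size are all assumed toy values. It verifies that the sample covariance of draws from (2) matches Σ(t) in (1).

```python
import numpy as np

# Simulate y_i = Lambda xi(t) eta_i + eps_i at one fixed t and compare the sample
# covariance with the induced covariance (1): Sigma(t) = Lambda xi xi^T Lambda^T + Sigma0.
rng = np.random.default_rng(0)
p, L, K, n = 5, 3, 2, 200_000                      # toy dimensions and sample size

Lam = rng.normal(size=(p, L))                      # p x L coefficient matrix Lambda
xi_t = rng.normal(size=(L, K))                     # L x K dictionary values xi(t)
Sigma0 = np.diag(rng.uniform(0.1, 0.3, size=p))    # positive definite diagonal noise

eta = rng.normal(size=(n, K))                      # eta_i ~ N_K(0, I_K)
eps = rng.normal(size=(n, p)) @ np.sqrt(Sigma0)    # eps_i ~ N_p(0, Sigma0)
y = eta @ (Lam @ xi_t).T + eps                     # draws from model (2)

Sigma_model = Lam @ xi_t @ xi_t.T @ Lam.T + Sigma0  # equation (1)
Sigma_hat = np.cov(y, rowvar=False)                 # empirical covariance
err = np.max(np.abs(Sigma_hat - Sigma_model))
```

The agreement improves as the Monte Carlo size n grows, as expected from (1).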
1.2 Our modeling contribution
We follow the lead of [22] in using a nonparametric latent factor model as in (2), but induce
fundamentally different behavior by carefully modifying the priors Πξ and Πψ for the dictionary
elements ξT = {ξ(t), t ∈ T } and ψT = {ψ(t), t ∈ T }, respectively. We additionally develop a
different and much more computationally efficient approach to computation under this new model.
Fox and Dunson [22] consider the dictionary functions ξlk and ψk, for each l = 1, ..., L and
k = 1, ..., K, as independent Gaussian processes GP(0, c), with c the squared exponential correlation
function having c(x, x′) = exp(−κ‖x − x′‖₂²). This approach provides a continuous time
and flexible model that accommodates missing data and scales to moderately large p, but the
proposed priors for the dictionary functions assume a stationary dependence structure and hence
induce prior distributions Πξ and Πψ on ξT and ψT through (1) and (3) that tend to under-smooth
during periods of stability and over-smooth during periods of sharp changes. Moreover, the well
known computational problems with usual GP regression are inherited, leading to difficulties in
scaling to long series and issues in mixing of MCMC algorithms for posterior computation.
In our work, we address these problems to develop a novel mean-covariance stochastic process with
locally-varying smoothness by replacing the GP priors for ξT = {ξ(t), t ∈ T } and ψT = {ψ(t), t ∈
T } with nested Gaussian process (nGP) priors [23], with the goal of maintaining simple computation
and allowing both covariances and means to vary flexibly over continuous time. The nGP provides
a highly flexible prior on the dictionary functions whose smoothness, explicitly modeled by their
derivatives via stochastic differential equations, is expected to be centered on a local instantaneous
mean function, a higher-level Gaussian process that induces adaptivity to locally-varying smoothing.
Restricting our attention to the elements of the prior Πξ (the same holds for Πψ), the Markovian
property implied by the stochastic differential equations allows a simple state space formulation of
the nGP, in which the prior for ξlk, along with its first order derivative ξ′lk and the locally
instantaneous mean Alk(t) = E[ξ′lk(t) | Alk(t)], follows the approximated state equation

[ ξlk(ti+1)  ]   [ 1  δi  0  ] [ ξlk(ti)  ]   [ 0  0 ]
[ ξ′lk(ti+1) ] = [ 0  1   δi ] [ ξ′lk(ti) ] + [ 1  0 ] [ ωi,ξlk ]
[ Alk(ti+1)  ]   [ 0  0   1  ] [ Alk(ti)  ]   [ 0  1 ] [ ωi,Alk ],   (4)

where [ωi,ξlk, ωi,Alk]ᵀ ∼ N2(0, Vi,lk), with Vi,lk = diag(σ²ξlk δi, σ²Alk δi) and δi = ti+1 − ti. This
formulation allows continuous time and an irregular grid of observations over t by relating the
latent states at i + 1 to those at i through the distance δi between ti+1 and ti, with ti ∈ T the
time of the ith observation. Moreover, compared to [23] our approach extends
the analysis to the multivariate case and accommodates locally adaptive smoothing not only on
the mean but also on the time-varying variance and covariance functions. Finally, the state space
formulation allows the implementation of an online updating algorithm and facilitates the definition
of a simple Gibbs sampler, reducing the GP computational burden involving matrix inversions
from O(T³) to O(T), with T denoting the length of the time series.
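The recursion in (4) is easy to simulate on an irregular grid. The sketch below is a toy illustration (the variances and the time grid are all assumed values): it propagates the state (ξlk, ξ′lk, Alk) exactly as in (4), with noise entering only the derivative and the instantaneous mean.

```python
import numpy as np

# Simulate the approximated state equation (4) for a single dictionary element
# on an irregular time grid; one constant-size update per step.
rng = np.random.default_rng(1)
t = np.sort(rng.uniform(0.0, 10.0, size=100))   # irregular observation times
sig_xi, sig_A = 0.1, 0.05                       # sigma_{xi_lk}, sigma_{A_lk} (toy)

state = np.zeros(3)                             # (xi_lk, xi'_lk, A_lk) at t_0
path = [state.copy()]
for i in range(len(t) - 1):
    d = t[i + 1] - t[i]                         # delta_i = t_{i+1} - t_i
    T = np.array([[1.0, d, 0.0],
                  [0.0, 1.0, d],
                  [0.0, 0.0, 1.0]])             # transition matrix in (4)
    R = np.array([[0.0, 0.0],
                  [1.0, 0.0],
                  [0.0, 1.0]])                  # noise hits only xi' and A
    w = rng.normal(size=2) * np.sqrt([sig_xi**2 * d, sig_A**2 * d])  # V_{i,lk}
    state = T @ state + R @ w
    path.append(state.copy())
path = np.array(path)                           # (100, 3) state trajectory
```

With the noise switched off, the recursion simply integrates the derivative, which is a useful sanity check on the transition matrix.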
1.3 Bayesian inference and online learning
For fixed truncation levels L? and K ? , the algorithm for posterior computation alternates between
a simple and efficient simulation smoother step [24] to update the state space formulation of the
nGP, and standard Gibbs sampling steps for updating the parametric components of the model.
Specifically, considering the observations (yi , ti ) for i = 1, ..., T :
A. Given Λ and {ηi}Ti=1, a multivariate version of the MCMC algorithm proposed by Zhu and Dunson [23] draws posterior samples from each dictionary element's function {ξlk(ti)}Ti=1, its
first order derivative {ξ′lk(ti)}Ti=1, the corresponding instantaneous mean {Alk(ti)}Ti=1, the
variances in the state equations σ²ξlk, σ²Alk (for which inverse Gamma priors are assumed),
and the variances of the error terms in the observation equation σ²j with j = 1, ..., p.
B. If the mean process need not be estimated, recalling the prior ηi ∼ NK∗(0, IK∗) and model
(2), the standard conjugate posterior distribution from which to sample the vector of latent
factors for each i, given Λ, {σj−2}pj=1, {yi}Ti=1 and {ξ(ti)}Ti=1, is Gaussian.
Otherwise, if we want to incorporate the mean regression, we implement a block sampling
of {ψ(ti)}Ti=1 and {ηi}Ti=1, following a similar approach used for drawing samples from
the dictionary elements process.
Figure 1: For the locally varying smoothness simulation (top) and the smooth simulation (bottom), plots of
truth (black) and posterior mean respectively of LBCR (solid red line) and BCR (solid green line) for
selected components of the variance (left), covariance (middle), and mean (right). For both approaches
the dotted lines represent the 95% highest posterior density intervals.
C. Finally, conditioned on {yi}Ti=1, {ηi}Ti=1, {σj−2}pj=1 and {ξ(ti)}Ti=1, and recalling the shrinkage
prior for the elements of Λ defined in [22], we update Λ, each local shrinkage hyperparameter φjl, and the global shrinkage hyperparameters τl via standard conjugate analysis.
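Step B above is a standard conjugate Gaussian update: writing Θi = Λξ(ti), the posterior of ηi given the rest is N(V Θiᵀ Σ0⁻¹ yi, V) with V = (IK + Θiᵀ Σ0⁻¹ Θi)⁻¹. The following is a numerical sketch of that update for a single observation, with assumed toy values throughout.

```python
import numpy as np

# Conjugate update of step B for one observation y_i, with Theta_i = Lambda xi(t_i),
# prior eta_i ~ N(0, I_K) and likelihood y_i ~ N(Theta_i eta_i, Sigma0). Toy values.
rng = np.random.default_rng(3)
p, K = 6, 2
Theta = rng.normal(size=(p, K))                         # Theta_i = Lambda xi(t_i)
Sigma0_inv = np.diag(1.0 / rng.uniform(0.05, 0.2, p))   # precision of the noise
eta_true = rng.normal(size=K)
y = Theta @ eta_true + rng.multivariate_normal(np.zeros(p), np.linalg.inv(Sigma0_inv))

V = np.linalg.inv(np.eye(K) + Theta.T @ Sigma0_inv @ Theta)  # posterior covariance
m = V @ Theta.T @ Sigma0_inv @ y                             # posterior mean
eta_draw = rng.multivariate_normal(m, V)                     # one Gibbs draw
```

As the noise precision grows, the posterior mean approaches the least squares solution, the familiar ridge-to-OLS limit.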
The problem of online updating represents a key point in multivariate time series with high frequency
data. Referring to our formulation, we are interested in updating an approximated posterior
distribution for μ(tT+h) and Σ(tT+h), with h = 1, ..., H, once a new vector of observations {yi}T+Hi=T+1 is
available, instead of rerunning posterior computation for the whole time series.
Since as T increases the posterior for the time-stationary parameters rapidly becomes concentrated,
we fix these parameters at their estimates (Λ̂, Σ̂0, σ̂²ξlk, σ̂²Alk, σ̂²ψk, σ̂²Bk) and dynamically update the
dictionary functions, alternating between steps A and B for the new set of observations. To initialize
the algorithm at T + 1 we propose to run the online updating for {yi}T+Hi=T−k, with k small, and
to choose a diffuse but proper prior for the initial states at T − k. Such an approach is suggested to reduce
the problem related to the larger conditional variances (see, e.g., [25]) of the latent states at the end
of the sample (i.e. at T), which may affect the initial distributions at T + 1. The online algorithm is
also efficient in exploiting the advantages of the state space formulation for the dictionary functions,
requiring matrix inversion computations of order depending only on the length of the additional
sequence H and on the number k of last observations used to initialize the algorithm.
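The constant per-step cost of such an online pass can be illustrated with a forward Kalman-style filter on the state space formulation above. The sketch below is not the simulation smoother of [24]; the scalar observation model and all noise settings are assumed toy values. It filters a single dictionary element observed with Gaussian noise, at constant cost per observation.

```python
import numpy as np

def ngp_kalman_filter(y, t, sig_xi=1.0, sig_A=0.5, sig_obs=0.2):
    """Forward Kalman pass for the 3-dim state (xi, xi', A) of equation (4), with
    scalar observations y_i = xi(t_i) + N(0, sig_obs^2) noise (an assumed toy
    observation model). Each step manipulates only 3x3 matrices, so the cost
    grows linearly with the number of processed observations."""
    m = np.zeros(3)                  # filtered state mean
    P = np.eye(3) * 10.0             # diffuse but proper initial covariance (assumed)
    H = np.array([1.0, 0.0, 0.0])    # observe xi only
    means = []
    for i in range(len(y)):
        if i > 0:
            d = t[i] - t[i - 1]
            T = np.array([[1.0, d, 0.0], [0.0, 1.0, d], [0.0, 0.0, 1.0]])
            Q = np.diag([0.0, sig_xi**2 * d, sig_A**2 * d])
            m, P = T @ m, T @ P @ T.T + Q        # predict
        S = H @ P @ H + sig_obs**2               # innovation variance (scalar)
        K = P @ H / S                            # Kalman gain
        m = m + K * (y[i] - H @ m)               # update
        P = P - np.outer(K, H @ P)
        means.append(m.copy())
    return np.array(means)

rng = np.random.default_rng(2)
t = np.linspace(0.0, 1.0, 50)
y = np.sin(4 * t) + 0.2 * rng.normal(size=50)
f = ngp_kalman_filter(y, t)
```

Warm-starting such a pass from the last k filtered states is exactly the kind of initialization described above.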
2 Simulation studies
The aim of the following simulation studies is to compare the performance of our proposal (LBCR,
locally adaptive Bayesian covariance regression) with respect to BCR and to the models for
multivariate stochastic volatility most widely used in practice, specifically EWMA, PC-GARCH,
GO-GARCH and DCC-GARCH. In order to assess whether and to what extent LBCR can accommodate,
in practice, even sharp changes in the time-varying covariances and means, and to evaluate the costs
associated with our flexible approach in settings where the mean and covariance functions do not
require locally adaptive estimation techniques, we will focus on two different sets of simulated data.
The first dataset consists of 5-dimensional observations yi for each ti ∈ To = {1, 2, ..., 100}, from
the latent factor model in (2) with Σ(t) defined as in (1). To allow sharp changes of the covariances
and means in the generating mechanism, we consider a 2 × 2 (i.e. L = K = 2) matrix {ξ(ti)}100i=1
of time-varying functions adapted from Donoho and Johnstone [26] with locally-varying smoothness
(more specifically we choose "bumps" functions, also to mimic possible behavior in practical
settings). The second set of simulated data is the same dataset of 10-dimensional observations yi
Table 1: Summaries of the standardized squared errors.

Covariance Σ(ti):
                  Locally varying smoothness       Constant smoothness
               mean   q0.9   q0.95     max      mean   q0.9   q0.95    max
EWMA           1.37   2.28    5.49   85.86     0.030  0.081  0.133   1.119
PC-GARCH       1.75   2.49    6.48  229.50     0.018  0.048  0.076   0.652
GO-GARCH       2.40   3.66   10.32  173.41     0.043  0.104  0.202   1.192
DCC-GARCH      1.75   2.21    6.95  226.47     0.022  0.057  0.110   0.466
BCR            1.80   2.25    7.32  142.26     0.009  0.019  0.039   0.311
LBCR           0.90   1.99    4.52   36.95     0.009  0.022  0.044   0.474

Mean μ(ti):
SMOOTH SPLINE  0.064  0.128  0.186   2.595     0.007  0.019  0.027   0.077
BCR            0.087  0.185  0.379   2.845     0.005  0.015  0.024   0.038
LBCR           0.062  0.123  0.224   2.529     0.005  0.017  0.026   0.050
investigated in Fox and Dunson [22], with smooth GP dictionary functions for each element of the
5 × 4 (i.e. L = 5, K = 4) matrices {ξ(ti)}100i=1.
Posterior computation, both for LBCR and BCR, is performed by assuming diffuse but proper priors
and by using truncation levels L∗ = K∗ = 2 for the first dataset and L∗ = 5, K∗ = 4 for the second
(at higher level settings we found that the shrinkage prior on Λ results in posterior samples of
the elements in the additional columns concentrated around 0). For the first dataset we ran 50,000
Gibbs iterations with a burn-in of 20,000 and thinning every 5 samples, while for the second we
followed Fox and Dunson [22] in considering 10,000 Gibbs iterations, which proved to be enough to
reach convergence, and discarded the first 5,000 as burn-in. In the first set of simulated data, given
the substantial independence between samples after thinning the chain, we analyzed mixing by the
Gelman-Rubin procedure [27], based on potential scale reduction factors computed for each chain
by splitting the sampled quantities in 6 pieces of the same length. The analysis shows more problematic
mixing for BCR with respect to LBCR. Specifically, in LBCR 95% of the chains have a potential
scale reduction factor lower than 1.35, with a median equal to 1.11, while in BCR the 95th quantile is 1.44
and the median is equal to 1.18. Less problematic is the mixing for the second set of simulated data,
with potential scale reduction factors having median equal to 1.05 for both approaches and 95th
quantiles equal to 1.15 and 1.31 for LBCR and BCR, respectively.
As regards the other approaches, EWMA has been implemented by choosing the smoothing parameter
λ that minimizes the mean squared error (MSE) between the estimated covariances and the
true values. The PC-GARCH algorithm follows the steps provided by Burns [16], with a GARCH(1,1)
assumed for the conditional volatilities of each single time series and of the principal components.
GO-GARCH and DCC-GARCH recall the formulations provided by van der Weide [18] and Engle
[15] respectively, assuming a GARCH(1,1) for the conditional variances of the processes analyzed,
which proves to be a correct choice in many financial applications and also in our setting. Differently
from LBCR and BCR, the previous approaches do not model explicitly the mean process {μ(ti)}100i=1
but work directly on the innovations {yi − μ̂(ti)}100i=1. Therefore in these cases we first model the
conditional mean via smoothing spline and in a second step we estimate the models for the innovations.
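For reference, the EWMA covariance estimate used as a competitor is a one-line recursion. The sketch below is only illustrative; the λ shown is a common textbook value, not the MSE-optimal value used in the study.

```python
import numpy as np

def ewma_covariances(y, lam=0.94):
    """Exponentially weighted moving average covariance estimates:
    S_i = lam * S_{i-1} + (1 - lam) * y_i y_i^T, applied to demeaned returns y.
    Returns one p x p matrix per time point."""
    T, p = y.shape
    S = np.zeros((T, p, p))
    S[0] = np.outer(y[0], y[0])
    for i in range(1, T):
        S[i] = lam * S[i - 1] + (1 - lam) * np.outer(y[i], y[i])
    return S

rng = np.random.default_rng(4)
y = rng.normal(scale=0.01, size=(300, 5))   # toy demeaned returns
S = ewma_covariances(y)
```

Each estimate is a convex combination of rank-one positive semi-definite terms, hence always positive semi-definite, but the single time-constant λ is exactly the rigidity criticized above.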
The smoothing parameter for spline estimation has been set to 0.7, which was found to be appropriate
to reproduce the true dynamic of {μ(ti)}100i=1. Figure 1 compares, in both simulated samples, true
and posterior means of μ(t) and Σ(t) over the predictor space To, together with the point-wise 95%
highest posterior density (hpd) intervals for LBCR and BCR. From the upper plots we can clearly
note that our approach is able to capture conditional heteroscedasticity as well as mean patterns,
also in correspondence of sharp changes in the time-varying true functions. The major differences
compared to the true values can be found at the beginning and at the end of the series and are likely
to be related to the structure of the simulation smoother, which causes a widening of the credibility
bands at the very end of the series; for references see Durbin and Koopman [25]. However, even
in the most problematic cases, the true values are within the bands of the 95% hpd intervals. Much
more problematic is the behavior of the posterior distributions for BCR, which badly over-smooth
Figure 2: For 2 NSI (USA NASDAQ and ITALY FTSE MIB), posterior mean (black) and 95% hpd intervals (dotted red) for the variances {Σjj(ti)}415i=1.
both the covariance and mean functions, leading also to many 95% hpd intervals not containing the true
values. The bottom plots in Figure 1 show that the performance of our approach is very close to that
of BCR when data are simulated from a model where the covariances and means evolve smoothly
across time and local adaptivity is not required. This happens even if the hyperparameters are set in
order to maintain separation between the nGP and GP priors, suggesting large support for LBCR.
The comparison of the summaries of the squared errors between the true values {μ(ti)}100i=1 and
{Σ(ti)}100i=1 and the estimated quantities {μ̂(ti)}100i=1 and {Σ̂(ti)}100i=1, standardized with the range of the
true underlying processes rμ = maxi,j{μj(ti)} − mini,j{μj(ti)} and rΣ = maxi,j,k{Σj,k(ti)} −
mini,j,k{Σj,k(ti)} respectively, once again confirms the overall better performance of our approach
with respect to all the considered competitors. Table 1 shows that, when local adaptivity is required,
LBCR provides a superior performance having standardized residuals lower than those of the other
approaches. EWMA seems to provide quite accurate estimates; however, it is important to underline
that we chose the optimal smoothing parameter λ in order to minimize the MSE between the estimated
and true parameters, which are clearly not known in practical applications. Different values of λ
significantly reduce the performance of EWMA, which also shows a lack of robustness. The closeness
of LBCR and BCR in the constant smoothness dataset confirms the flexibility of LBCR and
highlights the better performance of the two approaches with respect to the other competitors also
when smooth processes are investigated.
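The summaries reported in Table 1 can be computed from any estimate as follows. This sketch uses synthetic inputs in place of the true and estimated processes, and adopts one plausible reading of the standardization (error divided by the range, then squared).

```python
import numpy as np

def standardized_error_summaries(true_vals, est_vals):
    """Squared errors standardized by the range of the true process, summarized
    by mean, 0.9/0.95 quantiles and max, in the spirit of Table 1."""
    r = true_vals.max() - true_vals.min()       # range of the true process
    se = ((est_vals - true_vals) / r) ** 2
    return {"mean": se.mean(),
            "q0.9": np.quantile(se, 0.9),
            "q0.95": np.quantile(se, 0.95),
            "max": se.max()}

rng = np.random.default_rng(5)
mu_true = np.sin(np.linspace(0.0, 6.0, 100))[:, None] * np.ones((1, 5))
mu_est = mu_true + 0.1 * rng.normal(size=mu_true.shape)  # stand-in estimate
summ = standardized_error_summaries(mu_true, mu_est)
```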
3 Application to National Stock Market Indices (NSI)
National Stock Indices represent technical tools that allow, through the synthesis of numerous data
on the evolution of the various stocks, the detection of underlying trends in the financial market, with
reference to a specific basis of currency and time. In this application we focus our attention on
the multivariate weekly time series of the main 33 (i.e. p = 33) National Stock Indices from
12/07/2004 to 25/06/2012, downloaded from http://finance.yahoo.com.

We consider the heteroscedastic model for the log returns yi ∼ N33(μ(ti), Σ(ti)) for i = 1, ..., 415
and ti in the discrete set To = {1, 2, ..., 415}, where μ(ti) and Σ(ti) are given in (3) and (1),
respectively. Posterior computation is performed by using the same settings of the first simulation
study and fixing K∗ = 4 and L∗ = 5 (which we found to be sufficiently large from the fact that the
posterior samples of the last few columns of Λ assumed values close to 0). Missing values in our
dataset do not represent a limitation, since the Bayesian approach allows us to update our posterior
considering only the observed data. We run 10,000 Gibbs iterations with a burn-in of 2,500.
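The weekly log returns are obtained from index levels in the usual way. The sketch below uses a synthetic price path and makes no claim about the actual data preprocessing; missing index levels would simply produce missing returns, which the sampler can skip.

```python
import numpy as np

# Log returns y_i = log(P_i) - log(P_{i-1}) from a synthetic weekly price path.
rng = np.random.default_rng(6)
prices = 100 * np.exp(np.cumsum(0.01 * rng.normal(size=(416, 3)), axis=0))
log_ret = np.diff(np.log(prices), axis=0)   # 416 levels -> 415 returns
```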
Examination of trace plots for {μ(ti)}415i=1 and {Σ(ti)}415i=1 showed no evidence against convergence.
Posterior distributions for the variances in Figure 2 show that we are clearly able to capture the
rapid changes in the dynamics of volatilities that occur during the world financial crisis of 2008,
in early 2010 with the Greek debt crisis and in the summer of 2011 with the financial speculation
in government bonds of European countries together with the rejection of the U.S. budget and the
downgrading of the United States rating. Similar conclusions hold for the posterior distributions of
the trajectories of the means, with rapid changes detected in correspondence of the world financial
crisis in 2008.
Figure 3: For the two panels (LBCR and BCR), black line: median of correlations between USA NASDAQ
and the other 32 NSI based on the posterior mean of {Σ(ti)}415i=1. Red lines: 25%, 75% (dotted lines) and 50% (solid line) quantiles
of correlations between USA NASDAQ and European countries (without considering Greece and
Russia). Green lines: 25%, 75% (dotted lines) and 50% (solid line) quantiles of correlations between
USA NASDAQ and the countries of Southeast Asia (Asian Tigers and India). The letters A-G mark the events discussed in the text.
From the correlations between NASDAQ and the other National Stock Indices (based on the
posterior mean {Σ̂(ti)}415i=1 of the covariance function) in Figure 3, we can immediately notice the
presence of a clear geo-economic structure in world financial markets (more evident in LBCR than
in BCR), where the dependence between the U.S. and European countries is systematically higher
than that of South East Asian Nations (Economic Tigers), showing also different reactions to crises.
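The correlation paths of Figure 3 follow from the covariance estimates by the usual normalization Σjk/√(Σjj Σkk); a minimal sketch on random positive definite matrices:

```python
import numpy as np

def correlation_path(Sigmas, j):
    """Given a (T, p, p) array of covariance matrices, return the (T, p) array of
    correlations between series j and every series: Sigma_jk / sqrt(Sigma_jj Sigma_kk)."""
    sd = np.sqrt(np.einsum('tii->ti', Sigmas))   # (T, p) standard deviations
    return Sigmas[:, j, :] / (sd * sd[:, [j]])

rng = np.random.default_rng(7)
A = rng.normal(size=(10, 4, 4))
Sigmas = A @ np.swapaxes(A, 1, 2) + 4 * np.eye(4)  # random SPD covariance matrices
C = correlation_path(Sigmas, j=0)
```

Applied to the posterior mean covariances, one correlation path per pair of indices is obtained at no extra modeling cost.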
The flexibility of the proposed approach and the possibility of accommodating varying smoothness
in the trajectories over time allow us to obtain a good characterization of the dynamic dependence
structure, in accordance with the major theories on financial crises. The left plot in Figure 3 shows how the
change of regime in correlations occurs exactly in correspondence to the burst of the U.S. housing
bubble (A), in the second half of 2006. Moreover, we can immediately notice that the correlations
among financial markets increase significantly during the crises, showing a clear international financial
contagion effect in agreement with other theories on financial crises. As expected, the persistence
of high levels of correlation is evident during the global financial crisis between late-2008 and
end-2009 (C), at the beginning of which our approach also captures a dramatic change in the correlations
between the U.S. and the Economic Tigers, which leads to levels close to those of Europe. Further rapid
changes are identified in correspondence of the Greek crisis (D), the worsening of the European sovereign-debt crisis and the rejection of the U.S. budget (F), and the recent crisis of credit institutions in Spain
together with the growing financial instability in Eurozone (G). Finally, even in the period of U.S.
financial reform launched by Barack Obama and EU efforts to save Greece (E), we can notice two
peaks representing respectively Irish debt crisis and Portugal debt crisis. BCR, as expected, tends
to over-smooth the dynamic dependence structure during the financial crisis, proving to be not able
to model the sharp change in the correlations between USA NASDAQ and Economic Tigers during
late-2008, and the two peaks in (E) at the beginning of 2011.
The possibility of quickly updating the estimates and the predictions as soon as new data arrive represents
a crucial aspect in obtaining quantitative information about future scenarios of the crisis
in financial markets. To answer this goal, we apply the proposed online updating algorithm to the
new set of weekly observations {yi}422i=416 from 02/07/2012 to 13/08/2012, conditioning on posterior
estimates of the Gibbs sampler based on the observations {yi}415i=1 available up to 25/06/2012.
We initialized the simulation smoother algorithm with the last 8 observations of the previous sample.
Plots at the top of Figure 4 show, for 3 selected National Stock Indices, the new observed log
returns {yji}422i=416 together with the mean and the 2.5% and 97.5% quantiles of their marginal and
conditional distributions. We use standard formulas of the multivariate normal distribution based
on the posterior mean of the updated {μ(ti)}422i=416 and {Σ(ti)}422i=416 after 5,000 Gibbs iterations
with a burn-in of 500. We can clearly notice the good performance of our proposed online updating
algorithm in obtaining a characterization of the distribution of new observations. Also note
that the multivariate approach, together with a flexible model for the mean and covariance, allows
for significant improvements when the conditional distribution of an index given the others is analyzed.
To obtain further information about the predictive performance of LBCR, we can easily
use our online updating algorithm to obtain h step-ahead predictions for μ(tT+h|T) and Σ(tT+h|T)
with h = 1, ..., H. In particular, referring to Durbin and Koopman [25], we can generate posterior
Figure 4: Top: for 3 selected NSI (USA NASDAQ, INDIA BSE30, FRANCE CAC40), plot of the observed log returns (black) together with the mean
and the 2.5% and 97.5% quantiles of the marginal distribution (red) and of the conditional distribution
given the other 32 NSI (green), yji | yi,−j with yi,−j = {yqi, q ≠ j}, based on the posterior mean of
{μ(ti)}422i=416 and {Σ(ti)}422i=416 from the online updating procedure for the new observations from
02/07/2012 to 13/08/2012. Bottom: boxplots of the one step ahead prediction errors for the 33
NSI computed with 3 different methods.
samples from Σ(tT+h|T) and μ(tT+h|T) for h = 1, ..., H merely by treating {yi}T+Hi=T+1 as missing
values in the proposed online updating algorithm. Here, we consider the one step ahead prediction
(i.e. H = 1) problem for the new observations. More specifically, for each i from 415 to 421, we
update the mean and covariance functions conditioning on the information up to ti through the online
algorithm, and then obtain the predicted posterior distribution for μ(ti+1|i) and Σ(ti+1|i) by adding
to the sample considered for the online updating a last column yi+1 of missing values. Plots at the
bottom of Figure 4 show the boxplots of the one step ahead prediction errors for the 33 NSI, obtained
as the difference between the predicted value ŷj,i+1|i and, once available, the observed log
return yj,i+1, with i + 1 = 416, ..., 422 corresponding to the weeks from 02/07/2012 to 13/08/2012.
In (a) we forecast the future log returns with the unconditional mean {ŷi+1}421i=415 = 0, which is
what is often done in practice under the general assumption of zero mean, stationary log returns. In
(b) we consider ŷi+1|i = μ̂(ti+1|i), the posterior mean of the one step ahead predictive distribution
of μ(ti+1|i), obtained from the previously proposed approach after 5,000 Gibbs iterations with a burn-in
of 500. Finally, in (c) we suppose that the log returns of all National Stock Indices except that of
country j (i.e. yj,i+1) become available at ti+1 and, considering yi+1|i ∼ Np(μ̂(ti+1|i), Σ̂(ti+1|i)),
with μ̂(ti+1|i) and Σ̂(ti+1|i) the posterior means of the one step ahead predictive distributions of
μ(ti+1|i) and Σ(ti+1|i) respectively, we forecast ŷj,i+1 with the conditional mean of yj,i+1 given the
other log returns at time ti+1. Prediction with the unconditional mean (a) seems to lead to over-predicted
values, while our approach (b) provides median-unbiased predictions. Moreover, the combination of
our approach with the use of the conditional distribution of one return given the others (c) further improves
forecasts, also reducing the variability of the predictive distribution. We additionally obtain
well calibrated predictive intervals, unlike competing methods.
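Method (c) rests on the standard Gaussian conditioning formula E[yj | y−j] = μj + Σj,−j Σ−j,−j⁻¹ (y−j − μ−j); a sketch with assumed toy values:

```python
import numpy as np

def conditional_mean(mu, Sigma, y, j):
    """E[y_j | y_{-j}] for y ~ N_p(mu, Sigma): the usual Gaussian conditioning
    formula mu_j + Sigma_{j,-j} Sigma_{-j,-j}^{-1} (y_{-j} - mu_{-j})."""
    idx = [k for k in range(len(mu)) if k != j]
    S_inv = np.linalg.inv(Sigma[np.ix_(idx, idx)])
    return mu[j] + Sigma[j, idx] @ S_inv @ (y[idx] - mu[idx])

rng = np.random.default_rng(8)
p = 5
A = rng.normal(size=(p, p))
Sigma = A @ A.T + p * np.eye(p)        # toy positive definite covariance
mu = rng.normal(size=p)
y = rng.multivariate_normal(mu, Sigma)
pred = conditional_mean(mu, Sigma, y, j=0)
```

Plugging in the posterior means of the one step ahead predictive moments gives forecast (c); with a diagonal covariance, the conditional mean reduces to the marginal mean, as it should.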
4 Discussion
In this paper, we have presented a generalization of Bayesian nonparametric covariance regression
to obtain a better characterization for mean and covariance temporal dynamics. Maintaining simple
conjugate posterior updates and tractable computations in moderately large p settings, our model
increases the flexibility of previous approaches, as shown in the simulation studies. Besides these
key advantages, the state space formulation enables the development of a fast online updating algorithm
useful for high frequency data. The application to the problem of capturing temporal and geoeconomic structure between financial markets shows the utility of our approach in the analysis of
multivariate financial time series.
References
[1] Tsay, R.S. (2005). Analysis of Financial Time Series. Hoboken, New Jersey: Wiley.
[2] Kalman, R.E. (1960). A new approach to linear filtering and prediction problems. Journal of Basic Engineering 82:35-45.
[3] Rasmussen, C.E. & Williams, C.K.I (2006). Gaussian processes for machine learning. Boston: MIT Press.
[4] Huang, J.Z., Wu, C.O & Zhou, L. (2002). Varying-coefficient models and basis function approximations
for the analysis of repeated measurements. Biometrika 89:111-128.
[5] Hastie, T. J. & Tibshirani, R. J. (1990). Generalized Additive Models. London: Chapman and Hall.
[6] Wu C.O., Chiang C.T. & Hoover D.R. (1998). Asymptotic confidence regions for kernel smoothing of a
varying-coefficient model with longitudinal data. JASA 93:1388-1402.
[7] Friedman, J. H. (1991). Multivariate Adaptive Regression Splines. Annals of Statistics 19:1-67.
[8] Smith, M. & Kohn, R. (1996). Nonparametric regression using Bayesian variable selection. Journal of
Econometrics 75:317-343.
[9] George, E.I. & McCulloch, R.E. (1993). Variable selection via Gibbs sampling. JASA 88:881-889.
[10] Donoho, D.L. & Johnstone, I.M. (1995). Adapting to unknown smoothness via wavelet shrinkage. JASA
90:1200-1224.
[11] Fan, J. & Gijbels, I. (1995). Data-driven bandwidth selection in local polynomial fitting: variable bandwidth and spatial adaptation. Journal of the Royal Statistical Society, Series B 57:371-394.
[12] Wolpert, R.L., Clyde M.A. & Tu, C. (2011). Stochastic expansions using continuous dictionaries: Levy
adaptive regression kernels. Annals of Statistics 39:1916-1962.
[13] Bollerslev, T., Engle, R.F. & Wooldridge, J.M. (1988). A capital-asset pricing model with time-varying
covariances. Journal of Political Economy 96:116-131.
[14] Engle, R.F. & Kroner, K.F. (1995). Multivariate simultaneous generalized ARCH. Econometric Theory
11:122-150.
[15] Engle, R.F. (2002). Dynamic conditional correlation: a simple class of multivariate generalized autoregressive conditional heteroskedasticity models. Journal of Business & Economic Statistics 20:339-350.
[16] Burns, P. (2005). Multivariate GARCH with Only Univariate Estimation. http://www.burns-stat.com.
[17] Alexander, C.O. (2001). Orthogonal GARCH. Mastering Risk 2:21-38.
[18] van der Weide, R. (2002). GO-GARCH: a multivariate generalized orthogonal GARCH model. Journal
of Applied Econometrics 17:549-564.
[19] Nakajima, J. & West, M. (2012). Dynamic factor volatility modeling: A Bayesian latent threshold approach. Journal of Financial Econometrics, in press.
[20] Wilson, A.G. & Ghahramani Z. (2010). Generalised Wishart Processes. arXiv:1101.0240.
[21] Bru, M. (1991). Wishart Processes. Journal of Theoretical Probability 4:725-751.
[22] Fox E. & Dunson D.B. (2011). Bayesian Nonparametric Covariance Regression. arXiv:1101.2017.
[23] Zhu B. & Dunson D.B., (2012). Locally Adaptive Bayes Nonparametric Regression via Nested Gaussian
Processes. arXiv:1201.4403.
[24] Durbin, J. & Koopman, S. (2002). A simple and efficient simulation smoother for state space time series
analysis. Biometrika 89:603-616.
[25] Durbin, J. & Koopman, S. (2001). Time Series Analysis by State Space Methods. New York: Oxford
University Press Inc.
[26] Donoho, D.L. & Johnstone, J.M. (1994). Ideal spatial adaptation by wavelet shrinkage. Biometrika 81:425-455.
[27] Gelman, A. & Rubin, D.B. (1992). Inference from iterative simulation using multiple sequences. Statistical Science 7:457-511.
robustness:1 top:3 standardized:3 include:2 maintaining:2 ghahramani:2 quantile:1 prof:1 implied:1 quantity:2 occurs:1 strategy:1 parametric:1 dependence:7 usual:2 diagonal:1 rerunning:1 distance:1 simulated:6 accommodates:2 accommodating:1 collected:1 extent:1 assuming:4 kalman:2 length:3 index:8 modeled:2 mini:2 innovation:2 nc:1 dunson:9 trace:1 implementation:1 proper:2 unknown:4 perform:2 allowing:5 upper:1 observation:17 discarded:1 variability:1 sharp:6 rating:1 david:1 required:2 speculation:1 narrow:1 address:1 able:3 suggested:1 pattern:1 regime:1 green:3 max:2 erratic:1 debt:3 weide:2 event:1 difficulty:1 widening:1 regularized:1 examination:1 business:1 residual:1 zhu:2 representing:1 misleading:1 numerous:1 contagion:1 lk:21 bubble:1 prior:17 literature:1 evolve:2 marginalizing:2 asymptotic:1 beside:1 fully:1 highlight:1 adaptivity:4 limitation:1 filtering:2 allocation:1 var:1 validation:1 downloaded:1 jasa:3 rubin:2 systematically:1 bypass:1 uncorrelated:1 nsi:7 summary:2 accounted:1 last:4 free:1 truncation:2 soon:1 rasmussen:1 allow:5 johnstone:3 wide:1 fall:1 characterizing:1 india:2 sparse:2 van:2 regard:1 world:3 rich:1 autoregressive:2 collection:2 adaptive:8 far:1 approximate:1 implicitly:1 global:3 assumed:3 yji:2 continuous:10 latent:11 iterative:1 table:2 additionally:2 obtaining:2 improving:1 expansion:1 mse:2 investigated:2 complex:1 european:4 domain:1 inappropriately:1 diag:1 inherit:1 obama:1 main:1 linearly:1 motivation:1 whole:1 hyperparameters:2 n2:1 repeated:1 west:1 join:1 quantiles:5 slow:3 wiley:1 exponential:2 levy:1 late:2 wavelet:3 formula:1 specific:1 showing:2 maxi:2 closeness:1 evidence:1 bru:2 intractable:1 stepwise:1 restricting:2 burden:1 adding:2 importance:1 conditioned:1 occurring:1 budget:2 nk:3 forecast:3 durham:1 rejection:2 boston:1 smoothly:1 wolpert:1 likely:1 univariate:1 terior:1 nested:3 truth:1 conditional:13 goal:3 donoho:3 change:16 tiger:4 typical:1 specifically:7 except:1 reducing:1 sampler:1 principal:1 
east:1 select:1 support:1 latter:1 inability:1 assessed:1 alexander:1 incorporate:1 evaluate:1 mcmc:4 |
4,549 | 5,116 | A Latent Source Model for
Nonparametric Time Series Classification
George H. Chen
MIT
[email protected]
Stanislav Nikolov
Twitter
[email protected]
Devavrat Shah
MIT
[email protected]
Abstract
For classifying time series, a nearest-neighbor approach is widely used in practice
with performance often competitive with or better than more elaborate methods
such as neural networks, decision trees, and support vector machines. We develop
theoretical justification for the effectiveness of nearest-neighbor-like classification of time series. Our guiding hypothesis is that in many applications, such as
forecasting which topics will become trends on Twitter, there aren't actually that
many prototypical time series to begin with, relative to the number of time series
we have access to, e.g., topics become trends on Twitter only in a few distinct manners whereas we can collect massive amounts of Twitter data. To operationalize
this hypothesis, we propose a latent source model for time series, which naturally
leads to a "weighted majority voting" classification rule that can be approximated
by a nearest-neighbor classifier. We establish nonasymptotic performance guarantees of both weighted majority voting and nearest-neighbor classification under
our model accounting for how much of the time series we observe and the model
complexity. Experimental results on synthetic data show weighted majority voting
achieving the same misclassification rate as nearest-neighbor classification while
observing less of the time series. We then use weighted majority to forecast which
news topics on Twitter become trends, where we are able to detect such "trending
topics" in advance of Twitter 79% of the time, with a mean early advantage of 1
hour and 26 minutes, a true positive rate of 95%, and a false positive rate of 4%.
1
Introduction
Recent years have seen an explosion in the availability of time series data related to virtually every
human endeavor ? data that demands to be analyzed and turned into valuable insights. A key
recurring task in mining this data is being able to classify a time series. As a running example used
throughout this paper, consider a time series that tracks how much activity there is for a particular
news topic on Twitter. Given this time series up to present time, we ask "will this news topic go
viral?" Borrowing Twitter's terminology, we label the time series a "trend" and call its corresponding
news topic a trending topic if the news topic goes viral; otherwise, the time series has label "not
trend". We seek to forecast whether a news topic will become a trend before it is declared a trend (or
not) by Twitter, amounting to a binary classification problem. Importantly, we skirt the discussion
of what makes a topic considered trending as this is irrelevant to our mathematical development.1
Furthermore, we remark that handling the case where a single time series can have different labels
at different times is beyond the scope of this paper.
1
While it is not public knowledge how Twitter defines a topic to be a trending topic, Twitter does provide
information for which topics are trending topics. We take these labels to be ground truth, effectively treating
how a topic goes viral to be a black box supplied by Twitter.
Numerous standard classification methods have been tailored to classify time series, yet a simple
nearest-neighbor approach is hard to beat in terms of classification performance on a variety of
datasets [20], with results competitive to or better than various other more elaborate methods such
as neural networks [15], decision trees [16], and support vector machines [19]. More recently,
researchers have examined which distance to use with nearest-neighbor classification [2, 7, 18] or
how to boost classification performance by applying different transformations to the time series
before using nearest-neighbor classification [1]. These existing results are mostly experimental,
lacking theoretical justification for both when nearest-neighbor-like time series classifiers should be
expected to perform well and how well.
If we don't confine ourselves to classifying time series, then as the amount of data tends to infinity,
nearest-neighbor classification has been shown to achieve a probability of error that is at worst
twice the Bayes error rate, and when considering the nearest k neighbors with k allowed to grow
with the amount of data, then the error rate approaches the Bayes error rate [5]. However, rather
than examining the asymptotic case where the amount of data goes to infinity, we instead pursue
nonasymptotic performance guarantees in terms of how large of a training dataset we have and how
much we observe of the time series to be classified. To arrive at these nonasymptotic guarantees, we
impose a low-complexity structure on time series.
Our contributions. We present a model for which nearest-neighbor-like classification performs well
by operationalizing the following hypothesis: In many time series applications, there are only a small
number of prototypical time series relative to the number of time series we can collect. For example,
posts on Twitter are generated by humans, who are often behaviorally predictable in aggregate. This
suggests that topics they post about only become trends on Twitter in a few distinct manners, yet we
have at our disposal enormous volumes of Twitter data. In this context, we present a novel latent
source model: time series are generated from a small collection of m unknown latent sources, each
having one of two labels, say "trend" or "not trend". Our model's maximum a posteriori (MAP) time
series classifier can be approximated by weighted majority voting, which compares the time series
to be classified with each of the time series in the labeled training data. Each training time series
casts a weighted vote in favor of its ground truth label, with the weight depending on how similar
the time series being classified is to the training example. The final classification is "trend" or "not
trend" depending on which label has the higher overall vote. The voting is nonparametric in that it
does not learn parameters for a model and is driven entirely by the training data. The unknown latent
sources are never estimated; the training data serve as a proxy for these latent sources. Weighted
majority voting itself can be approximated by a nearest-neighbor classifier, which we also analyze.
Under our model, we show sufficient conditions so that if we have $n = \Theta(m \log \frac{m}{\delta})$ time series in our training data, then weighted majority voting and nearest-neighbor classification correctly classify a new time series with probability at least $1 - \delta$ after observing its first $\Omega(\log \frac{m}{\delta})$ time steps. As
our analysis accounts for how much of the time series we observe, our results readily apply to the
"online" setting in which a time series is to be classified while it streams in (as is the case for forecasting trending topics) as well as the "offline" setting where we have access to the entire time series.
Also, while our analysis yields matching error upper bounds for the two classifiers, experimental results on synthetic data suggests that weighted majority voting outperforms nearest-neighbor classification early on when we observe very little of the time series to be classified. Meanwhile, a specific
instantiation of our model leads to a spherical Gaussian mixture model, where the latent sources are
Gaussian mixture components. We show that existing performance guarantees on learning spherical
Gaussian mixture models [6, 10, 17] require more stringent conditions than what our results need,
suggesting that learning the latent sources is overkill if the goal is classification.
Lastly, we apply weighted majority voting to forecasting trending topics on Twitter. We emphasize
that our goal is precognition of trends: predicting whether a topic is going to be a trend before it
is actually declared to be a trend by Twitter or, in theory, any other third party that we can collect
ground truth labels from. Existing work that identify trends on Twitter [3, 4, 13] instead, as part
of their trend detection, define models for what trends are, which we do not do, nor do we assume
we have access to such definitions. (The same could be said of previous work on novel document
detection on Twitter [11, 12].) In our experiments, weighted majority voting is able to predict
whether a topic will be a trend in advance of Twitter 79% of the time, with a mean early advantage
of 1 hour and 26 minutes, a true positive rate of 95%, and a false positive rate of 4%. We empirically
find that the Twitter activity of a news topic that becomes a trend tends to follow one of a finite
number of patterns, which could be thought of as latent sources.
Outline. Weighted majority voting and nearest-neighbor classification for time series are presented in Section 2. We provide our latent source model and theoretical performance guarantees
of weighted majority voting and nearest-neighbor classification under this model in Section 3. Experimental results for synthetic data and forecasting trending topics on Twitter are in Section 4.
2
Weighted Majority Voting and Nearest-Neighbor Classification
Given a time series² $s : \mathbb{Z} \to \mathbb{R}$, we want to classify it as having either label +1 ("trend") or −1 ("not trend"). To do so, we have access to labeled training data $\mathcal{R}_+$ and $\mathcal{R}_-$, which denote the sets of all training time series with labels +1 and −1 respectively.
Weighted majority voting. Each positively-labeled example $r \in \mathcal{R}_+$ casts a weighted vote $e^{-\gamma d^{(T)}(r, s)}$ for whether time series s has label +1, where $d^{(T)}(r, s)$ is some measure of similarity between the two time series r and s, the superscript (T) indicates that we are only allowed to look at the first T time steps (i.e., time steps 1, 2, . . . , T) of s (but we're allowed to look outside of these time steps for the training time series r), and constant $\gamma \ge 0$ is a scaling parameter that determines the "sphere of influence" of each example. Similarly, each negatively-labeled example in $\mathcal{R}_-$ also casts a weighted vote for whether time series s has label −1.
The similarity measure $d^{(T)}(r, s)$ could, for example, be squared Euclidean distance: $d^{(T)}(r, s) = \sum_{t=1}^{T} (r(t) - s(t))^2 \triangleq \|r - s\|_T^2$. However, this similarity measure only looks at the first T time steps of training time series r. Since time series in our training data are known, we need not restrict our attention to their first T time steps. Thus, we use the following similarity measure:

$$d^{(T)}(r, s) = \min_{\Delta \in \{-\Delta_{\max}, \dots, 0, \dots, \Delta_{\max}\}} \sum_{t=1}^{T} (r(t + \Delta) - s(t))^2 = \min_{\Delta \in \{-\Delta_{\max}, \dots, 0, \dots, \Delta_{\max}\}} \|r_{*\Delta} - s\|_T^2, \quad (1)$$

where we minimize over integer time shifts with a pre-specified maximum allowed shift $\Delta_{\max} \ge 0$. Here, we have used $q_{*\Delta}$ to denote time series q advanced by $\Delta$ time steps, i.e., $(q_{*\Delta})(t) = q(t + \Delta)$.
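As a concrete illustration, the shift-minimized distance in Eq. (1) can be computed directly. The sketch below assumes a particular padding convention for the training series (so every allowed shift stays in bounds); that convention, and the function name, are choices made for this example rather than anything specified above.

```python
import numpy as np

def shifted_distance(r, s, T, delta_max):
    """d^(T)(r, s) from Eq. (1): squared Euclidean distance over the
    first T time steps of s, minimized over integer shifts of r.

    Assumed layout (a convention for this sketch): s[0] is s at time
    step 1, and r has length at least T + 2*delta_max with r[delta_max]
    aligned to time step 1, so every shift in {-delta_max, ..., delta_max}
    indexes valid entries of r."""
    assert len(r) >= T + 2 * delta_max and len(s) >= T
    best = np.inf
    for delta in range(-delta_max, delta_max + 1):
        # r advanced by delta time steps, compared against s(1), ..., s(T)
        window = r[delta_max + delta : delta_max + delta + T]
        best = min(best, float(np.sum((window - s[:T]) ** 2)))
    return best
```

With `delta_max = 0` this reduces to the plain truncated squared Euclidean distance.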
Finally, we sum up all of the weighted +1 votes and then all of the weighted −1 votes. The label with the majority of overall weighted votes is declared as the label for s:

$$\widehat{L}^{(T)}(s; \gamma) = \begin{cases} +1 & \text{if } \sum_{r \in \mathcal{R}_+} e^{-\gamma d^{(T)}(r, s)} \ge \sum_{r \in \mathcal{R}_-} e^{-\gamma d^{(T)}(r, s)}, \\ -1 & \text{otherwise}. \end{cases} \quad (2)$$
Using a larger time window size T corresponds to waiting longer before we make a prediction. We need to trade off how long we wait and how accurate we want our prediction. Note that k-nearest-neighbor classification corresponds to only considering the k nearest neighbors of s among all training time series; all other votes are set to 0. With k = 1, we obtain the following classifier:
Nearest-neighbor classifier. Let $\widehat{r} = \arg\min_{r \in \mathcal{R}_+ \cup \mathcal{R}_-} d^{(T)}(r, s)$ be the nearest neighbor of s. Then we declare the label for s to be:

$$\widehat{L}_{NN}^{(T)}(s) = \begin{cases} +1 & \text{if } \widehat{r} \in \mathcal{R}_+, \\ -1 & \text{if } \widehat{r} \in \mathcal{R}_-. \end{cases} \quad (3)$$
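To make the two decision rules concrete, here is a minimal sketch of weighted majority voting (Eq. (2)) and the nearest-neighbor classifier (Eq. (3)). For brevity it fixes $\Delta_{\max} = 0$, so the distance is the plain truncated squared Euclidean distance; the shift-minimized distance of Eq. (1) would slot in the same way. The function names are ours, not the paper's.

```python
import numpy as np

def _dist_T(r, s, T):
    """Truncated squared Euclidean distance ||r - s||_T^2 (delta_max = 0)."""
    return float(np.sum((np.asarray(r)[:T] - np.asarray(s)[:T]) ** 2))

def wmv_label(s, R_plus, R_minus, gamma, T):
    """Weighted majority voting, Eq. (2): each training series casts a
    vote exp(-gamma * d) for its own label; the larger total wins."""
    vote_plus = sum(np.exp(-gamma * _dist_T(r, s, T)) for r in R_plus)
    vote_minus = sum(np.exp(-gamma * _dist_T(r, s, T)) for r in R_minus)
    return 1 if vote_plus >= vote_minus else -1

def nn_label(s, R_plus, R_minus, T):
    """Nearest-neighbor classifier, Eq. (3): label of the single closest
    training series under the same truncated distance."""
    best_plus = min(_dist_T(r, s, T) for r in R_plus)
    best_minus = min(_dist_T(r, s, T) for r in R_minus)
    return 1 if best_plus <= best_minus else -1
```

Setting all but the k smallest-distance votes to zero in `wmv_label` would recover k-nearest-neighbor classification.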
3
A Latent Source Model and Theoretical Guarantees
We assume there to be m unknown latent sources (time series) that generate observed time series.
Let $\mathcal{V}$ denote the set of all such latent sources; each latent source $v : \mathbb{Z} \to \mathbb{R}$ in $\mathcal{V}$ has a true label +1 or −1. Let $\mathcal{V}_+ \subset \mathcal{V}$ be the set of latent sources with label +1, and $\mathcal{V}_- \subset \mathcal{V}$ be the set of those with label −1. The observed time series are generated from latent sources as follows:
1. Sample latent source V from $\mathcal{V}$ uniformly at random.³ Let $L \in \{\pm 1\}$ be the label of V.

²We index time using $\mathbb{Z}$ for notational convenience but will assume time series to start at time step 1.

³While we keep the sampling uniform for clarity of presentation, our theoretical guarantees can easily be extended to the case where the sampling is not uniform. The only change is that the number of training data needed will be larger by a factor of $\frac{1}{m \pi_{\min}}$, where $\pi_{\min}$ is the smallest probability of a particular latent source occurring.
Figure 1: Example of latent sources superimposed, where each latent source is shifted vertically in amplitude such that every other latent source has label +1 and the rest have label −1.
2. Sample integer time shift $\Delta$ uniformly from $\{0, 1, \dots, \Delta_{\max}\}$.
3. Output time series $S : \mathbb{Z} \to \mathbb{R}$ to be latent source V advanced by $\Delta$ time steps, followed by adding noise signal $E : \mathbb{Z} \to \mathbb{R}$, i.e., $S(t) = V(t + \Delta) + E(t)$. The label associated with the generated time series S is the same as that of V, i.e., L. Entries of noise E are i.i.d. zero-mean sub-Gaussian with parameter $\sigma$, which means that for any time index t,

$$\mathbb{E}[\exp(\lambda E(t))] \le \exp\left(\tfrac{1}{2} \lambda^2 \sigma^2\right) \quad \text{for all } \lambda \in \mathbb{R}. \quad (4)$$

The family of sub-Gaussian distributions includes a variety of distributions, such as a zero-mean Gaussian with standard deviation $\sigma$ and a uniform distribution over $[-\sigma, \sigma]$.
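The three-step generative process above can be sketched as follows. Two details are choices made for this illustration rather than part of the model: the noise is taken to be Gaussian (one member of the sub-Gaussian family in Eq. (4)), and each source is stored padded so that every shift stays in bounds.

```python
import numpy as np

def sample_time_series(sources, labels, T, delta_max, sigma, rng):
    """Draw one (time series, label) pair from the latent source model:
    pick a source uniformly (step 1), advance it by a uniform shift in
    {0, ..., delta_max} (step 2), then observe T steps plus i.i.d.
    zero-mean Gaussian noise (step 3).

    Each row of `sources` is assumed to cover T + delta_max time steps,
    a convention for this sketch."""
    k = rng.integers(len(sources))            # step 1: latent source V
    delta = rng.integers(delta_max + 1)       # step 2: time shift
    v_shifted = sources[k][delta : delta + T] # V advanced by delta steps
    noise = rng.normal(0.0, sigma, size=T)    # step 3: additive noise
    return v_shifted + noise, labels[k]
```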
The above generative process defines our latent source model. Importantly, we make no assumptions
about the structure of the latent sources. For instance, the latent sources could be tiled as shown in
Figure 1, where they are evenly separated vertically and alternate between the two different classes
+1 and −1. With a parametric model like a k-component Gaussian mixture model, estimating these latent sources could be problematic. For example, if we take any two adjacent latent sources with label +1 and cluster them, then this cluster could be confused with the latent source having label −1 that is sandwiched in between. Noise only complicates estimating the latent sources. In this example, the k-component Gaussian mixture model needed for label +1 would require k to be the exact number of latent sources with label +1, which is unknown. In general, the number of samples we need from a Gaussian mixture model to estimate the mixture component means
is exponential in the number of mixture components [14]. As we discuss next, for classification,
we sidestep learning the latent sources altogether, instead using training data as a proxy for latent
sources. At the end of this section, we compare our sample complexity for classification versus
some existing sample complexities for learning Gaussian mixture models.
Classification. If we knew the latent sources and if noise entries E(t) were i.i.d. $\mathcal{N}(0, \frac{1}{2\gamma})$ across t, then the maximum a posteriori (MAP) estimate for label L given an observed time series S = s is

$$\widehat{L}_{\text{MAP}}^{(T)}(s; \gamma) = \begin{cases} +1 & \text{if } \Lambda_{\text{MAP}}^{(T)}(s; \gamma) \ge 1, \\ -1 & \text{otherwise}, \end{cases} \quad (5)$$

where

$$\Lambda_{\text{MAP}}^{(T)}(s; \gamma) \triangleq \frac{\sum_{v_+ \in \mathcal{V}_+} \sum_{\Delta_+ \in \mathcal{D}_+} \exp(-\gamma \|v_{+ *\Delta_+} - s\|_T^2)}{\sum_{v_- \in \mathcal{V}_-} \sum_{\Delta_- \in \mathcal{D}_+} \exp(-\gamma \|v_{- *\Delta_-} - s\|_T^2)}, \quad (6)$$

and $\mathcal{D}_+ \triangleq \{0, \dots, \Delta_{\max}\}$.
However, we do not know the latent sources, nor do we know if the noise is i.i.d. Gaussian. We assume that we have access to training data as given in Section 2. We make a further assumption that the training data were sampled from the latent source model and that we have n different training time series. Denote $\mathcal{D} \triangleq \{-\Delta_{\max}, \dots, 0, \dots, \Delta_{\max}\}$. Then we approximate the MAP classifier by using training data as a proxy for the latent sources. Specifically, we take ratio (6), replace the inner sum by a minimum in the exponent, replace $\mathcal{V}_+$ and $\mathcal{V}_-$ by $\mathcal{R}_+$ and $\mathcal{R}_-$, and replace $\mathcal{D}_+$ by $\mathcal{D}$ to obtain the ratio:

$$\Lambda^{(T)}(s; \gamma) \triangleq \frac{\sum_{r_+ \in \mathcal{R}_+} \exp(-\gamma \min_{\Delta_+ \in \mathcal{D}} \|r_{+ *\Delta_+} - s\|_T^2)}{\sum_{r_- \in \mathcal{R}_-} \exp(-\gamma \min_{\Delta_- \in \mathcal{D}} \|r_{- *\Delta_-} - s\|_T^2)}. \quad (7)$$
Plugging $\Lambda^{(T)}$ in place of $\Lambda_{\text{MAP}}^{(T)}$ in classification rule (5) yields the weighted majority voting rule (2). Note that weighted majority voting could be interpreted as a smoothed nearest-neighbor approximation whereby we only consider the time-shifted version of each example time series that is closest to the observed time series s. If we didn't replace the summations over time shifts with minimums in the exponent, then we have a kernel density estimate in the numerator and in the denominator [9, Chapter 7] (where the kernel is Gaussian) and our main theoretical result for weighted majority voting to follow would still hold using the same proof.⁴
Lastly, applications may call for trading off true and false positive rates. We can do this by generalizing decision rule (5) to declare the label of s to be +1 if $\Lambda^{(T)}(s, \gamma) \ge \theta$, and vary parameter $\theta > 0$. The resulting decision rule, which we refer to as generalized weighted majority voting, is thus:

$$\widehat{L}_\theta^{(T)}(s; \gamma) = \begin{cases} +1 & \text{if } \Lambda^{(T)}(s, \gamma) \ge \theta, \\ -1 & \text{otherwise}, \end{cases} \quad (8)$$

where setting $\theta = 1$ recovers the usual weighted majority voting (2). This modification to the classifier can be thought of as adjusting the priors on the relative sizes of the two classes. Our theoretical results to follow actually cover this more general case rather than only that of $\theta = 1$.
Theoretical guarantees. We now present the main theoretical results of this paper which identify
sufficient conditions under which generalized weighted majority voting (8) and nearest-neighbor
classification (3) can classify a time series correctly with high probability, accounting for the size of
the training dataset and how much we observe of the time series to be classified. First, we define the "gap" between $\mathcal{R}_+$ and $\mathcal{R}_-$ restricted to time length T and with maximum time shift $\Delta_{\max}$ as:

$$G^{(T)}(\mathcal{R}_+, \mathcal{R}_-, \Delta_{\max}) \triangleq \min_{\substack{r_+ \in \mathcal{R}_+,\; r_- \in \mathcal{R}_-,\\ \Delta_+, \Delta_- \in \mathcal{D}}} \|r_{+ *\Delta_+} - r_{- *\Delta_-}\|_T^2. \quad (9)$$
This quantity measures how far apart the two different classes are if we only look at length-T chunks of each time series and allow all shifts of at most $\Delta_{\max}$ time steps in either direction.
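A brute-force computation of the gap (9) might look like the following sketch; the padding convention (each series stored with $\Delta_{\max}$ extra entries on each side of its first T steps) is ours, not the paper's.

```python
import numpy as np

def gap(R_plus, R_minus, T, delta_max):
    """G^(T)(R+, R-, delta_max) from Eq. (9): smallest squared distance
    over the first T steps between any positively and any negatively
    labeled training series, each shifted by any amount in
    {-delta_max, ..., delta_max}. Series are assumed to have length at
    least T + 2*delta_max, with index delta_max aligned to time step 1
    (a convention for this sketch)."""
    shifts = range(-delta_max, delta_max + 1)
    best = np.inf
    for r_p in R_plus:
        for r_m in R_minus:
            for dp in shifts:
                for dm in shifts:
                    wp = r_p[delta_max + dp : delta_max + dp + T]
                    wm = r_m[delta_max + dm : delta_max + dm + T]
                    best = min(best, float(np.sum((wp - wm) ** 2)))
    return best
```

The quadruple loop is only meant to mirror the definition; it scales poorly and would be vectorized in any serious use.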
Our first main result is stated below. We defer proofs to the longer version of this paper.
Theorem 1. (Performance guarantee for generalized weighted majority voting) Let $m_+ = |\mathcal{V}_+|$ be the number of latent sources with label +1, and $m_- = |\mathcal{V}_-| = m - m_+$ be the number of latent sources with label −1. For any $\beta > 1$, under the latent source model with $n > \beta m \log m$ time series in the training data, the probability of misclassifying time series S with label L using generalized weighted majority voting $\widehat{L}_\theta^{(T)}(\cdot\,; \gamma)$ satisfies the bound

$$\mathbb{P}(\widehat{L}_\theta^{(T)}(S; \gamma) \ne L) \le \left( \frac{\theta m_+}{m_-} + \frac{m_-}{\theta m_+} \right) (2\Delta_{\max} + 1)\, n \exp\left( -(\gamma - 4\sigma^2\gamma^2)\, G^{(T)}(\mathcal{R}_+, \mathcal{R}_-, \Delta_{\max}) \right) + m^{-\beta + 1}. \quad (10)$$
An immediate consequence is that given error tolerance $\delta \in (0, 1)$ and with choice $\gamma \in (0, \frac{1}{4\sigma^2})$, then upper bound (10) is at most $\delta$ (by having each of the two terms on the right-hand side be $\le \frac{\delta}{2}$) if $n > \beta m \log \frac{2m}{\delta}$ (i.e., $\beta = 1 + \log\frac{2}{\delta} / \log m$), and

$$G^{(T)}(\mathcal{R}_+, \mathcal{R}_-, \Delta_{\max}) \ge \frac{\log\left(\frac{\theta m_+}{m_-} + \frac{m_-}{\theta m_+}\right) + \log(2\Delta_{\max} + 1) + \log n + \log\frac{2}{\delta}}{\gamma - 4\sigma^2\gamma^2}. \quad (11)$$

This means that if we have access to a large enough pool of labeled time series, i.e., the pool has $\Omega(m \log \frac{m}{\delta})$ time series, then we can subsample $n = \Theta(m \log \frac{m}{\delta})$ of them to use as training data. Then with choice $\gamma = \frac{1}{8\sigma^2}$, generalized weighted majority voting (8) correctly classifies a new time series S with probability at least $1 - \delta$ if

$$G^{(T)}(\mathcal{R}_+, \mathcal{R}_-, \Delta_{\max}) = \Omega\left( \sigma^2 \left[ \log\left(\frac{\theta m_+}{m_-} + \frac{m_-}{\theta m_+}\right) + \log(2\Delta_{\max} + 1) + \log\frac{m}{\delta} \right] \right). \quad (12)$$
Thus, the gap between sets $\mathcal{R}_+$ and $\mathcal{R}_-$ needs to grow logarithmically in the number of latent sources m in order for weighted majority voting to classify correctly with high probability. Assuming that the original unknown latent sources are separated (otherwise, there is no hope to distinguish between the classes using any classifier) and the gap in the training data grows as $G^{(T)}(\mathcal{R}_+, \mathcal{R}_-, \Delta_{\max}) = \Omega(\sigma^2 T)$ (otherwise, the closest two training time series from opposite classes are within noise of each other), then observing the first $T = \Omega(\log(\theta + \frac{1}{\theta}) + \log(2\Delta_{\max} + 1) + \log\frac{m}{\delta})$ time steps from the time series is sufficient to classify it correctly with probability at least $1 - \delta$.

⁴We use a minimum rather than a summation over time shifts to make the method more similar to existing time series classification work (e.g., [20]), which minimizes over time warpings rather than simple shifts.
A similar result holds for the nearest-neighbor classifier (3).
Theorem 2. (Performance guarantee for nearest-neighbor classification) For any $\beta > 1$, under the latent source model with $n > \beta m \log m$ time series in the training data, the probability of misclassifying time series S with label L using the nearest-neighbor classifier $\widehat{L}_{NN}^{(T)}(\cdot)$ satisfies the bound

$$\mathbb{P}(\widehat{L}_{NN}^{(T)}(S) \ne L) \le (2\Delta_{\max} + 1)\, n \exp\left( -\frac{1}{16\sigma^2}\, G^{(T)}(\mathcal{R}_+, \mathcal{R}_-, \Delta_{\max}) \right) + m^{-\beta + 1}. \quad (13)$$
Our generalized weighted majority voting bound (10) with $\theta = 1$ (corresponding to regular weighted majority voting) and $\gamma = \frac{1}{8\sigma^2}$ matches our nearest-neighbor classification bound, suggesting that
the two methods have similar behavior when the gap grows with T . In practice, we find weighted
majority voting to outperform nearest-neighbor classification when T is small, and then as T grows
large, the two methods exhibit similar performance in agreement with our theoretical analysis. For
small T , it could still be fairly likely that the nearest neighbor found has the wrong label, dooming
the nearest-neighbor classifier to failure. Weighted majority voting, on the other hand, can recover
from this situation as there may be enough correctly labeled training time series close by that contribute to a higher overall vote for the correct class. This robustness of weighted majority voting
makes it favorable in the online setting where we want to make a prediction as early as possible.
Sample complexity of learning the latent sources. If we can estimate the latent sources accurately,
then we could plug these estimates in place of the true latent sources in the MAP classifier and
achieve classification performance close to optimal. If we restrict the noise to be Gaussian and
assume $\Delta_{\max} = 0$, then the latent source model corresponds to a spherical Gaussian mixture model.
We could learn such a model using Dasgupta and Schulman's modified EM algorithm [6]. Their theoretical guarantee depends on the true separation between the closest two latent sources, namely $G^{(T)*} \triangleq \min_{v, v' \in \mathcal{V}\ \text{s.t.}\ v \ne v'} \|v - v'\|_T^2$, which needs to satisfy $G^{(T)*} = \Omega(\sigma^2 \sqrt{T})$. Then with $n = \Theta\big(\max\{1, \tfrac{\sigma^2 T}{(G^{(T)*})^2}\}\, m \log\tfrac{m}{\delta}\big)$, $G^{(T)*} = \Omega(\sigma^2 \log\tfrac{m}{\varepsilon})$, and

$$T = \Omega\left( \max\Big\{1, \frac{\sigma^4 T}{(G^{(T)*})^2}\Big\} \log\Big[ \frac{m}{\delta} \max\Big\{1, \frac{\sigma^4 T}{(G^{(T)*})^2}\Big\} \Big] \right), \quad (14)$$

their algorithm achieves, with probability at least $1 - \delta$, an additive $\varepsilon \sigma \sqrt{T}$ error (in Euclidean distance) close to optimal in estimating every latent source. In contrast, our result is in terms of gap $G^{(T)}(\mathcal{R}_+, \mathcal{R}_-, \Delta_{\max})$ that depends not on the true separation between two latent sources but instead on the minimum observed separation in the training data between two time series of opposite labels. In fact, our gap, in their setting, grows as $\Omega(\sigma^2 T)$ even when their gap $G^{(T)*}$ grows sublinearly in $\sqrt{T}$. In particular, while their result cannot handle the regime where $O(\sigma^2 \log\tfrac{m}{\delta}) \le G^{(T)*} \le \sigma^2 \sqrt{T}$, ours can, using $n = \Theta(m \log\tfrac{m}{\delta})$ training time series and observing the first $T = \Omega(\log\tfrac{m}{\delta})$ time steps to classify a time series correctly with probability at least $1 - \delta$; see the longer version of this
paper for details.
Vempala and Wang [17] have a spectral method for learning Gaussian mixture models that can handle smaller $G^{(T)*}$ than Dasgupta and Schulman's approach but requires $n = \widetilde{\Omega}(T^3 m^2)$ training data, where we've hidden the dependence on $\sigma^2$ and other variables of interest for clarity of presentation. Hsu and Kakade [10] have a moment-based estimator that doesn't have a gap condition but, under a different non-degeneracy condition, requires substantially more samples for our problem setup, i.e., $n = \Omega((m^{14} + T m^{11})/\varepsilon^2)$ to achieve an $\varepsilon$-approximation of the mixture components. These results need substantially more training data than what we've shown is sufficient for classification.

To fit a Gaussian mixture model to massive training datasets, in practice, using all the training data could be prohibitively expensive. In such scenarios, one could instead non-uniformly subsample $O(T m^3/\varepsilon^2)$ time series from the training data using the procedure given in [8] and then feed the resulting smaller dataset, referred to as an $(m, \varepsilon)$-coreset, to the EM algorithm for learning the latent sources. This procedure still requires more training time series than needed for classification and lacks a guarantee that the estimated latent sources will be close to the true latent sources.
Figure 2: Results on synthetic data. (a) Classification error rate vs. number of initial time steps T used; training set size: $n = \beta m \log m$ where $\beta = 8$. (b) Classification error rate at T = 100 vs. $\beta$.
All experiments were repeated 20 times with newly generated latent sources, training data, and test
data each time. Error bars denote one standard deviation above and below the mean value.
Figure 3: How news topics become trends on Twitter. The top left shows some time series of activity
leading up to a news topic becoming trending. These time series superimposed look like clutter, but
we can separate them into different clusters, as shown in the next five plots. Each cluster represents
a "way" that a news topic becomes trending.
4
Experimental Results
Synthetic data. We generate m = 200 latent sources, where each latent source is constructed by first sampling i.i.d. N(0, 100) entries per time step and then applying a 1D Gaussian smoothing filter with scale parameter 30. Half of the latent sources are labeled +1 and the other half −1. Then n = βm log m training time series are sampled as per the latent source model, where the added noise is i.i.d. N(0, 1) and Δmax = 100. We similarly generate 1000 time series to use as test data. We set γ = 1/8 for weighted majority voting. For β = 8, we compare the classification error rates on test data for weighted majority voting, nearest-neighbor classification, and the MAP classifier with oracle access to the true latent sources, as shown in Figure 2(a). We see that weighted majority voting outperforms nearest-neighbor classification, but as T grows large, the two methods' performances converge to that of the MAP classifier. Fixing T = 100, we then compare the classification error rates of the three methods using varying amounts of training data, as shown in Figure 2(b); the oracle MAP classifier is also shown but does not actually depend on training data. We see that as β increases, both weighted majority voting and nearest-neighbor classification steadily improve in performance.
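The voting rule is easy to state in code. Below is a minimal, hypothetical sketch of weighted majority voting in the spirit of rule (2): each training series casts a vote exp(−γ·d²) for its label, where d is its Euclidean distance to the test series. Time shifts are omitted for brevity, and the helper name and synthetic data are illustrative, not from the paper:

```python
import numpy as np

def weighted_majority_vote(test_series, train_series, train_labels, gamma):
    """Classify a time series by weighted votes from labeled training series.

    Each training series r casts a vote exp(-gamma * ||s - r||^2) for its
    label; the label with the larger total vote wins. (Hypothetical helper
    mirroring the paper's voting rule; no time shifts considered here.)
    """
    votes = {+1: 0.0, -1: 0.0}
    for r, label in zip(train_series, train_labels):
        d2 = np.sum((test_series - r) ** 2)
        votes[label] += np.exp(-gamma * d2)
    return +1 if votes[+1] >= votes[-1] else -1

rng = np.random.default_rng(0)
source = rng.normal(size=50)                       # one latent source
train = [source + rng.normal(scale=0.1, size=50) for _ in range(5)]
labels = [+1] * 5
train += [-source + rng.normal(scale=0.1, size=50) for _ in range(5)]
labels += [-1] * 5
test = source + rng.normal(scale=0.1, size=50)     # noisy copy of the +1 source
pred = weighted_majority_vote(test, train, labels, gamma=1.0 / 8)
```

With γ = 1/8 as in the synthetic experiment, training series far from the test series contribute exponentially small votes, so the nearby +1 copies dominate.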
Forecasting trending topics on Twitter. We provide only an overview of our Twitter results here,
deferring full details to the longer version of this paper. We sampled 500 examples of trends at
random from a list of June 2012 news trends, and 500 examples of non-trends based on phrases
appearing in user posts during the same month. As we do not know how Twitter chooses what
phrases are considered as candidate phrases for trending topics, it's unclear what the size of the
Figure 4: Results on Twitter data. (a) Weighted majority voting achieves a low error rate (FPR of 4%, TPR of 95%) and detects trending topics in advance of Twitter 79% of the time, with a mean of 1.43 hours when it does; parameters: γ = 10, T = 115, T_smooth = 80, h = 7. (b) Envelope of all ROC curves shows the tradeoff between TPR and FPR. (c) Distribution of detection times for "aggressive" (top), "conservative" (bottom), and "in-between" (center) parameter settings.
non-trend category is in comparison to the size of the trend category. Thus, for simplicity, we intentionally control for the class sizes by setting them equal. In practice, one could still expressly assemble the training data to have pre-specified class sizes and then tune θ for generalized weighted majority voting (8). In our experiments, we use the usual weighted majority voting (2) (i.e., θ = 1) to classify time series, where Δmax is set to the maximum possible (we consider all shifts).
Per topic, we created its time series based on a pre-processed version of the raw rate of how often the topic was shared, i.e., its Tweet rate. We empirically found that how news topics become trends tends to follow a finite number of patterns; a few examples of these patterns are shown in Figure 3. We randomly divided the set of trends and non-trends into two halves, one to use as training data and one to use as test data. We applied weighted majority voting, sweeping over γ, T, and data pre-processing parameters. As shown in Figure 4(a), one choice of parameters allows us to detect trending topics in advance of Twitter 79% of the time, and when we do, we detect them an average of 1.43 hours earlier. Furthermore, we achieve a true positive rate (TPR) of 95% and a false positive rate (FPR) of 4%. Naturally, there are tradeoffs between TPR, FPR, and how early we make a prediction (i.e., how small T is). As shown in Figure 4(c), an "aggressive" parameter setting yields early detection and high TPR but high FPR, and a "conservative" parameter setting yields low FPR but late detection and low TPR. An "in-between" setting can strike the right balance.
Acknowledgements. This work was supported in part by the Army Research Office under MURI
Award 58153-MA-MUR. GHC was supported by an NDSEG fellowship.
References
[1] Anthony Bagnall, Luke Davis, Jon Hills, and Jason Lines. Transformation based ensembles for time series classification. In Proceedings of the 12th SIAM International Conference on Data Mining, pages 307–319, 2012.
[2] Gustavo E.A.P.A. Batista, Xiaoyue Wang, and Eamonn J. Keogh. A complexity-invariant distance measure for time series. In Proceedings of the 11th SIAM International Conference on Data Mining, pages 699–710, 2011.
[3] Hila Becker, Mor Naaman, and Luis Gravano. Beyond trending topics: Real-world event identification on Twitter. In Proceedings of the Fifth International Conference on Weblogs and Social Media, 2011.
[4] Mario Cataldi, Luigi Di Caro, and Claudio Schifanella. Emerging topic detection on Twitter based on temporal and social terms evaluation. In Proceedings of the 10th International Workshop on Multimedia Data Mining, 2010.
[5] Thomas M. Cover and Peter E. Hart. Nearest neighbor pattern classification. IEEE Transactions on Information Theory, 13(1):21–27, 1967.
[6] Sanjoy Dasgupta and Leonard Schulman. A probabilistic analysis of EM for mixtures of separated, spherical Gaussians. Journal of Machine Learning Research, 8:203–226, 2007.
[7] Hui Ding, Goce Trajcevski, Peter Scheuermann, Xiaoyue Wang, and Eamonn Keogh. Querying and mining of time series data: experimental comparison of representations and distance measures. Proceedings of the VLDB Endowment, 1(2):1542–1552, 2008.
[8] Dan Feldman, Matthew Faulkner, and Andreas Krause. Scalable training of mixture models via coresets. In Advances in Neural Information Processing Systems 24, 2011.
[9] Keinosuke Fukunaga. Introduction to Statistical Pattern Recognition (2nd ed.). Academic Press Professional, Inc., 1990.
[10] Daniel Hsu and Sham M. Kakade. Learning mixtures of spherical Gaussians: Moment methods and spectral decompositions, 2013. arXiv:1206.5766.
[11] Shiva Prasad Kasiviswanathan, Prem Melville, Arindam Banerjee, and Vikas Sindhwani. Emerging topic detection using dictionary learning. In Proceedings of the 20th ACM Conference on Information and Knowledge Management, pages 745–754, 2011.
[12] Shiva Prasad Kasiviswanathan, Huahua Wang, Arindam Banerjee, and Prem Melville. Online l1-dictionary learning with application to novel document detection. In Advances in Neural Information Processing Systems 25, pages 2267–2275, 2012.
[13] Michael Mathioudakis and Nick Koudas. TwitterMonitor: trend detection over the Twitter stream. In Proceedings of the 2010 ACM SIGMOD International Conference on Management of Data, 2010.
[14] Ankur Moitra and Gregory Valiant. Settling the polynomial learnability of mixtures of Gaussians. In 51st Annual IEEE Symposium on Foundations of Computer Science, pages 93–102, 2010.
[15] Alex Nanopoulos, Rob Alcock, and Yannis Manolopoulos. Feature-based classification of time-series data. International Journal of Computer Research, 10, 2001.
[16] Juan J. Rodríguez and Carlos J. Alonso. Interval and dynamic time warping-based decision trees. In Proceedings of the 2004 ACM Symposium on Applied Computing, 2004.
[17] Santosh Vempala and Grant Wang. A spectral algorithm for learning mixture models. Journal of Computer and System Sciences, 68(4):841–860, 2004.
[18] Kilian Q. Weinberger and Lawrence K. Saul. Distance metric learning for large margin nearest neighbor classification. Journal of Machine Learning Research, 10:207–244, 2009.
[19] Yi Wu and Edward Y. Chang. Distance-function design and fusion for sequence data. In Proceedings of the 2004 ACM International Conference on Information and Knowledge Management, 2004.
[20] Xiaopeng Xi, Eamonn J. Keogh, Christian R. Shelton, Li Wei, and Chotirat Ann Ratanamahatana. Fast time series classification using numerosity reduction. In Proceedings of the 23rd International Conference on Machine Learning, 2006.
Multilinear Dynamical Systems for Tensor Time Series
Mark Rogers
Lei Li
Stuart Russell
EECS Department, University of California, Berkeley
[email protected], {leili,russell}@cs.berkeley.edu
Abstract
Data in the sciences frequently occur as sequences of multidimensional arrays
called tensors. How can hidden, evolving trends in such data be extracted while
preserving the tensor structure? The model that is traditionally used is the linear
dynamical system (LDS) with Gaussian noise, which treats the latent state and
observation at each time slice as a vector. We present the multilinear dynamical
system (MLDS) for modeling tensor time series and an expectation-maximization
(EM) algorithm to estimate the parameters. The MLDS models each tensor observation in the time series as the multilinear projection of the corresponding member
of a sequence of latent tensors. The latent tensors are again evolving with respect
to a multilinear projection. Compared to the LDS with an equal number of parameters, the MLDS achieves higher prediction accuracy and marginal likelihood for
both artificial and real datasets.
1 Introduction
A tenet of mathematical modeling is to faithfully match the structural properties of the data; yet, on occasion, the available tools are inadequate to perform the task. This scenario is especially common when the data are tensors, i.e., multidimensional arrays: vector and matrix models are fitted to them without justification. This is, perhaps, due to the lack of an agreed-upon tensor model. There are many examples that seem to require such a model: The spatiotemporal grid of atmospheric data in climate modeling is a time series of n × m × l tensors, where n, m, and l are the numbers of latitude, longitude, and elevation grid points. If k measurements are made (e.g., temperature, humidity, and wind speed for k = 3), then a time series of n × m × l × k tensors is constructed. The daily high, low, opening, closing, adjusted closing, and volume of the stock prices of n multiple companies comprise a time series of 6 × n tensors. A grayscale video sequence is a two-dimensional tensor time series because each frame is a two-dimensional array of pixels.
Several queries can be made when one is presented with a tensor time series. As with any time series, a forecast of future data may be requested. For climate data, successful prediction may spell out whether the overall ocean temperatures will increase. Prediction of stock prices may not only inform investors but also help to stabilize the economy and prevent market collapse. The relationships between particular subsets of tensor elements could be of significance. How does the temperature of the ocean at 8°N, 165°E affect the temperature at 5°S, 125°W? For stock price data, one may investigate how the stock prices of electric car companies affect those of oil companies. For a video sequence, one might expect adjacent pixels to be more correlated than those far away from each other. Another way to describe the relationships among tensor elements is in terms of their covariances. Equipped with a tabulation of the covariances, one may read off how a given tensor element affects others. Later in this paper, we will define a tensor time series model and a covariance tensor that permit the modeling of general noise relationships among tensor elements.
More formally, a tensor X ∈ R^{I_1×···×I_M} is a multidimensional array with elements that can each be indexed by a vector of positive integers. That is, every element X_{i_1···i_M} ∈ R is uniquely addressed by a vector (i_1, ..., i_M) such that 1 ≤ i_m ≤ I_m for all m. Each of the M dimensions of X is called a mode and represents a particular component of the data. The simplest tensors are vectors and matrices: vectors are tensors with only a single mode, while matrices are tensors with two modes. We will consider the tensor time series, which is an ordered, finite collection of tensors that all share the same dimensionality. In practice, each member of an observed tensor time series reflects the state of a dynamical system that is measured at discrete epochs.
We propose a novel model for tensor time series: the multilinear dynamical system (MLDS). The
MLDS explicitly incorporates the dynamics, noise, and tensor structure of the data by juxtaposing
concepts in probabilistic graphical models and multilinear algebra. Specifically, the MLDS generalizes the states of the linear dynamical system (LDS) to tensors via a probabilistic variant of the
Tucker decomposition. The LDS tracks latent vector states and observed vector sequences; this
permits forecasting, estimation of latent states, and modeling of noise but only for vector objects.
Meanwhile, the Tucker decomposition of a single tensor computes a latent "core" tensor but has
no dynamics or noise capabilities. Thus, the MLDS achieves the best of both worlds by uniting
the two models in a common framework. We show that the MLDS, in fact, generalizes LDS and
other well-known vector models to tensors of arbitrary dimensionality. In our experiments on both
synthetic and real data, we demonstrate that the MLDS outperforms the LDS with an equal number
of parameters.
2 Tensor algebra
Let N be the set of all positive integers and R be the set of all real numbers. Given I ∈ N^M, where M ∈ N, we assemble a tensor-product space R^{I_1×···×I_M}, which will sometimes be written as R^I = R^{(I_1,...,I_M)} for shorthand. Then a tensor X ∈ R^{I_1×···×I_M} is an element of a tensor-product space. A tensor X may be referenced by either a full vector (i_1, ..., i_M) or by a subvector, using the • symbol to indicate coordinates that are not fixed. For example, let X ∈ R^{I_1×I_2×I_3}. Then X_{i_1 i_2 i_3} is a scalar, X_{• i_2 i_3} ∈ R^{I_1} is the vector obtained by setting the second and third coordinates to i_2 and i_3, and X_{• • i_3} ∈ R^{I_1×I_2} is the matrix obtained by setting the third coordinate to i_3. The concatenation of two M-dimensional vectors I = (I_1, ..., I_M) and J = (J_1, ..., J_M) is given by IJ = (I_1, ..., I_M, J_1, ..., J_M), a vector with 2M entries.
Let X ∈ R^{I_1×···×I_M}, M ∈ N. The vectorization vec(X) ∈ R^{I_1···I_M} is obtained by shaping the tensor into a vector. In particular, the elements of vec(X) are given by vec(X)_k = X_{i_1···i_M}, where k = 1 + Σ_{m=1}^M Π_{n=1}^{m−1} I_n (i_m − 1). For example, if X ∈ R^{2×3×2} is given by

    X_{••1} = [1 3 5; 2 4 6],   X_{••2} = [7 9 11; 8 10 12],

then vec(X) = (1 2 3 4 5 6 7 8 9 10 11 12)^T.
Let I, J ∈ N^M, M ∈ N. The matricization mat(A) ∈ R^{(I_1···I_M)×(J_1···J_M)} of a tensor A ∈ R^{IJ} is given by mat(A)_{kl} = A_{i_1···i_M j_1···j_M}, where k = 1 + Σ_{m=1}^M Π_{n=1}^{m−1} I_n (i_m − 1) and l = 1 + Σ_{m=1}^M Π_{n=1}^{m−1} J_n (j_m − 1). The matricization "flattens" a tensor into a matrix. For example, define A ∈ R^{2×2×2×2} by

    A_{••11} = [1 3; 2 4],   A_{••21} = [5 7; 6 8],   A_{••12} = [9 11; 10 12],   A_{••22} = [13 15; 14 16].

Then we have

    mat(A) = [ 1  5   9  13
               2  6  10  14
               3  7  11  15
               4  8  12  16 ].
The vec and mat operators put tensors in bijective correspondence with vectors and matrices. To define the inverse of each of these operators, a reference must be made to the dimensionality of the original tensor. In other words, given X ∈ R^I and A ∈ R^{IJ}, where I, J ∈ N^M, M ∈ N, we have X = vec^{-1}_I(vec(X)) and A = mat^{-1}_{IJ}(mat(A)).
Let I, J ∈ N^M, M ∈ N. The factorization of a tensor A ∈ R^{IJ} is given by A_{i_1···i_M j_1···j_M} = Π_{m=1}^M A^{(m)}_{i_m j_m}, where A^{(m)} ∈ R^{I_m×J_m} for all m. The factorization exponentially reduces the number of parameters needed to express A from Π_{m=1}^M I_m J_m to Σ_{m=1}^M I_m J_m. In matrix form, we have mat(A) = A^{(M)} ⊗ A^{(M−1)} ⊗ ··· ⊗ A^{(1)}, where ⊗ is the Kronecker matrix product [1]. Note that tensors in R^{IJ} are not factorizable in general [2].
The product A ~ X of two tensors A ∈ R^{IJ} and X ∈ R^J, where I, J ∈ N^M, M ∈ N, is given by (A ~ X)_{i_1···i_M} = Σ_{j_1···j_M} A_{i_1···i_M j_1···j_M} X_{j_1···j_M}. The tensor A is called a multilinear operator when it appears in a tensor product as above. The product is only defined if the dimensionalities of the last M modes of A match the dimensionalities of X. Note that this tensor product generalizes the standard matrix-vector product in the case M = 1.

We shall primarily work with tensors in their vector and matrix representations. Hence, we appeal to the following

Lemma 1. Let I, J ∈ N^M, M ∈ N, A ∈ R^{IJ}, X ∈ R^J. Then

    vec(A ~ X) = mat(A) vec(X).                                   (1)

Furthermore, if A is factorizable with matrices A^{(m)}, then

    vec(A ~ X) = [A^{(M)} ⊗ ··· ⊗ A^{(1)}] vec(X).                (2)

Proof. Let k = 1 + Σ_{m=1}^M Π_{n=1}^{m−1} I_n (i_m − 1) and l = 1 + Σ_{m=1}^M Π_{n=1}^{m−1} J_n (j_m − 1) for some (j_1, ..., j_M). We have

    vec(A ~ X)_k = Σ_{j_1···j_M} A_{i_1···i_M j_1···j_M} X_{j_1···j_M} = Σ_l mat(A)_{kl} vec(X)_l = (mat(A) vec(X))_k,

which holds for all 1 ≤ i_m ≤ I_m, 1 ≤ m ≤ M. Thus, (1) holds. To prove (2), we express mat(A) as the Kronecker product of the M matrices A^{(1)}, ..., A^{(M)}.
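Lemma 1 is easy to check numerically. The sketch below (illustrative dimensions; NumPy assumed) builds a factorizable A with M = 2, applies the product ~ by direct contraction, and compares against both mat(A) vec(X) and the Kronecker form:

```python
import numpy as np

rng = np.random.default_rng(0)
I, J = (2, 3), (4, 5)                      # M = 2 modes
A1 = rng.normal(size=(I[0], J[0]))         # A^(1): 2 x 4
A2 = rng.normal(size=(I[1], J[1]))         # A^(2): 3 x 5
X = rng.normal(size=J)

# Factorizable A: A_{i1 i2 j1 j2} = A1_{i1 j1} * A2_{i2 j2}.
A = np.einsum("ij,kl->ikjl", A1, A2)
# The product A ~ X by its definition: contract the last M modes with X.
AX = np.einsum("ikjl,jl->ik", A, X)

# Lemma 1: vec(A ~ X) = mat(A) vec(X) = (A2 kron A1) vec(X).
vec_AX = AX.reshape(-1, order="F")
mat_A = A.reshape(np.prod(I), np.prod(J), order="F")
lhs = mat_A @ X.reshape(-1, order="F")
rhs = np.kron(A2, A1) @ X.reshape(-1, order="F")
```

Note the Kronecker factors appear in reverse mode order, matching mat(A) = A^{(M)} ⊗ ··· ⊗ A^{(1)}.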
The Tucker decomposition can be expressed using the product ~ defined above. The Tucker decomposition models a given tensor X ∈ R^{I_1×···×I_M} as the result of a multilinear transformation that is applied to a latent core tensor Z ∈ R^{J_1×···×J_M}: X = A ~ Z. The multilinear operator A is a factorizable tensor such that mat(A) = A^{(M)} ⊗ A^{(M−1)} ⊗ ··· ⊗ A^{(1)}, where A^{(1)}, ..., A^{(M)} are projection matrices (Figure 1). The canonical decomposition/parallel factors (CP) decomposition is a special case of the Tucker decomposition in which Z is "superdiagonal", i.e., J_1 = ··· = J_M = R and only the Z_{j_1···j_M} such that j_1 = ··· = j_M can be nonzero. The CP decomposition expresses X as a sum X = Σ_{r=1}^R u_r^{(1)} ◦ ··· ◦ u_r^{(M)}, where u_r^{(m)} ∈ R^{I_m} for all m and r, and ◦ denotes the tensor outer product [3].

Figure 1: The Tucker decomposition of a third-order tensor X.
To illustrate, consider the case M = 2 and let X = A ~ Z, where X ∈ R^{n×m} and Z ∈ R^{p×q}. Then X = AZB^T, where mat(A) = B ⊗ A. If p ≤ n and q ≤ m, then Z is a dimensionality-reduced version of X: the matrix A increases the number of rows of Z from p to n via left-multiplication, while the matrix B increases the number of columns of Z from q to m via right-multiplication. To reconstruct X, we simply apply A ~ Z. See Figure 1 for an illustration of the case M = 3.
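For this M = 2 case, the identity mat(A) = B ⊗ A is the familiar vec(AZB^T) = (B ⊗ A) vec(Z). A quick sketch with illustrative dimensions:

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, p, q = 4, 5, 2, 3
A = rng.normal(size=(n, p))    # expands rows:    p -> n
B = rng.normal(size=(m, q))    # expands columns: q -> m
Z = rng.normal(size=(p, q))    # low-dimensional core

X = A @ Z @ B.T                                     # the M = 2 reconstruction
X_vec = np.kron(B, A) @ Z.reshape(-1, order="F")    # same thing, vectorized
```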
3 Random tensors
Given I ∈ N^M, M ∈ N, we define a random tensor X ∈ R^{I_1×···×I_M} as follows. Suppose vec(X) is normally distributed with expectation vec(U) and positive-definite covariance mat(S), where U ∈ R^I and S ∈ R^{II}. Then we say that X has the normal distribution with expectation U ∈ R^I and covariance S ∈ R^{II} and write X ∼ N(U, S). The definition of the normal distribution on tensors can thus be restated more succinctly as

    X ∼ N(U, S)  ⟺  vec(X) ∼ N(vec U, mat S).                     (3)

Our formulation extends the normal distribution defined in [4], which is restricted to symmetric, second-order tensors.
We will make use of an important special case of the normal distribution defined on tensors: the multilinear Gaussian distribution. Let I, J ∈ N^M, M ∈ N, and suppose X ∈ R^I and Z ∈ R^J are jointly distributed as

    Z ∼ N(U, G)  and  X | Z ∼ N(C ~ Z, S),                        (4)

where C ∈ R^{IJ}. The marginal distribution of X and the posterior distribution of Z given X are given by the following result.

Lemma 2. Let I, J ∈ N^M, M ∈ N, and suppose the joint distribution of random tensors X ∈ R^I and Z ∈ R^J is given by (4). Then the marginal distribution of X is

    X ∼ N(C ~ U, C ~ G ~ C^T + S),                                (5)

where C^T ∈ R^{JI} and C^T_{j_1···j_M i_1···i_M} = C_{i_1···i_M j_1···j_M}. The conditional distribution of Z given X is

    Z | X ∼ N(Û, Ĝ),                                              (6)

where Û = vec^{-1}_J(μ + W(vec(X) − mat(C)μ)), Ĝ = mat^{-1}_{JJ}(Γ − W mat(C) Γ), μ = vec(U), Γ = mat(G), Σ = mat(S), and W = Γ mat(C)^T [mat(C) Γ mat(C)^T + Σ]^{-1}.
Proof. Lemma 1, (3), and (4) imply that the vectorizations of Z and X given Z follow vec(Z) ∼ N(μ, Γ) and vec(X) | vec(Z) ∼ N(mat(C) vec(Z), Σ). By the properties of the multivariate normal distribution, the marginal distribution of vec(X) and the conditional distribution of vec(Z) given vec(X) are vec(X) ∼ N(mat(C) vec(U), mat(C) Γ mat(C)^T + Σ) and vec(Z) | vec(X) ∼ N(vec(Û), mat(Ĝ)). The associativity of ~ implies that mat(C ~ G ~ C^T) = mat(C) Γ mat(C)^T. Finally, we apply Lemma 1 once more to obtain (5) and (6).
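The posterior in (6) is ordinary Gaussian conditioning once everything is vectorized and matricized. A sketch with illustrative dimensions (Γ and Σ are chosen as simple scaled identities here, purely for the example):

```python
import numpy as np

rng = np.random.default_rng(2)
p, q = 6, 3                                 # vectorized sizes of X and Z

# Work directly with the vectorized/matricized quantities of Lemma 2.
mu = rng.normal(size=q)                     # mu    = vec(U)
G = np.eye(q)                               # Gamma = mat(G)
C = rng.normal(size=(p, q))                 # mat(C)
S = 0.5 * np.eye(p)                         # Sigma = mat(S)

z = rng.multivariate_normal(mu, G)          # vec(Z)
x = rng.multivariate_normal(C @ z, S)       # vec(X) | vec(Z)

# Posterior of vec(Z) given vec(X), following (6):
W = G @ C.T @ np.linalg.inv(C @ G @ C.T + S)
post_mean = mu + W @ (x - C @ mu)
post_cov = G - W @ C @ G
```

Reshaping `post_mean` and `post_cov` with vec^{-1}_J and mat^{-1}_{JJ} would recover the tensor-valued Û and Ĝ.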
4 Multilinear dynamical system

The aim is to develop a model of a tensor time series X_1, ..., X_N that takes into account tensor structure. In defining the MLDS, we build upon the results of previous sections by treating each X_n as a random tensor and relating the model components with multilinear transformations. When the MLDS components are vectorized and matricized, an LDS with factorized transition and projection matrices is revealed. Hence, the strategy for fitting the MLDS is to vectorize each X_n, run the expectation-maximization (EM) algorithm of the LDS for all components but the matricized transition and projection tensors (which are learned via an alternative gradient method), and finally convert all model components back to tensor form.
4.1 Definition

Let I, J ∈ N^M, M ∈ N. The MLDS model consists of a sequence Z_1, ..., Z_N of latent tensors, where Z_n ∈ R^{J_1×···×J_M} for all n. Each latent tensor Z_n emits an observation X_n ∈ R^{I_1×···×I_M}. The system is initialized by a latent tensor Z_1 distributed as

    Z_1 ∼ N(U_0, Q_0).                                            (7)

Given Z_n, 1 ≤ n ≤ N − 1, we generate Z_{n+1} according to the conditional distribution

    Z_{n+1} | Z_n ∼ N(A ~ Z_n, Q),                                (8)

where Q is the conditional covariance shared by all Z_n, 2 ≤ n ≤ N, and A is the transition tensor which describes the dynamics of the evolving sequence Z_1, ..., Z_N. The transition tensor A is factorized into M matrices A^{(m)}, each of which acts on a mode of Z_n. In matrix form, we have mat(A) = A^{(M)} ⊗ ··· ⊗ A^{(1)}. To each Z_n there corresponds an observation X_n generated by

    X_n | Z_n ∼ N(C ~ Z_n, R),                                    (9)
where R is the covariance shared by all X_n and C is the projection tensor which multilinearly transforms the latent tensor Z_n. Like the transition tensor A, the projection tensor C is factorizable, i.e., mat(C) = C^{(M)} ⊗ ··· ⊗ C^{(1)}. See Figure 2 for an illustration of the MLDS.

By vectorizing each X_n and Z_n, the MLDS becomes an LDS with factorized transition and projection matrices mat(A) and mat(C). For the LDS, the transition and projection operators are not factorizable in general [2]. The factorizations of A and C for the MLDS not only allow for a generalized dimensionality reduction of tensors but exponentially reduce the number of parameters of the transition and projection operators from |A_LDS| + |C_LDS| = Π_{m=1}^M J_m² + Π_{m=1}^M I_m J_m down to |A_MLDS| + |C_MLDS| = Σ_{m=1}^M J_m² + Σ_{m=1}^M I_m J_m.

Figure 2: Schematic of the MLDS with three modes.
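To make the generative process (7)-(9) concrete, the following sketch simulates a small MLDS with M = 2 and tallies the parameter savings of the factorized operators discussed above. All dimensions, factor matrices, and noise scales are illustrative stand-ins, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
I, J = (4, 3), (2, 2)      # observed and latent dimensionalities (M = 2)
N = 50                     # length of the series

# Factor matrices of the transition and projection tensors (illustrative).
A1, A2 = 0.9 * np.eye(J[0]), 0.9 * np.eye(J[1])
C1 = rng.normal(size=(I[0], J[0]))
C2 = rng.normal(size=(I[1], J[1]))

def mlprod(M1, M2, Z):
    """Factorizable multilinear operator for M = 2: mode 1 by M1, mode 2 by M2."""
    return M1 @ Z @ M2.T

Z = rng.normal(size=J)                                      # Z_1 ~ N(0, Id)
Xs = []
for _ in range(N):
    Xs.append(mlprod(C1, C2, Z) + 0.1 * rng.normal(size=I))  # eq. (9)
    Z = mlprod(A1, A2, Z) + 0.1 * rng.normal(size=J)         # eq. (8)
Xs = np.stack(Xs)

# Parameter counts of the transition + projection operators:
lds_params = int(np.prod(J)) ** 2 + int(np.prod(I)) * int(np.prod(J))
mlds_params = sum(j * j for j in J) + sum(i * j for i, j in zip(I, J))
```

Even at these tiny sizes the factorized operators use 22 parameters against 64 for the unfactorized LDS operators; the gap grows exponentially with M.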
4.2 Parameter estimation

Given a sequence of observations X_1, ..., X_N, we wish to fit the MLDS model by estimating θ = (U_0, Q_0, Q, A, R, C). Because the MLDS model contains latent variables Z_n, we cannot directly maximize the likelihood of the data with respect to θ. The EM algorithm circumvents this difficulty by iteratively updating (E(Z_1), ..., E(Z_N)) and θ in an alternating manner until the expected, complete likelihood of the data converges [5]. The normal distribution of tensors (3) will facilitate matrix and vector computations rather than compel us to work directly with tensors. In particular, we can express the complete likelihood of the MLDS model as

    L(θ | Z_1, X_1, ..., Z_N, X_N) = L(vec θ | vec Z_1, vec X_1, ..., vec Z_N, vec X_N),   (10)

where vec θ = (vec U_0, mat Q_0, mat Q, mat A, mat R, mat C). It follows that the vectorized MLDS is an LDS that inherits the Kalman filter updates for the E-step and the M-step for all parameters except mat A and mat C. See [6] for the EM algorithm of the LDS.
Because A and C are factorizable, an alternative to the standard LDS updates is required. We locally maximize the expected, complete log-likelihood by computing the gradient with respect to the vector v = [vec(C^{(1)})^T ··· vec(C^{(M)})^T]^T ∈ R^{Σ_m I_m J_m}, which is obtained by concatenating the vectorizations of the projection matrices C^{(m)}. The expected, complete log-likelihood (with terms constant with respect to C deleted) can be written as

    l(v) = tr{Ψ[mat(C) Φ mat(C)^T − 2 Λ mat(C)^T]},               (11)

where Ψ = mat(R)^{-1}, Φ = Σ_{n=1}^N E(vec Z_n vec Z_n^T), and Λ = Σ_{n=1}^N vec(X_n) E(vec Z_n)^T. Now let k correspond to some C^{(m)}_{ij} and let Δ_{ij} ∈ R^{I_m×J_m} be the indicator matrix that is one at the (i, j)th entry and zero elsewhere. The gradient ∇l(v) ∈ R^{Σ_m I_m J_m} is given elementwise by

    ∇l(v)_k = 2 tr{Ψ ∂_{v_k} mat(C) [Φ mat(C)^T − Λ^T]},          (12)
where ?vk mat(C) = C (M ) ? ? ? ? ? ?ij ? ? ? ? ? C (1) [1]. If m = M , then we can exploit
Q the sparsity
of ?vk mat(C) by computing the trace of the product of two submatrices each with n6=M In rows
Q
and n6=M Jn columns:
h
iT
?l(v)k = 2tr C (M ?1) ? ? ? ? ? C (1) ?ij ,
(13)
Q
where ?ij is the submatrix of ? [mat(C) ? ? ?] with row indices (1, . . . , n6=M In ) shifted by
Q
Q
Q
n6=M In (i ? 1) and column indices (1, . . . ,
n6=M Jn ) shifted by
n6=M Jn (j ? 1). If m 6= M ,
then the ordering of the modes can be replaced by 1, . . . , m ? 1, m + 1, . . . , M, m and the rows and
columns of ? [mat(C) ? ? ?] can be permuted accordingly. In other words, the original tensors Xn
are ?rotated? so that the mth mode becomes the M th mode.
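The gradient in (12) can be sanity-checked against central finite differences for M = 2, using the objective l(v) = tr{Ψ mat(C)[Φ mat(C)^T − 2Υ^T]} from (11). This is an illustrative sketch: Ψ, Φ, and Υ below are synthetic stand-ins for the expected-statistics matrices, not quantities learned by EM.

```python
import numpy as np

rng = np.random.default_rng(0)
I, J = (3, 4), (2, 3)                    # mode dimensions
C1 = rng.standard_normal((I[0], J[0]))   # C^(1)
C2 = rng.standard_normal((I[1], J[1]))   # C^(2)
p, q = I[0] * I[1], J[0] * J[1]

Psi = np.eye(p)                          # stand-in for mat(R)^(-1)
G = rng.standard_normal((q, q))
Phi = G @ G.T                            # symmetric stand-in for sum_n E(vec Z_n vec Z_n^T)
Ups = rng.standard_normal((p, q))        # stand-in for sum_n vec(X_n) E(vec Z_n)^T

def l(C1, C2):
    C = np.kron(C2, C1)                  # mat(C) = C^(2) kron C^(1)
    return np.trace(Psi @ C @ (Phi @ C.T - 2 * Ups.T))

def grad_C1(i, j):
    """Analytic gradient, eq. (12), for entry (i, j) of C^(1)."""
    Delta = np.zeros_like(C1)
    Delta[i, j] = 1.0
    dC = np.kron(C2, Delta)              # gradient of mat(C) wrt C^(1)_ij
    C = np.kron(C2, C1)
    return 2.0 * np.trace(Psi @ dC @ (Phi @ C.T - Ups.T))

# central finite-difference check over every entry of C^(1)
eps = 1e-6
for i in range(I[0]):
    for j in range(J[0]):
        Cp, Cm = C1.copy(), C1.copy()
        Cp[i, j] += eps
        Cm[i, j] -= eps
        num = (l(Cp, C2) - l(Cm, C2)) / (2 * eps)
        assert abs(num - grad_C1(i, j)) < 1e-4 * (1 + abs(num))
```

Since l is quadratic in the entries of C^(1), the central difference is exact up to floating-point roundoff, so the analytic and numeric gradients agree tightly.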
The M-step for A can be computed in a manner analogous to that of C by replacing I by J, replacing mat(C) by mat(A), and substituting v = [vec(A^(1))^T ··· vec(A^(M))^T]^T, Ψ = mat(Q)^(−1), Φ = Σ_{n=1}^{N−1} E[vec(Z_n) vec(Z_n)^T], and Υ = Σ_{n=1}^{N−1} E[vec(Z_{n+1}) vec(Z_n)^T] into (11).
4.3 Special cases of the MLDS and their relationships to existing models
It is clear that the MLDS is exactly an LDS in the case M = 1. Certain constraints on the MLDS also lead to generalizations of factor analysis, probabilistic principal components analysis (PPCA), the CP decomposition, and the matrix factorization model of collaborative filtering (MF). Let p = ∏_{m=1}^M I_m and q = ∏_{m=1}^M J_m. If A = 0, U_0 = 0, and Q_0 = Q, then the X_n of the MLDS become independent and identically distributed draws from the multilinear Gaussian distribution. Setting mat(Q) = Id_q and mat(R) to a diagonal matrix results in a model that reduces to factor analysis in the case M = 1. A further constraint on R, mat(R) = σ² Id_p, yields a multilinear extension of PPCA. Removing the constraints on R and forcing mat(Z_n) = Id_q for all n results in a probabilistic CP decomposition in which the tensor elements have general covariances. Finally, the constraint M = 2 yields a probabilistic MF.
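For the multilinear PPCA case (A = 0, U_0 = 0, mat(Q) = Id_q, mat(R) = σ² Id_p), the marginal covariance of vec X_n is mat(C) mat(C)^T + σ² Id_p, and the Kronecker structure of mat(C) carries over to the covariance, since (C^(2) ⊗ C^(1))(C^(2) ⊗ C^(1))^T = (C^(2) C^(2)T) ⊗ (C^(1) C^(1)T). A small check (an illustrative sketch with random factors):

```python
import numpy as np

rng = np.random.default_rng(3)
C1 = rng.standard_normal((4, 2))          # C^(1)
C2 = rng.standard_normal((5, 3))          # C^(2)
sigma2 = 0.5
p = 4 * 5

C = np.kron(C2, C1)                       # mat(C) for M = 2
cov = C @ C.T + sigma2 * np.eye(p)        # marginal covariance of vec X_n

# the Kronecker factorization survives the product C C^T:
assert np.allclose(C @ C.T, np.kron(C2 @ C2.T, C1 @ C1.T))
```

This is one concrete sense in which the factored MLDS retains per-mode structure that a vectorized model discards.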
5 Experimental results
To determine how well the MLDS could model tensor time series, the fits of the MLDS were compared to those of the LDS for both synthetic and real data. To avoid unnecessary complexity and highlight the difference between the two models (namely, how the transition and projection operators are defined), the noises in the models are isotropic. The MLDS parameters are initialized so that U_0 is drawn from the standard normal distribution, the matricizations of the covariance tensors are identity matrices, and the columns of each A^(m) and C^(m) are the first J_m eigenvectors of singular-value-decomposed matrices with entries drawn from the standard normal distribution. The LDS parameters are initialized in the same way by setting M = 1.
The prediction error and convergence in likelihood were measured for each dataset. For the synthetic dataset, model complexity was also measured. The prediction error ε_n^M of a given model M for the nth member of a tensor time series X_1, ..., X_N is the relative Euclidean distance ε_n^M = ||X_n − X_n^M|| / ||X_n||, where ||·|| = ||vec(·)||_2. Each estimate X_n^M is given by X_n^M = vec^(−1)[mat(C^M) mat(A^M)^(n−N_train) vec E(Z^M_(N_train))], where E(Z^M_(N_train)) is the estimate of the latent state of the last member of the training sequence. The convergence in likelihood of each model is determined by monitoring the marginal likelihood as the number of EM iterations increases. Each model
is allowed to run until the difference between consecutive log-likelihood values is less than 0.1%
of the latter value. Lastly, the model complexity is determined by observing how the likelihood
and prediction error of each model vary as the model size |Θ^M| increases. Aside from the model
complexity experiment, the LDS latent dimensionality is always set to the smallest value such that
the number of parameters of the LDS is greater than or equal to that of the MLDS.
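The forecasting rule and error metric described above can be sketched in matricized form as follows (a simplified sketch; the function and variable names are ours, not the paper's):

```python
import numpy as np

def prediction_errors(X_test, C, A, z_last):
    """Relative errors ||X_n - X_n^M|| / ||X_n|| for the test slices.

    X_test: (N_test, p) array of vectorized test tensors,
    C: (p, q) matricized projection, A: (q, q) matricized transition,
    z_last: (q,) latent-state estimate at the end of training.
    """
    errs, z = [], z_last
    for x in X_test:
        z = A @ z                  # mat(A)^(n - N_train) applied step by step
        x_hat = C @ z              # vectorized prediction vec(X_n^M)
        errs.append(np.linalg.norm(x - x_hat) / np.linalg.norm(x))
    return errs
```

On noiseless data generated by the same (A, C), these errors are zero up to floating point, which is a useful check before applying the metric to learned models.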
5.1 Results for synthetic data
The synthetic dataset is an MLDS with dimensions I = (7, 11), J = (3, 5), and N = 1100 and
parameters initialized as described in the first paragraph of this section. For the prediction error and
convergence analyses, the latent dimensionality of the MLDS for fitting was set to J = (3, 5) as
well. Each model was trained on the first 1000 elements and tested on the last 100 elements of the
sequence. The results are shown in Figure 3. According to Figure 3(a), the prediction error of MLDS
matches that of the true model and is below that of the LDS. Furthermore, the MLDS converges to
the likelihood of the true model, which is greater than that of the LDS (see Figure 3(b)). As for
model complexity, the model size needed for the MLDS to match the likelihood and prediction error
of the true model is much smaller than that of the LDS (see Figure 3(c) and (d)).
5.2 Results for real data
We consider the following datasets:
SST: A 5-by-6 grid of sea-surface temperatures from 5°N, 180°W to 5°S, 110°W recorded hourly from 7:00PM on 4/26/94 to 3:00AM on 7/19/94, yielding 2000 epochs [7].
Tesla: Opening, closing, high, low, and volume of the stock prices of 12 car and oil companies (e.g., Tesla Motors Inc.), from 6/29/10 to 5/10/13 (724 epochs).
NASDAQ-100: Opening, closing, adjusted-closing, high, low, and volume for 20 randomly-chosen NASDAQ-100 companies, from 1/1/05 to 12/31/09 (1259 epochs).
[Figure 3 plots omitted: panels (a)-(d) compare LDS, MLDS, and the true model.]
Figure 3: Results for synthetic data. Prediction error ε_n^M = ||X_n − X_n^M|| / ||X_n|| is shown as a function of the time slice n in (a), convergence of marginal log-likelihood is shown in (b), marginal log-likelihood as a function of model size is shown in (c), and cumulative prediction error Σ_{n=N_train+1}^{N_train+N_test} ε_n^M as a function of model size is shown in (d) for LDS, MLDS, and the true model.
[Figure 4 plots omitted: panels (a) SST, (b) Tesla, (c) NASDAQ-100, (d) Video show prediction error; panels (e) SST, (f) Tesla, (g) NASDAQ-100, (h) Video show log-likelihood versus EM iterations.]
Figure 4: Results for LDS and MLDS applied to real data. The first row corresponds to prediction error ε_n^M as a function of the time slice n, while the second corresponds to convergence in log-likelihood. Sea-surface temperature, Tesla, NASDAQ-100, and Video results are given by the respective columns.
Video: 1171 grayscale frames of ocean surf during low tide. This dataset was chosen because it
records a quasiperiodic natural scene.
For each dataset, MLDS achieved higher prediction accuracy and likelihood than LDS. For the SST
dataset, each model was trained on the first 1800 epochs; occlusions were filled in using linear
interpolation and refined with an extra step during the learning that replaced the estimates of the
occluded values by the conditional expectation given all the training data. For results when the
MLDS dimensionality is set to (3, 3), see Figure 4(a) and (e). For the Tesla dataset, each time series ((X_1)_ij, ..., (X_N)_ij) was normalized prior to learning by subtracting the mean and dividing by
the standard deviation. Each model was trained on the first 700 epochs. See Figure 4(b) and (f) for
results when the MLDS dimensionality is set to (5, 2). For the NASDAQ-100 dataset, each model
was trained on the first 1200 epochs. The data were normalized in the same way as with the Tesla
dataset. For results when the MLDS dimensionality is set to (10, 3), see Figure 4(c) and (g). For the
Video dataset, a 100-by-100 patch was selected, spatially downsampled to a 10-by-10 patch for each
frame, and normalized as before. Each model was trained on the first 1000 frames. See Figure 4(d)
and (h) for results when the MLDS dimensionality is set to (5, 5).
6 Related work
Several existing models can be fitted to tensor time series. If each tensor is "vectorized", i.e., reexpressed as a vector so that each element is indexed by a single positive integer, then an LDS can be
applied [8, 6]. An obvious limitation of the LDS for modeling tensor time series is that the tensor
structure is not preserved. Thus, it is less clear how the latent vector space of the LDS relates to the
various tensor modes. Further, one cannot postulate a latent dimension for each mode as with the
MLDS. The net result, as we have shown, is that the LDS requires more parameters than the MLDS
to model a given system (assuming it does have tensor structure).
Dynamic tensor analysis (DTA) and Bayesian probabilistic tensor factorization (BPTF) are explicit models of tensor time series [9, 10]. For DTA, a latent, low-dimensional "core" tensor and a set of projection matrices are learned by processing each member X_n ∈ R^(I_1×···×I_M) of the sequence as follows. For each mode m, the tensor is flattened into a matrix X_n^(m) ∈ R^((∏_{k≠m} I_k)×I_m) and then multiplied by its transpose. The result X_n^(m)T X_n^(m) is added to a matrix S^(m) that has accumulated the flattenings of the previous n − 1 tensors. The eigenvalue decomposition UΛU^T of the updated S^(m) is then computed, and the mth projection matrix is given by the first rank(S^(m)) columns of U. After this procedure is carried out for each mode, the core tensor is updated via the multilinear transformation given by the Tucker decomposition. Like the LDS, DTA is a sequential model. An advantage of DTA over the LDS is that the tensor structure of the data is preserved. A disadvantage is that there is no straightforward way to predict future terms of the tensor time series. Another disadvantage is that there is no mechanism that allows for arbitrary noise relationships among the tensor elements. In other words, the noise in the system is assumed to be isotropic.
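The per-mode DTA update just described can be sketched as follows (our own illustrative code, not the authors'; the `ranks` argument plays the role of rank(S^(m))):

```python
import numpy as np

def dta_update(S, X, ranks):
    """One DTA pass for a new tensor X: accumulate per-mode covariance
    matrices and recompute each mode's projection from the top eigenvectors."""
    U = []
    for m in range(X.ndim):
        # flatten mode m: rows enumerate all other modes, columns index mode m
        Xm = np.moveaxis(X, m, -1).reshape(-1, X.shape[m])
        S[m] += Xm.T @ Xm                   # add X^(m)T X^(m) to the accumulator
        w, V = np.linalg.eigh(S[m])         # S[m] is symmetric PSD
        top = np.argsort(w)[::-1][:ranks[m]]
        U.append(V[:, top])                 # leading eigenvectors as the projection
    return S, U
```

The sequential flavor is visible here: S^(m) carries information from all earlier tensors, but nothing in the update propagates state forward in time, which is why DTA cannot forecast.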
Other families of isotropic models have been devised that "tensorize" the time dimension by concatenating the tensors in the time series to yield a single new tensor with an additional temporal
mode. These models include multilinear principal components analysis [11], the memory-efficient
Tucker algorithm [12], and Bayesian tensor analysis [13]. For fitting to data, such models rely on
alternating optimization methods, such as alternating least squares, which are applied to each mode.
BPTF allows for prediction and more general noise modeling than DTA. BPTF is a multilinear extension of collaborative filtering models [14, 15, 16] that concatenates the members of the tensor time series (X_n), X_n ∈ R^(I_1×···×I_M), to yield a higher-order tensor R ∈ R^(I_1×···×I_M×K), where K is the sequence length. Each element of R is independently distributed as R_{i_1···i_M k} ~ N(⟨u^(1)_{i_1}, ..., u^(M)_{i_M}, T_k⟩, α^(−1)), where ⟨·, ..., ·⟩ denotes the tensor inner product and α is a global precision parameter. Bayesian methods are then used to compute the canonical-decomposition/parallel-factors (CP) decomposition of R: R = Σ_{r=1}^R u^(1)_r ∘ ··· ∘ u^(M)_r ∘ T_r, where ∘ is the tensor outer product. Each u^(m)_r is independently drawn from a normal distribution with expectation μ_m and precision matrix Λ_m, while each T_r is recursively drawn from a normal distribution with expectation T_{r−1} and precision matrix Λ_T. The parameters, in turn, have conjugate prior distributions whose posterior distributions are sampled via Markov chain Monte Carlo for model fitting. Though BPTF supports prediction and general noise models, the latent tensor structure is limited.
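The CP structure used by BPTF, R = Σ_r u^(1)_r ∘ ··· ∘ u^(M)_r ∘ T_r, means each element of R is a tensor inner product of factor rows. For M = 2 this is easy to verify directly (a sketch with synthetic factors):

```python
import numpy as np

rng = np.random.default_rng(2)
rank = 3
U1 = rng.standard_normal((4, rank))   # columns are the u_r^(1)
U2 = rng.standard_normal((5, rank))   # columns are the u_r^(2)
T = rng.standard_normal((6, rank))    # one row of time factors per slice

# R = sum_r U1[:, r] outer U2[:, r] outer T[:, r]
R = np.einsum("ir,jr,kr->ijk", U1, U2, T)

# element (i, j, k) equals the tensor inner product <U1[i], U2[j], T[k]>
assert np.isclose(R[1, 2, 3], np.sum(U1[1] * U2[2] * T[3]))
```

The `einsum` subscript string makes the rank-wise sum explicit: every output index pairs one row from each factor matrix.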
Other models with anisotropic noise include probabilistic tensor factorization (PTF) [17], tensor
probabilistic independent component analysis (TPICA) [18], and generalized coupled tensor factorization (GCTF) [19]. As with BPTF, PTF and TPICA utilize the CP decomposition of tensors. PTF
is fit to tensor data by minimizing a heuristic loss function that is expressed as a sum of tensor inner
products. TPICA iteratively flattens the tensor of data, executes a matrix model called probabilistic
ICA (PICA) as a subroutine, and decouples the factor matrices of the CP decomposition that are embedded in the "mixing matrix" of PICA. GCTF relates a collection of tensors by a hidden layer of
disconnected tensors via tensor inner products, drawing analogies to probabilistic graphical models.
7 Conclusion
We have proposed a novel probabilistic model of tensor time series called the multilinear dynamical
system (MLDS), based on a tensor normal distribution. By putting tensors and multilinear operators
in bijective correspondence with vectors and matrices in a way that preserves tensor structure, the
MLDS is formulated so that it becomes an LDS when its components are vectorized and matricized.
In matrix form, the transition and projection tensors can each be written as the Kronecker product of
M smaller matrices and thus yield an exponential reduction in model complexity compared to the
unfactorized transition and projection matrices of the LDS. As noted in Section 4.3, the MLDS generalizes the LDS, factor analysis, PPCA, the CP decomposition, and low-rank matrix factorization.
The results of multiple experiments that assess prediction accuracy, convergence in likelihood, and
model complexity suggest that the MLDS achieves a better fit than the LDS on both synthetic and
real datasets, given that the LDS has the same number of parameters as the MLDS.
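The parameter-matching rule used in the experiments and the reduction in model complexity can be made concrete by counting transition plus projection parameters: the factored MLDS needs Σ_m J_m² + Σ_m I_m J_m entries, versus q² + pq for an LDS with latent dimensionality q. This is a sketch of one such accounting (the full |Θ| also includes noise covariances, omitted here for simplicity):

```python
import math

def mlds_params(I, J):
    # factored transition A^(m): J_m x J_m; factored projection C^(m): I_m x J_m
    return sum(j * j for j in J) + sum(i * j for i, j in zip(I, J))

def smallest_lds_dim(I, J):
    """Smallest LDS latent dimensionality whose q*q + p*q parameters
    reach the MLDS count, mirroring the experimental setup."""
    p, target, q = math.prod(I), mlds_params(I, J), 1
    while q * q + p * q < target:
        q += 1
    return q

# the synthetic-data setting of Section 5.1: I = (7, 11), J = (3, 5)
print(mlds_params((7, 11), (3, 5)))       # 110 factored parameters
print(smallest_lds_dim((7, 11), (3, 5)))  # q = 2 already exceeds that
```

Because p and q are products over modes, the unfactored pq term grows exponentially in M while the factored sums grow only linearly, which is the source of the savings.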
References
[1] Jan R. Magnus and Heinz Neudecker. Matrix Differential Calculus with Applications in Statistics and Econometrics. Wiley, revised edition, 1999.
[2] Vin De Silva and Lek-Heng Lim. Tensor rank and the ill-posedness of the best low-rank approximation problem. SIAM Journal on Matrix Analysis and Applications, 30(3):1084-1127, 2008.
[3] Tamara G. Kolda. Tensor decompositions and applications. SIAM Review, 51(3):455-500, 2009.
[4] Peter J. Basser and Sinisa Pajevic. A normal distribution for tensor-valued random variables: applications to diffusion tensor MRI. IEEE Transactions on Medical Imaging, 22(7):785-794, 2003.
[5] Arthur P. Dempster, Nan M. Laird, and Donald B. Rubin. Maximum likelihood from incomplete data via the EM algorithm. Journal of the Royal Statistical Society. Series B (Methodological), 39(1):1-38, 1977.
[6] Christopher M. Bishop. Pattern Recognition and Machine Learning. Springer, 1st edition, 2006.
[7] NOAA/Pacific Marine Environmental Laboratory. Tropical Atmosphere Ocean Project. http://www.pmel.noaa.gov/tao/data_deliv/deliv.html. Accessed: May 23, 2013.
[8] Zoubin Ghahramani and Geoffrey E. Hinton. Parameter estimation for linear dynamical systems. Technical Report CRG-TR-96-2, University of Toronto Department of Computer Science, 1996.
[9] Jimeng Sun, Dacheng Tao, and Christos Faloutsos. Beyond streams and graphs: dynamic tensor analysis. In Proceedings of the 12th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 374-383. ACM, 2006.
[10] Liang Xiong, Xi Chen, Tzu-Kuo Huang, Jeff Schneider, and Jaime G. Carbonell. Temporal collaborative filtering with Bayesian probabilistic tensor factorization. In Proceedings of SIAM Data Mining, 2010.
[11] Haiping Lu, Konstantinos N. Plataniotis, and Anastasios N. Venetsanopoulos. MPCA: Multilinear principal components analysis of tensor objects. IEEE Transactions on Neural Networks, 19(1), 2008.
[12] Tamara Kolda and Jimeng Sun. Scalable tensor decompositions for multi-aspect data mining. In Eighth IEEE International Conference on Data Mining. IEEE, 2008.
[13] Dacheng Tao, Mingli Song, Xuelong Li, Jialie Shen, Jimeng Sun, Xindong Wu, Christos Faloutsos, and Stephen J. Maybank. Bayesian tensor approach for 3-D face modeling. IEEE Transactions on Circuits and Systems for Video Technology, 18(10):1397-1410, 2008.
[14] Yehuda Koren, Robert Bell, and Chris Volinsky. Matrix factorization techniques for recommender systems. Computer, 42(8):30-37, 2009.
[15] Ruslan Salakhutdinov and Andriy Mnih. Probabilistic matrix factorization. In Advances in Neural Information Processing Systems, volume 20, pages 1257-1264, 2008.
[16] Ruslan Salakhutdinov and Andriy Mnih. Bayesian probabilistic matrix factorization using Markov chain Monte Carlo. In Proceedings of the 25th International Conference on Machine Learning. ACM, 2008.
[17] Cyril Goutte and Massih-Reza Amini. Probabilistic tensor factorization and model selection. In Tensors, Kernels, and Machine Learning (TKLM 2010), pages 1-4, 2010.
[18] Christian F. Beckmann and Stephen M. Smith. Tensorial extensions of independent component analysis for multisubject FMRI analysis. Neuroimage, 25(1):294-311, 2005.
[19] Y. Kenan Yilmaz, A. Taylan Cemgil, and Umut Simsekli. Generalized coupled tensor factorization. In Neural Information Processing Systems. MIT Press, 2011.
What do row and column marginals reveal about
your dataset?
Behzad Golshan
Boston University
[email protected]
John W. Byers
Boston University
[email protected]
Evimaria Terzi
Boston University
[email protected]
Abstract
Numerous datasets ranging from group memberships within social networks to
purchase histories on e-commerce sites are represented by binary matrices. While
this data is often either proprietary or sensitive, aggregated data, notably row and
column marginals, is often viewed as much less sensitive, and may be furnished
for analysis. Here, we investigate how these data can be exploited to make inferences about the underlying matrix H. Instead of assuming a generative model for
H, we view the input marginals as constraints on the dataspace of possible realizations of H and compute the probability density function of particular entries
H(i, j) of interest. We do this for all the cells of H simultaneously, without generating realizations, but rather via implicitly sampling the datasets that satisfy the
input marginals. The end result is an efficient algorithm with asymptotic running
time the same as that required by standard sampling techniques to generate a single dataset from the same dataspace. Our experimental evaluation demonstrates
the efficiency and the efficacy of our framework in multiple settings.
1 Introduction
Online marketplaces such as Walmart, Netflix, and Amazon store information about their customers
and the products they purchase in binary matrices. Likewise, information about the groups that social
network users participate in, the ?Likes? they make, and the other users they ?follow? can also be
represented using large binary matrices. In all these domains, the underlying data (i.e., the binary
matrix itself) is often viewed as proprietary or as sensitive information. However, the data owners
may view certain aggregates as much less sensitive. Examples include revealing the popularity of a
set of products by reporting total purchases, revealing the popularity of a group by reporting the size
of its membership, or specifying the in- and out-degree distributions across all users.
Here, we tackle the following question: "Given the row and column marginals of a hidden binary matrix H, what can one infer about H?"
Optimization-based methods for addressing this question, e.g., least squares or maximum likelihood, assume a generative model for the hidden matrix and output an estimate Ĥ of H. However, this estimate gives little guidance as to the structure of the feasible solution space; for example, Ĥ may be one of many solutions that achieve the same value of the objective function. Moreover, these
methods provide little insight about the estimates of particular entries H(i, j).
In this paper, we do not make any assumptions about the generative process of H. Rather, we
approach the above question by viewing the row and column marginals as constraints that induce
a dataspace X , defined by the set of all matrices satisfying the input constraints. Then, we explore
this dataspace not by estimating H at large, but rather by computing the entry-wise PDF P(i, j), for
every entry (i, j), where we define P(i, j) to be the probability that cell (i, j) takes on value 1 in the
datasets in X . From the application point of view, the value of P(i, j) can provide the data analyst
with valuable insight: for example, values close to 1 (respectively 0) give high confidence to the
analyst that H(i, j) = 1 (respectively H(i, j) = 0).
A natural way to compute entry-wise PDFs is by sampling datasets from the induced dataspace
X . However, this dataspace can be vast, and existing techniques for sampling binary tables with
fixed marginals [6, 9] fail to scale. In this paper, we propose a new efficient algorithm for computing
entry-wise PDFs by implicitly sampling the dataspace X . Our technique can compute the entry-wise
PDFs of all entries in running time the same as that required for state-of-the art sampling techniques
to generate just a single sample from X . Our experimental evaluation demonstrates the efficiency
and the efficacy of our technique for both synthetic and real-world datasets.
Related work: To the best of our knowledge, we are the first to introduce the notion of entrywise PDFs for binary matrices and to develop implicit sampling techniques for computing them
efficiently. However, our work is related to the problem of sampling from the space of binary
matrices with fixed marginals, studied extensively in many domains [2, 6, 7, 9, 21], primarily due to
its applications in statistical significance testing [14, 17, 20]. Existing sampling techniques all rely
on explicitly sampling the underlying dataspaces (either using MCMC or importance sampling) and
while these methods can be used to compute entry-wise PDFs, they are inefficient for large datasets.
Other related studies focus on identifying interesting patterns in binary data given itemset frequencies or other statistics [3, 15]. These works either assume a generative model for the data or build
the maximum entropy distribution that approximates the observed statistics; whereas our approach
makes no such assumptions and focuses only on exact solutions. Finally, considerable work has
focused on counting binary matrices with fixed marginals [1, 8, 10, 23]. One can compute the
entry-wise PDFs using these results, albeit in exponential time.
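To make the exponential-time baseline concrete: for a tiny matrix, one can enumerate the dataspace directly and average the matrices that satisfy the marginals. This is an illustrative sketch assuming a uniform prior, not an algorithm from the literature cited above:

```python
import numpy as np
from itertools import product

def entrywise_pdf(r, c):
    """P(i, j) by brute force: enumerate every 0-1 matrix with row
    marginals r and column marginals c under a uniform prior.
    Exponential in n*m, so only viable for toy instances."""
    n, m = len(r), len(c)
    counts, total = np.zeros((n, m)), 0
    for bits in product((0, 1), repeat=n * m):
        H = np.reshape(bits, (n, m))
        if (H.sum(axis=1) == r).all() and (H.sum(axis=0) == c).all():
            counts += H
            total += 1
    return counts / total

# r = c = [1, 1]: the dataspace holds exactly the two 2x2 permutation
# matrices, so every entry-wise probability is 0.5
```

Even this toy case shows why scalable implicit sampling matters: the loop visits 2^(nm) candidate matrices regardless of how few satisfy the marginals.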
2 Dataspace Exploration
Throughout the rest of the discussion we will assume an n × m 0-1 matrix H, which is hidden. The input to our problem consists of the dimensionality of H and its row and column marginals provided as a pair of vectors ⟨r, c⟩. That is, r and c are n-dimensional and m-dimensional integer vectors
respectively; entry r(i) stores the number of 1s in the ith row of H, and similarly for c(i). In this
paper we address the following high-level problem:
Problem 1. Given ⟨r, c⟩, what can we infer about H? More specifically, can we reason about
entries H(i, j) without access to H but only its row and column marginals?
Clearly, there are many possible ways of formalizing the above problem into a concrete problem
definition. In Section 2.1 we describe some mainstream formulations and discuss their drawbacks.
In Section 2.2 we introduce our dataspace exploration framework that overcomes these drawbacks.
2.1 Optimization-based approaches
Standard optimization-based approaches for Problem 1 usually assume a generative model for H,
b the best estimate of H optimizing a specific objective function
and estimate it by computing H,
(e.g., likelihood, squared error). Instantiations of these methods for our setting are described next.
Maximum-likelihood (ML): The ML approach assumes that the hidden matrix H is generated by
a model that only depends on the observed marginals. Then, the goal is to find the model parameters
b of H while maximizing the likelihood of the observed row and column
that provide an estimate H
marginals. A natural choice of such a model for our setting is the Rasch model [4, 19], where the
probability of entry H(i, j) taking on value 1 is given by:
e?i ??j
Pr [H(i, j) = 1] =
.
1 + e?i ??j
The maximum-likelihood estimates of the (n + m) parameters ?i and ?j of this model can be
computed in polynomial time [4, 19]. For the rest of the discussion, we will use the term Rasch to
refer to the experimental method that computes an estimate of H using this ML technique.
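Once the parameters are estimated, the Rasch entry probabilities are cheap to compute for the whole matrix at once (a sketch; the row parameters alpha and column parameters beta here are given, not fitted):

```python
import numpy as np

def rasch_probs(alpha, beta):
    """Matrix of Pr[H(i, j) = 1] = e^(a_i - b_j) / (1 + e^(a_i - b_j))."""
    logits = np.asarray(alpha)[:, None] - np.asarray(beta)[None, :]
    return 1.0 / (1.0 + np.exp(-logits))
```

With all parameters zero every entry probability is 1/2; raising α_i pushes all probabilities in row i toward 1, matching the intuition that α_i tracks the row marginal.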
Least-squares (LS): One can view the task of estimating H(i, j) from the input aggregates as solving a linear system defined by the equations r = H · 1 and c = H^T · 1. Unfortunately, such a system of equations is typically highly under-determined, and standard LS methods approach it as a regression problem that asks for an estimate Ĥ of H to minimize F(Ĥ) = ||(Ĥ · 1) − r||_F + ||(Ĥ^T · 1) − c||_F, where ||·||_F is the Frobenius norm [13]. Even when the entries of Ĥ
are restricted to be in [0, 1], it is not guaranteed that the above regression-based formulation will
output a reasonable estimate of H. For example, all tables with row and column marginals r and
c are 0-error solutions; yet there may be exponentially many such matrices. Alternatively, one can
b that minimizes F (H)
b + J(H).
b For the
incorporate a ?regularization? factor J() and search for H
2
b
b
rest of this paper, we consider this latter approach with J(H) = (H(i, j) ? h) , where h is the
b We refer to this approach as the LS method.
average value over all entries of H.
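To make the regularized LS formulation concrete, the following sketch minimizes F(Ĥ) + J(Ĥ) by projected gradient descent with entries kept in [0, 1]. The learning rate, iteration count, and uniform initialization are illustrative choices of ours, not taken from the paper.

```python
import numpy as np

def ls_estimate(r, c, iters=3000, lr=0.05):
    """Minimize ||H.1 - r||^2 + ||H^T.1 - c||^2 + sum_ij (H_ij - h)^2
    (h = mean of H) by gradient descent, projecting entries onto [0, 1]."""
    n, m = len(r), len(c)
    H = np.full((n, m), r.sum() / (n * m))  # uniform start with the right total
    for _ in range(iters):
        h = H.mean()
        # Note: since sum_ij (H_ij - h) = 0, the gradient of the
        # regularizer reduces to 2 * (H - h) exactly.
        grad = (2 * (H.sum(axis=1) - r)[:, None]
                + 2 * (H.sum(axis=0) - c)[None, :]
                + 2 * (H - h))
        H = np.clip(H - lr * grad, 0.0, 1.0)
    return H

H_hat = ls_estimate(np.array([2.0, 1.0, 1.0]), np.array([2.0, 1.0, 1.0]))
print(np.round(H_hat.sum(axis=1), 2))  # row sums move toward r
```

As the section notes, even a perfect minimizer here is only a holistic estimate: every matrix in the dataspace attains zero marginal error, so the regularizer is what breaks ties.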
Although one can solve (any of) the above estimation problems via standard optimization techniques, the output of such methods is a holistic estimate of H that gives no insight on how many
solutions Ĥ with the same value of the objective function exist, or the confidence in the value of
every cell. Moreover, these techniques are based on assumptions about the generative model of the
hidden data. While these assumptions may be plausible, they may not be valid in real data.
2.2 The dataspace exploration framework
To overcome the drawbacks of the optimization-based methods, we now introduce our dataspace
exploration framework, which does not make any structural assumptions about H and considers the
set of all possible datasets that are consistent with the input row and column marginals ⟨r, c⟩. We call
the set of such datasets the ⟨r, c⟩-dataspace, denoted by X⟨r,c⟩, or X for short.
We translate the high-level Problem 1 into the following question: Given ⟨r, c⟩, what is the probability that the entry H(i, j) of the hidden dataset takes on value 1? That is, for each entry H(i, j)
we are interested in computing the quantity:

P(i, j) = Σ_{H′ ∈ X} Pr(H′) · Pr[H′(i, j) = 1].   (1)
Here, Pr(H′) encodes the prior probability distribution over all hidden matrices in X. For a uniform
prior, P(i, j) encodes the fraction of matrices in X that have 1 in position (i, j). Clearly, for binary
matrices, P(i, j) determines the PDF of the values that appear in cell (i, j). Thus, we call P(i, j) the
entry-wise PDF of entry (i, j), and P the PDF matrix. If P(i, j) is very close to 1 (or 0), then over
all possible instantiations of H, the entry (i, j) is, with high confidence, 1 (or 0). On the other hand,
P(i, j) ≈ 0.5 signals that in the absence of additional information, a high-confidence prediction of
entry H(i, j) cannot be made.
Next, we discuss algorithms for estimating entry-wise PDFs efficiently. Throughout the rest of the
discussion we will adopt Matlab notation for matrices: for any matrix M , we will use M (i, :) to
refer to the i-th row, and M (:, j) to refer to the j-th column of M .
3 Basic Techniques
First, we review some basic facts and observations about ⟨r, c⟩ and the dataspace X⟨r,c⟩.
Validity of marginals: Given ⟨r, c⟩ we can decide in polynomial time whether |X⟨r,c⟩| > 0, either
by verifying the Gale-Ryser condition [5] or by constructing a binary matrix with the input row and
column marginals, as proposed by Erdős, Gallai, and Hakimi [18, 11]. The second option has the
comparative advantage that if |X⟨r,c⟩| > 0, then it also outputs a binary matrix from X⟨r,c⟩.
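The Gale-Ryser feasibility test mentioned above can be implemented in a few lines. This is a standard textbook version of the condition, not code from the paper:

```python
def gale_ryser_feasible(r, c):
    """Return True iff some binary matrix has row sums r and column sums c.
    Gale-Ryser: with r sorted in non-increasing order, the marginals are
    realizable iff sum(r) == sum(c) and, for every k, the sum of the k
    largest row marginals is at most sum_j min(c(j), k)."""
    if sum(r) != sum(c):
        return False
    r_sorted = sorted(r, reverse=True)
    for k in range(1, len(r_sorted) + 1):
        if sum(r_sorted[:k]) > sum(min(cj, k) for cj in c):
            return False
    return True

# The running example used later in this section is realizable:
print(gale_ryser_feasible([4, 4, 2, 2, 2], [2, 3, 1, 4, 4]))  # True
print(gale_ryser_feasible([3, 3], [1, 1, 1]))  # False: the sums differ
```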
Nested matrices: Building upon existing results [18, 11, 16, 24], we have the following:
Lemma 1. Given the row and column marginals of a binary matrix as ⟨r, c⟩, we can decide in
polynomial time whether |X⟨r,c⟩| = 1 and, if so, completely recover the hidden matrix H.
The binary matrices that can be uniquely recovered are called nested matrices and have the property
that in their representation as bipartite graphs they do not have any switch boxes [16]: a pair of edges
(u, v) and (u′, v′) for which neither (u, v′) nor (u′, v) exist.
Explicit sampling: One way of approximating P(i, j) for large dataspaces is to first obtain a uniform
sample of N binary matrices from X: X_1, . . . , X_N, and for each (i, j), compute P(i, j) as the
fraction of samples for which X_ℓ(i, j) = 1.
We can obtain random (near-uniform) samples from X using either the Markov chain Monte
Carlo (MCMC) method proposed by Gionis et al. [9] or the Sequential Importance Sampling
(Sequential) algorithm proposed by Chen et al. [6]. MCMC guarantees uniformity of the samples, but it does not converge in polynomial time. Sequential produces near-uniform samples in
polynomial time, but it requires O(n³m) time per sample, and thus using this algorithm to produce
N samples (N ≫ n) is beyond practical consideration. To recap, explicit sampling methods are
impractical for large datasets; moreover, their accuracy depends on the number of samples N and
the size of the dataspace X, which itself is hard to estimate.
4 Computing entry-wise PDFs
4.1 Warmup: The SimpleIS algorithm
With the aim to provide some intuition and insight, we start by presenting a simplified version of our algorithm called SimpleIS, also shown in Algorithm 1.
SimpleIS computes the P matrix one column at a time, in arbitrary order. Each such computation consists of two steps: (a) propose and (b) adjust.

Algorithm 1 The SimpleIS algorithm.
Input: ⟨r, c⟩
Output: Estimate of the PDF matrix P
1: w = Propose(r)
2: for j = 1 . . . m do
3:   x = c(j)
4:   p_x = Adjust(w, x)
5:   P(:, j) = p_x

The Propose step associates with every row i a weight w(i) that is proportional to the row marginal of row
i. A naive way of assigning these weights is by setting w(i) = r(i)/m. We refer to these weights w as the
raw probabilities. The Adjust step takes as input the column sum x = c(j) of the jth column and the
raw probabilities w, and adjusts these probabilities into the final probabilities p_x such that for column j
we have that Σ_i p_x(i) = x. This adjustment is not a simple normalization, but it computes the final
values of p_x(i) by implicitly considering all possible realizations of the jth column with column
sum x and computing the probability that the ith cell of that column is equal to 1.
Formally, if we use x to denote the binary vector that represents one realization of the j-th column
of the hidden matrix, then p_x(i) is computed as:

p_x(i) := Pr[ x(i) = 1 | Σ_{i=1}^{n} x(i) = x ].   (2)
It can be shown [6] that Equation (2) can be evaluated in polynomial time as follows: for any vector
x, let N = {1, . . . , n} be the set of all possible positions of 1s within x, and let R(x, N) be the
probability that exactly x elements of N are set to 1, i.e.,

R(x, N) := Pr[ Σ_{i∈N} x(i) = x ].

Using this definition, p_x(i) is then derived as follows:

p_x(i) = w(i) · R(x − 1, N \ {i}) / R(x, N).   (3)
The evaluation of all of the necessary terms R(·, ·) can be accomplished by the following dynamic-programming recursion: for all a ∈ {1, . . . , x}, and for all B and i such that |B| > a and i ∈ B ⊆ N:

R(a, B) = (1 − w(i)) · R(a, B \ {i}) + w(i) · R(a − 1, B \ {i}).
Running time: The Propose step is linear in the number of rows, and Adjust evaluates
Equation (3) and thus needs at most O(n²) time. Thus, SimpleIS runs in time O(mn²).
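The Adjust step is the standard dynamic program for a sum of independent Bernoulli variables, conditioned on its value. The sketch below is our own illustrative rendering of Adjust and the SimpleIS column loop, not the authors' code:

```python
import numpy as np

def sum_distribution(w):
    """R[a] = Pr[sum of independent Bernoulli(w(i)) variables equals a]."""
    R = np.zeros(len(w) + 1)
    R[0] = 1.0
    for wi in w:
        # Either position i is 0 (keep count) or 1 (shift count by one).
        R[1:] = R[1:] * (1 - wi) + R[:-1] * wi
        R[0] *= 1 - wi
    return R

def adjust(w, x):
    """Equation (3): p_x(i) = w(i) * R(x-1, N\\{i}) / R(x, N)."""
    if x == 0:
        return np.zeros(len(w))
    R_full = sum_distribution(w)
    p = np.empty(len(w))
    for i in range(len(w)):
        R_minus_i = sum_distribution(np.delete(w, i))
        p[i] = w[i] * R_minus_i[x - 1] / R_full[x]
    return p

def simple_is(r, c):
    """SimpleIS: raw probabilities from row marginals, one Adjust per column."""
    m = len(c)
    w = np.asarray(r, dtype=float) / m          # Propose step: w(i) = r(i)/m
    return np.column_stack([adjust(w, x) for x in c])

# The running example from this section.
P = simple_is([4, 4, 2, 2, 2], [2, 3, 1, 4, 4])
print(P.sum(axis=0))  # each column of P sums to its column marginal
```

Recomputing the leave-one-out distribution for every i gives the stated O(n²) cost per column, and the property Σ_i p_x(i) = x holds by construction of the conditional distribution.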
Discussion: A natural question to consider is: why could the estimates of P produced by SimpleIS
be inaccurate? To answer this question, consider a hidden 5 × 5 binary table
with r = (4, 4, 2, 2, 2) and c = (2, 3, 1, 4, 4), and assume that
SimpleIS starts by computing the entry-wise PDFs of the first
column. While evaluating Equation (2), SimpleIS generates all
possible columns of matrices with row marginals r and a column
with column sum 2, ignoring the values of the rest of the column
marginals. Thus, the realization of the first column that places its two 1s in the last two rows,
i.e., (0, 0, 0, 1, 1), is taken into
consideration by SimpleIS, despite the fact that it cannot lead to a matrix that respects r and c.
This is because four more 1s would need to be placed in the empty cells of each of the first two rows, which in turn
would lead to a violation of the column marginal of the third column. This situation occurs exactly
because SimpleIS never considers the constraints imposed by the rest of the column marginals
when aggregating the possible realizations of column j.
Ultimately, the SimpleIS algorithm results in estimates p_x(i) that reflect the probability of an entry i in a column
being equal to 1, conditioned over all matrices with row marginals r and a single column with column
sum x [6]. But this dataspace is not our target dataspace.
4.2 The IS algorithm
In the IS algorithm, we remedy this weakness of SimpleIS by taking into account the constraints
imposed by all the column marginals when aggregating the realizations of a particular column j.
Referring again to the previous example, the input vectors r and c impose the following constraints
on any matrix in X⟨r,c⟩: column 1 must have at least one 1 in the first two rows and (exactly) two 1s in
the first five rows. These types of constraints, known as knots, are formally defined as follows.
Definition 1. A knot is a subvector of a column characterized by three integer values ⟨[s, e] | b⟩,
where s and e are the starting and ending indices defining the subvector, and b is a lower bound on
the number of 1s that must be placed in the first e rows of the column.
Interestingly, given ⟨r, c⟩, the knots of any column j of the hidden matrix can be identified in linear
time using an algorithm that recursively applies the Gale-Ryser condition on realizability of bipartite
graphs. This method, and the notion of knots, were first introduced by Chen et al. [6].
At a high level, IS (Algorithm 2) identifies the knots within each column and uses them
to achieve a better estimation of P. Here, the process of obtaining the final probabilities
is more complicated since it requires: (a) identifying the knots of every column j (line 3),
(b) computing the entry-wise PDFs for the entries in every knot (denoted by q_k) (lines 4-7),
and (c) creating the jth column of P by putting the computed entry-wise PDFs back together
(line 8). Note that we use w_k to refer to the vector of raw probabilities associated with cells
in knot k. Also, vector p_{k,x} is used to store the adjusted probabilities of cells in knot k given
that x 1s are placed within the knot.

Algorithm 2 The IS algorithm.
Input: ⟨r, c⟩
Output: Estimate of the PDF matrix P
1: w = Propose(r)
2: for j = 1 . . . m do
3:   FindKnots(j, r, c)
4:   for each knot k ∈ {1 . . . ℓ} do
5:     for x: number of 1s in knot k do
6:       p_{k,x} = Adjust(w_k, x)
7:     q_k = E_x[p_{k,x}]
8:   P(:, j) = [q_1; . . . ; q_ℓ]

Step (a) is described by Chen et al. [6], and step (c) is straightforward, so we focus on (b),
which is the main part of IS. This step considers all the knots of the jth column sequentially. Assume that the kth knot of this column
is given by the tuple ⟨[s_k, e_k] | b_k⟩. Let x be the number of 1s inside this knot. If we know the value
of x, then we can simply use the Adjust routine to adjust the raw probabilities w_k. But since the
value of x may vary over different realizations of column j, we need to compute the probability
distribution of different values of x. For this, we first observe that if we know that y 1s have been
placed prior to knot k, then we can compute lower and upper bounds on x as:

L_{k|y} = max{0, b_k − y},   U_{k|y} = min{e_k − s_k + 1, c(j) − y}.

Clearly, the number of 1s in the knot must be an integer value in the interval [L_{k|y}, U_{k|y}]. Lacking
prior knowledge, we assume that x takes any value in the interval uniformly at random.
5
0
10
Error
Error
?2
10
LS
MCMC
Sequential
Rasch
SimpleIS
IS
1
10
Running Time (secs)
LS
MCMC
Sequential
Rasch
SimpleIS
IS
0
10
?1
10
?4
10
?2
10
?6
10
0
?3
20
40
60
80
100
10
2
10
0
10
?2
0
20
Matrix Size (n)
(a) Blocked matrices
LS
MCMC
Sequential
Rasch
SimpleIS
IS
4
10
40
60
80
100
Matrix Size (n)
(b) Matrices with knots
10
0
20
40
60
80
100
Matrix Size (n)
(c) Relative running times
Figure 1: Panels (a) and (b) depict Error (log scale) for six different algorithms, on two classes of
matrices. Panel (c) depicts algorithmic running times.
Thus, the probability of x 1s occurring inside the knot, given the value of y (i.e., the number of 1s prior to the knot), is:

P_k(x|y) = 1 / (U_{k|y} − L_{k|y} + 1).   (4)
Based on this conditional probability, we can write the probability of each value of x as

P_k(x) = Σ_{y=0}^{c(j)} Q_k(y) · P_k(x|y),   (5)
in which P_k(x|y) is computed by Equation (4) and Q_k(y) refers to the probability of having y
1s prior to knot k. In order to evaluate Equation (5), we need to compute the values of Q_k(y).
We observe that for every knot k and for every value of y, Q_k(y) can be computed by dynamic
programming as:

Q_k(y) = Σ_{z=0}^{y} Q_{k−1}(z) · P_{k−1}(y − z | z).   (6)
Running time and speedups: If there is a single knot in every column, SimpleIS and IS are
identical. For a column j with ℓ knots, IS requires O(ℓ²c(j) + n·c(j)²), or worst-case O(n³), time.
Thus, a sequential implementation of IS has running time O(n³m). This is the same as the time
required by Sequential for generating a single sample from X⟨r,c⟩, providing a clear indication
of the computational speedups afforded by IS over sampling. Moreover, IS treats each column
independently, and thus it is parallelizable. Finally, since the entry-wise PDFs for two columns with
the same column marginals are identical, our method needs to only compute the PDFs of columns
with distinct column marginals. Further speedups can be achieved for large datasets by binning
columns with similar marginals into a bin with a representative column sum. When the columns in
the same bin differ by at most t, we call this speedup t-Binning.
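A minimal version of the column-binning speedup groups columns whose marginals differ by at most t and assigns each bin a single representative sum; the fixed-width grouping rule below is our own simplification of the idea, not the paper's exact scheme:

```python
from collections import defaultdict

def bin_columns(c, t):
    """Map each column index to a representative column sum, so that
    columns in the same bin have marginals within t of each other
    (bins of width t + 1)."""
    bins = defaultdict(list)
    for j, cj in enumerate(c):
        bins[cj // (t + 1)].append(j)            # bin index for column j
    # Representative sum for each bin: rounded average marginal of the bin.
    reps = {key: round(sum(c[j] for j in cols) / len(cols))
            for key, cols in bins.items()}
    return {j: reps[cj // (t + 1)] for j, cj in enumerate(c)}

print(bin_columns([1, 2, 2, 5, 6, 9], t=1))  # columns with marginal 2 share a rep
```

The PDF computation is then run once per distinct representative sum instead of once per column, which is where the reported factor-of-2 to factor-of-4 savings come from.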
Discussion: We point out here that IS is highly motivated by Sequential, the most practical algorithm to date for sampling (almost) uniformly matrices from the dataspace X⟨r,c⟩. Although
Sequential was designed for a different purpose, IS uses some of its building blocks (e.g.,
knots). However, the connection is high level, and there is no clear quantification of the relationship
between the values of P computed by IS and those produced by repeated sampling from X⟨r,c⟩ using
Sequential. While we study this relationship experimentally, we leave the formal investigation
as an open problem.
5 Experiments
Accuracy evaluation: We measure the accuracy of different methods by comparing the estimates P̂
they produce against the known ground-truth P and evaluating the average absolute error as:

Error(P, P̂) = ( Σ_{i,j} |P̂(i, j) − P(i, j)| ) / (mn).   (7)
[Figure 2 here; legend: InformativeFix, RandomFix; y-axes: CDF / Error(P, H); panels: (a) DBLP distribution of entries, (b) DBLP, (c) NOLA.]
Figure 2: Panel (a) shows the CDF of estimated entry-wise PDFs by IS for the DBLP dataset.
Panels (b) and (c) show Error(P, H) as a function of the percentage of "revealed" cells.
We compare the Error of our methods, i.e., SimpleIS and IS, to the Error of the two optimization methods, Rasch and LS, described in Section 2.1, and the two explicit-sampling methods,
Sequential and MCMC, described in Section 3. For MCMC, we use double the burn-in period (i.e.,
four times the number of ones in the table) suggested by Gionis et al. [9]. For both Sequential
and MCMC, we vary the sample size and use up to 250 samples; for this number of samples these
methods can take up to 2 hours to complete for 100 × 100 matrices. In fact, our experiments were
ultimately restricted by the inability of other methods to handle larger matrices.
Since exhaustive enumeration is not an option, it is very hard to obtain ground-truth values of P for
arbitrary matrices, so we focus on two specific types of matrices: blocked matrices and matrices
with knots.
An n × n blocked matrix has marginals r = (1, n − 1, . . . , n − 1) and c = (1, n − 1, . . . , n − 1). Any
table with these marginals either has a value of 1 in entry (1, 1), or it has two distinct entries with
value 1 in the first row and the first column (excluding cell (1, 1)). Also note that given a realization
of the first row and the first column, the rest of the table is fully determined. This implies that there are
exactly (2n − 1) tables with such marginals and the entry-wise PDFs are: P(1, 1) = 1/(2n − 1); for
i ≠ 1, P(1, i) = P(i, 1) = (n − 1)/(2n − 1); and for i ≠ 1 and j ≠ 1, P(i, j) = 2n/(2n − 1).
The matrices with knots are binary matrices that are generated by diagonally concatenating smaller
matrices for which the ground truth is computed through exhaustive enumeration. The concatenation is done in such a way that no new switch boxes are introduced (as defined in Section 3). While the details of the construction are omitted due to lack of space, the key characteristic
of these matrices is that they have a large number of knots.
Figure 1(a) shows the Error (in log scale) of the different methods as a function of the matrix size n;
SimpleIS and IS perform identically in terms of Error and are much better than other methods.
Moreover, they become increasingly accurate as the size of the dataset increases, which means that
our methods remain relatively accurate even for large matrices. In this experiment, Rasch appears
to be the second-best method. However, as our next experiment indicates, the success of Rasch
hinges on the fact that the marginals of this experiment do not introduce many knots.
The results on matrices with many knots are shown in Figure 1(b). Here, the relative performance of
the different algorithms is different: SimpleIS is among the worst-performing algorithms, together
with LS and Rasch, with an average Error of 0.1. On the other hand, IS together with Sequential and
MCMC are clearly the best-performing algorithms. This is mainly due to the fact that the matrices
we create for this experiment have a lot of knots, and as SimpleIS, LS and Rasch are all knot-oblivious, they produce estimates with large errors. On the other hand, Sequential, MCMC and
IS take knots into account, and therefore they perform much better than the rest.
Looking at the running times (Figure 1(c)), we observe that the running time of our methods is
clearly better than that of all the other algorithms for larger values of n. For example,
while both SimpleIS and IS compute P within a second, Rasch requires a couple of seconds,
and the other methods need minutes or even up to hours to complete.
Utilizing entry-wise PDFs: Next, we move on to demonstrate the practical utility of entry-wise
PDFs. For this experiment we use the following real-world datasets as hidden matrices.
DBLP: The rows of this hidden matrix correspond to authors and the columns correspond to conferences in DBLP. Entry (i, j) has value 1 if author i has a publication in conference j. This subset of
DBLP, obtained by Hyvönen et al. [12], has size 17,702 × 19 and density 8.3%.
NOLA: This hidden matrix records the membership of 15,965 Facebook users from New Orleans
across 92 different groups [22]. The density of this 0-1 matrix is 1.1%.¹
We start with an experiment that addresses the following question: "Can entry-wise PDFs help us
identify the values of the cells of the hidden matrix?" To quantify this, we first look at the distribution
of values of entry-wise PDFs per dataset, shown in Figure 2(a) for the DBLP dataset (the distribution
of entry-wise PDFs is similar for the NOLA dataset). The figure demonstrates that the overwhelming
majority of the P(i, j) entries are small, smaller than 0.1.
We then address the question: "Can entry-wise PDFs guide us towards effectively querying the
hidden matrix H so that its entries are more accurately identified?" For this, we iteratively query
entries of H. At each iteration, we query 10% of the unknown cells and we compute the entry-wise
PDFs P after having these entries fixed. Figures 2(b) and 2(c) show the Error(P, H) after each iteration for the DBLP and NOLA datasets; values of Error(P, H) close to 0 imply that our method could
reconstruct H almost exactly. The two lines in the plots correspond to the RandomFix and InformativeFix strategies for selecting the queries at every step. The former picks 10% of the unknown
cells to query uniformly at random at every step. The latter selects the 10% of cells with PDF values
closest to 0.5 at every step. The results demonstrate that InformativeFix is able to reconstruct
the table with significantly fewer queries than RandomFix. Interestingly, using InformativeFix
we can fully recover the hidden matrix of the NOLA dataset by just querying 30% of the entries. Thus,
the values of entry-wise PDFs can be used to guide adaptive exploration of the hidden datasets.
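One selection step of the InformativeFix policy amounts to ranking the still-unknown cells by how close their entry-wise PDF is to 0.5. A sketch of that step (the 10% batch size follows the experiment above; the function and mask names are ours):

```python
import numpy as np

def informative_fix_batch(P, unknown_mask, frac=0.10):
    """Return (row, col) indices of the frac * #unknown cells whose
    entry-wise PDF is closest to 0.5 (i.e., the most uncertain cells)."""
    flat = np.flatnonzero(unknown_mask)          # flat indices of unknown cells
    uncertainty = np.abs(P.ravel()[flat] - 0.5)  # distance from 0.5
    k = max(1, int(frac * flat.size))
    chosen = flat[np.argsort(uncertainty)[:k]]
    return np.unravel_index(chosen, P.shape)

# Toy PDF matrix: one cell is genuinely uncertain.
P = np.array([[0.9, 0.48], [0.1, 0.55]])
mask = np.ones_like(P, dtype=bool)
rows, cols = informative_fix_batch(P, mask, frac=0.25)
print(list(zip(rows.tolist(), cols.tolist())))  # [(0, 1)]: the cell nearest 0.5
```

After querying the selected cells, their values are fixed and the entry-wise PDFs are recomputed on the reduced dataspace, as in the iterative experiment described above.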
Scalability: In a final experiment, we explored the accuracy/speedup tradeoff obtained by t-Binning. For the DBLP and NOLA datasets, we observed that by using t = 1, 2, 4 we reduced
the number of columns (and thus the running time) by a factor of at least 2, 3 and 4, respectively.
For the same datasets, we evaluate the accuracy of the t-Binning results by comparing the values
of P_t computed for t ∈ {1, 2, 4} with the values of P_0 (obtained by IS on the original dataset). In
all cases, and for all values of t, we observe that the Error(P_t, P_0) (defined in Equation (7)) is low,
never exceeding 1.5%. Even the maximum entry-wise differences of P_0 and P_t are consistently
about 0.1; note that such high error values only occur in one out of the millions of entries in P.
Finally, we also experimented with an even larger dataset obtained through the Yahoo! Research
Webscope program. This is a 140,000 × 4252 matrix of users and their participation in groups. For
this dataset we observe that an 80% reduction in the number of columns of the dataset introduces
an average error of only 1.7 × 10⁻⁴ (for t = 4).
6 Conclusions
We started with a simple question: "Given the row and column marginals of a hidden binary matrix
H, what can we infer about the matrix itself?" We demonstrated that existing optimization-based
approaches for addressing this question fail to provide a detailed intuition about the possible values
of particular cells of H. Then, we introduced the notion of entry-wise PDFs, which capture the
probability that a particular cell of H is equal to 1. From the technical point of view, we developed
IS, a parallelizable algorithm that efficiently and accurately approximates the values of the entry-wise PDFs for all cells simultaneously. The key characteristic of IS is that it computes the entry-wise PDFs without generating any of the matrices in the dataspace defined by the input row and
column marginals, and does so by implicitly sampling from the dataspace. Our experiments with
synthetic and real data demonstrated the accuracy of IS in computing entry-wise PDFs as well as
the practical utility of these PDFs towards a better understanding of the hidden matrix.
Acknowledgements
This research was partially supported by NSF grants CNS-1017529, III-1218437 and a gift from
Microsoft.
¹ The dataset is available at: http://socialnetworks.mpi-sws.org/data-wosn2009.html
References
[1] A. Bekessy, P. Bekessy, and J. Komlos. Asymptotic enumeration of regular matrices. Studia Scientiarum Mathematicarum Hungarica, pages 343-353, 1972.
[2] I. Bezáková, N. Bhatnagar, and E. Vigoda. Sampling binary contingency tables with a greedy start. Random Struct. Algorithms, 30(1-2):168-205, 2007.
[3] T. D. Bie, K.-N. Kontonasios, and E. Spyropoulou. A framework for mining interesting pattern sets. SIGKDD Explorations, 12(2):92-100, 2010.
[4] T. Bond and C. Fox. Applying the Rasch Model: Fundamental Measurement in the Human Sciences. Lawrence Erlbaum, 2007.
[5] R. Brualdi and H. J. Ryser. Combinatorial Matrix Theory. Cambridge University Press, 1991.
[6] Y. Chen, P. Diaconis, S. Holmes, and J. Liu. Sequential Monte Carlo methods for statistical analysis of tables. Journal of the American Statistical Association, 100:109-120, 2005.
[7] G. W. Cobb and Y.-P. Chen. An application of Markov chain Monte Carlo to community ecology. Amer. Math. Month., 110(4):264-288, 2003.
[8] M. Gail and N. Mantel. Counting the number of r × c contingency tables with fixed margins. Journal of the American Statistical Association, 72(360):859-862, 1977.
[9] A. Gionis, H. Mannila, T. Mielikäinen, and P. Tsaparas. Assessing data mining results via swap randomization. TKDD, 1(3), 2007.
[10] I. J. Good and J. Crook. The enumeration of arrays and a generalization related to contingency tables. Discrete Mathematics, 19(1):23-45, 1977.
[11] S. Hakimi. On realizability of a set of integers as degrees of the vertices of a linear graph. Journal of the Society for Industrial and Applied Mathematics, 10(3):496-506, 1962.
[12] S. Hyvönen, P. Miettinen, and E. Terzi. Interpretable nonnegative matrix decompositions. In KDD, pages 345-353, 2008.
[13] T. Kariya and H. Kurata. Generalized Least Squares. Wiley, 2004.
[14] N. Kashtan, S. Itzkovitz, R. Milo, and U. Alon. Efficient sampling algorithm for estimating subgraph concentrations and detecting network motifs. Bioinformatics, 20(11):1746-1758, 2004.
[15] M. Mampaey, J. Vreeken, and N. Tatti. Summarizing data succinctly with the most informative itemsets. TKDD, 6(4), 2012.
[16] H. Mannila and E. Terzi. Nestedness and segmented nestedness. In KDD, pages 480-489, 2007.
[17] R. Milo, S. Shen-Orr, S. Itzkovitz, N. Kashtan, D. Chklovskii, and U. Alon. Network motifs: Simple building blocks of complex networks. Science, 298(5594):824-827, 2002.
[18] P. Erdős and T. Gallai. Graphs with prescribed degrees of vertices. Mat. Lapok., 1960.
[19] G. Rasch. Probabilistic models for some intelligence and attainment tests. Technical Report, Danish Institute for Educational Research, Copenhagen, 1960.
[20] J. Sanderson. Testing ecological patterns. Amer. Sci., 88(4):332-339, 2000.
[21] T. Snijders. Enumeration and simulation methods for 0-1 matrices with given marginals. Psychometrika, 56(3):397-417, 1991.
[22] B. Viswanath, A. Mislove, M. Cha, and K. P. Gummadi. On the evolution of user interaction in Facebook. In WOSN, pages 37-42, 2009.
[23] B. Wang and F. Zhang. On the precise number of (0,1)-matrices in u(r,s). Discrete Mathematics, 187(1-3):211-220, 1998.
[24] M. Yannakakis. Computing the minimum fill-in is NP-complete. SIAM Journal on Algebraic and Discrete Methods, 2(1):77-79, 1981.
| 5118 |@word version:1 polynomial:6 norm:1 open:1 cha:1 hyv:2 simulation:1 decomposition:1 p0:3 q1:1 pick:1 asks:1 recursively:1 reduction:1 liu:1 cobb:1 efficacy:2 selecting:1 interestingly:2 existing:4 recovered:1 comparing:2 yet:1 assigning:1 must:3 bie:1 john:1 informative:1 kdd:2 designed:1 plot:1 depict:1 ainen:1 interpretable:1 generative:6 fewer:1 greedy:1 intelligence:1 ith:2 short:1 record:1 detecting:1 math:1 org:1 zhang:1 five:1 warmup:1 become:1 consists:1 owner:1 inside:2 introduce:4 notably:1 nor:1 little:2 enumeration:5 overwhelming:1 considering:1 gift:1 provided:1 estimating:4 underlying:3 moreover:5 formalizing:1 notation:1 panel:4 psychometrika:1 what:5 minimizes:1 developed:1 impractical:1 guarantee:1 every:11 tackle:1 exactly:5 demonstrates:3 uk:3 grant:1 appear:1 aggregating:2 treat:1 despite:1 vigoda:1 burn:1 itemset:1 studied:1 specifying:1 sists:1 bi:1 ola:6 commerce:1 practical:4 testing:2 orleans:1 block:2 terzi:3 mannila:2 significantly:1 revealing:2 confidence:4 induce:1 refers:1 regular:1 cannot:2 close:3 applying:1 imposed:2 customer:1 demonstrated:2 maximizing:1 nestedness:2 straightforward:1 educational:1 starting:1 l:9 independently:1 focused:1 shen:1 amazon:1 identifying:2 insight:4 adjusts:1 utilizing:1 holmes:1 array:1 fill:1 handle:1 notion:3 target:1 construction:1 pt:3 user:6 exact:1 programming:1 us:2 associate:1 element:1 satisfying:1 viswanath:1 binning:3 observed:4 itemsets:1 wang:1 verifying:1 worst:2 capture:1 kashtan:2 valuable:1 intuition:2 dynamic:1 ryser:3 ultimately:2 uniformity:1 solving:1 upon:1 bipartite:2 efficiency:2 completely:1 swap:1 represented:2 distinct:2 mn2:1 describe:1 monte:3 query:5 marketplace:1 aggregate:2 exhaustive:2 larger:3 solve:1 plausible:1 reconstruct:2 statistic:2 itself:3 final:4 online:1 advantage:1 indication:1 behzad:2 propose:6 interaction:1 product:2 realization:9 date:1 holistic:1 translate:1 subgraph:1 achieve:2 frobenius:1 scalability:1 empty:1 double:1 assessing:1 produce:4 
Error-Minimizing Estimates and Universal Entry-Wise Error Bounds for Low-Rank Matrix Completion
Franz J. Király∗
Department of Statistical Science and
Centre for Inverse Problems
University College London
[email protected]

Louis Theran†
Institute of Mathematics
Discrete Geometry Group
Freie Universität Berlin
[email protected]
Abstract
We propose a general framework for reconstructing and denoising single entries of incomplete and noisy matrices. We describe: effective algorithms for deciding if an entry can be reconstructed and, if so, for reconstructing and denoising it; and a priori bounds on the error of each entry, individually. In the noiseless case our algorithm is exact. For rank-one matrices, the new algorithm is fast, admits a highly-parallel implementation, and produces an error-minimizing estimate that is qualitatively close to our theoretical bound and to the state-of-the-art Nuclear Norm and OptSpace methods.
1 Introduction
Matrix Completion is the task of reconstructing a low-rank matrix from a subset of its entries; it occurs naturally in many practically relevant problems, such as missing feature imputation, multi-task learning [2], transductive learning [4], or collaborative filtering and link prediction [1, 8, 9]. Almost all known methods performing matrix completion are optimization methods, such as the max-norm and nuclear norm heuristics [3, 9, 10], or OptSpace [5], to name a few amongst many. These methods have in common that, in general: (a) they reconstruct the whole matrix; (b) error bounds are given for all of the matrix, not for single entries; (c) theoretical guarantees are given based on the sampling distribution of the observations. These properties are all problematic in scenarios where: (i) one is interested only in predicting or imputing a specific set of entries; (ii) the entire data set is unwieldy to work with; or (iii) there are non-random "holes" in the observations. All of these possibilities are very natural for the typical "big data" setup.
The recent results of [6] suggest that a method capable of handling challenges (i)-(iii) is within reach. By analyzing the algebraic-combinatorial structure of Matrix Completion, the authors provide algorithms that identify, for any fixed set of observations, exactly the entries that can, in principle, be reconstructed from them. Moreover, the theory developed indicates that, when a missing entry can be determined, it can be found by first exposing combinatorially-determined polynomial relations between the known entries and the unknown ones, and then selecting a common solution.

To bridge the gap between the theory of [6] and practice, the following challenges remain: to efficiently find the relevant polynomial relations, and to extend the methodology to the noisy case. In this paper, we show how to do both of these things in the case of rank one, and discuss how to instantiate the same scheme for general rank. It will turn out that finding the right set of polynomials and noisy estimation are intimately related: we can treat each polynomial as providing an estimate of the missing entry, and we can then take as our estimate the variance-minimizing weighted average. This technique also gives a priori lower bounds for a broad class of unbiased single-entry estimators in terms of the combinatorial structure of the observations and the noise model only. In detail, our contributions include:

• the construction of a variance-minimal and unbiased estimator for any fixed missing entry of a rank-one matrix, under the assumption of known noise variances;
• an explicit form for the variance of that estimator, which is a lower bound for the variance of any unbiased estimate of any fixed missing entry and thus yields a quantitative measure of the trustability of that entry as reconstructed by any algorithm;
• the description of a strategy for generalizing the above to any rank;
• a comparison of the estimator with two state-of-the-art optimization algorithms (OptSpace and nuclear norm), and an error assessment of the three matrix completion methods with the variance bound.

∗ Supported by the Mathematisches Forschungsinstitut Oberwolfach.
† Supported by the European Research Council under the European Union's Seventh Framework Programme (FP7/2007-2013) / ERC grant agreement no 247029-SDModels.
As mentioned, the restriction to rank one is not inherent in the overall scheme. We depend on rank one only in the sense that we understand the combinatorial-algebraic structure of rank-one matrix completion exactly, whereas the behavior in higher rank is not yet as well understood. Nonetheless, it is in principle accessible and, once available, can be "plugged in" to the results here without changing the complexity much. In this sense, the present paper is a proof-of-concept for a new approach to estimating and denoising in algebraic settings, based on combinatorially enumerating a set of polynomial estimators and then averaging them. For us, computational efficiency comes via a connection to the topology of graphs that is specific to this problem, but we suspect that this part, too, can be generalized somewhat.
2 The Algebraic Combinatorics of Matrix Completion
We first briefly review facts about Matrix Completion that we require. The exposition is along the
lines of [6].
Definition 2.1. A matrix M ∈ {0, 1}^{m×n} is called a mask. If A is a partially known matrix, then the mask of A is the mask which has ones in exactly the positions which are known in A and zeros otherwise.
Definition 2.2. Let M be an (m × n) mask. We will call the unique bipartite graph G(M) which has M as bipartite adjacency matrix the completion graph of M. We will refer to the m vertices of G(M) corresponding to the rows of M as blue vertices, and to the n vertices of G(M) corresponding to the columns as red vertices. If e = (i, j) is an edge in K_{m,n} (where K_{m,n} is the complete bipartite graph with m blue and n red vertices), we will also write A_e instead of A_ij for any (m × n) matrix A.
A fundamental result, [6, Theorem 2.3.5], says that identifiability and reconstructability are, up to a
null set, graph properties.
Theorem 2.3. Let A be a generic¹ and partially known (m × n) matrix of rank r, let M be the mask of A, and let i, j be integers. Whether A_ij is reconstructible (uniquely, or up to finite choice) depends only on M and the true rank r; in particular, it does not depend on the true A.
For rank one, as opposed to higher rank, the set of reconstructible entries is easily obtainable from
G(M ) by combinatorial means:
Theorem 2.4 ([6, Theorem 2.5.36 (i)]). Let G ⊆ K_{m,n} be the completion graph of a partially known (m × n) matrix A. Then the set of uniquely reconstructible entries of A is exactly the set of A_e with e in the transitive closure of G. In particular, all of A is reconstructible if and only if G is connected.

¹ In particular, if A is sampled from a continuous density, then the set of non-generic A is a null set.
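For rank one, Theorem 2.4 reduces reconstructibility to connectivity in the completion graph, which is cheap to check directly. The following sketch (ours, not the paper's released implementation; all names are illustrative) decides, with a single union-find pass over the observed positions, which entries are uniquely reconstructible:

```python
# Sketch (ours): by Theorem 2.4, entry (i, j) of a rank-one matrix is
# uniquely reconstructible from the mask M exactly when row-vertex i and
# column-vertex j lie in the same connected component of G(M).

def find(parent, v):
    while parent[v] != v:
        parent[v] = parent[parent[v]]   # path halving
        v = parent[v]
    return v

def reconstructible_entries(M):
    m, n = len(M), len(M[0])
    parent = list(range(m + n))         # 0..m-1: rows (blue), m..: columns (red)
    for i in range(m):
        for j in range(n):
            if M[i][j]:                 # each observed entry is an edge
                parent[find(parent, i)] = find(parent, m + j)
    return [[find(parent, i) == find(parent, m + j) for j in range(n)]
            for i in range(m)]

M = [[1, 1, 0],
     [0, 1, 1],
     [0, 0, 0]]
R = reconstructible_entries(M)          # rows 0-1 connect to every column
```

Here the first two rows and all three columns form one component, so all their entries are reconstructible, while row 2 is isolated and none of its entries are.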
2.1 Reconstruction on the transitive closure

We extend Theorem 2.4's theoretical reconstruction guarantee by describing an explicit, algebraic algorithm for actually doing the reconstruction.
Definition 2.5. Let P ⊆ K_{m,n} (or C ⊆ K_{m,n}) be a path (or cycle), with a fixed start and end. We will denote by E^+(P) the set of edges in P (resp. E^+(C) in C) traversed from a blue vertex to a red one, and by E^-(P) the set of edges traversed from a red vertex to a blue one². From now on, when we speak of "oriented paths" or "oriented cycles", we mean with this sign convention and some fixed traversal order.

Let A = (A_ij) be an (m × n) matrix of rank 1, and identify the entries A_ij with the edges of K_{m,n}. For an oriented cycle C, we define the polynomials

    P_C(A) = ∏_{e ∈ E^+(C)} A_e − ∏_{e ∈ E^-(C)} A_e ,  and

    L_C(A) = ∑_{e ∈ E^+(C)} log A_e − ∑_{e ∈ E^-(C)} log A_e ,

where for negative entries of A, we fix a branch of the complex logarithm.
Theorem 2.6. Let A = (A_ij) be a generic (m × n) matrix of rank 1. Let C ⊆ K_{m,n} be an oriented cycle. Then, P_C(A) = L_C(A) = 0.
Proof: The determinantal ideal of rank one is a binomial ideal generated by the (2 × 2) minors of A (where entries of A are considered as variables). The minor equations are exactly the P_C(A), where C is an elementary oriented four-cycle. If C is an elementary 4-cycle, denote its edges by a(C), b(C), c(C), d(C), with E^+(C) = {a(C), d(C)}. Let 𝒞 be the collection of the elementary 4-cycles, and define L_𝒞(A) = {L_C(A) : C ∈ 𝒞} and P_𝒞(A) = {P_C(A) : C ∈ 𝒞}.

By sending the term log A_e to a formal variable x_e, we see that the free Z-group generated by the L_C(A) is isomorphic to H_1(K_{m,n}, Z). With this equivalence, it is straightforward that, for any oriented cycle D, L_D(A) lies in the Z-span of elements of L_𝒞(A) and, therefore, formally,

    L_D(A) = ∑_{C ∈ 𝒞} α_C · L_C(A)

with the α_C ∈ Z. Thus L_D(·) vanishes when A is rank one, since the r.h.s. does. Exponentiating completes the proof. □
Corollary 2.7. Let A = (A_ij) be an (m × n) matrix of rank 1. Let v, w be two vertices in K_{m,n}. Let P, Q be two oriented paths in K_{m,n} starting at v and ending at w. Then, for all A, it holds that L_P(A) = L_Q(A).
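As a quick numerical sanity check of Theorem 2.6 (ours, not from the paper), both P_C and L_C vanish, up to floating-point round-off, on every elementary 4-cycle of a rank-one matrix:

```python
import math

# Numerical check (ours): on a rank-one matrix A = u v^T, every elementary
# 4-cycle on rows {i, k} and columns {j, l} satisfies
#   P_C(A) = A_ij * A_kl - A_il * A_kj = 0,
# and likewise its logarithmic form L_C(A) = 0.
u = [2.0, 3.0, 5.0]
v = [1.0, 4.0, 0.5]
A = [[ui * vj for vj in v] for ui in u]

max_P = max(abs(A[i][j] * A[k][l] - A[i][l] * A[k][j])
            for i in range(3) for k in range(i + 1, 3)
            for j in range(3) for l in range(j + 1, 3))
max_L = max(abs(math.log(A[i][j]) + math.log(A[k][l])
                - math.log(A[i][l]) - math.log(A[k][j]))
            for i in range(3) for k in range(i + 1, 3)
            for j in range(3) for l in range(j + 1, 3))
# both maxima are zero up to floating-point round-off
```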
3 A Combinatorial Algebraic Estimate for Missing Entries and Their Error

We now construct our estimator.

3.1 The sampling model
In all of the following, we will assume that the observations arise from the following sampling process:

Assumption 3.1. There is an unknown, fixed, generic rank-one (m × n) matrix A, and an (m × n) mask M ∈ {0, 1}^{m×n} which is known. There is a (stochastic) noise matrix E ∈ R^{m×n} whose entries are uncorrelated and which is multiplicatively centered with finite, non-zero³ variance; i.e., E(log E_ij) = 0 and 0 < Var(log E_ij) < ∞ for all i and j.

The observed data is the matrix A ∘ M ∘ E = M ∘ (A ∘ E), where ∘ denotes the Hadamard (i.e., component-wise) product. That is, the observation is a matrix with entries A_ij · M_ij · E_ij.

² Any fixed orientation of K_{m,n} will give us the same result.
³ The zero-variance case corresponds to exact reconstruction, which is handled already by Theorem 2.4.
The assumption of multiplicative noise is a necessary precaution in order for the presented estimator (and in fact, any estimator) of the missing entries to have bounded variance, as shown in Example 3.2 below. This is not, in practice, a restriction, since an infinitesimal additive error ∆A_ij on an entry of A is equivalent to an infinitesimal multiplicative error ∆log A_ij = ∆A_ij / A_ij, and additive variances can be directly translated into multiplicative variances if the density function of the noise is known⁴. The previous observation implies that the multiplicative noise model is as powerful as any additive one that allows bounded variance estimates.
Example 3.2. Consider a (2 × 2) matrix A of rank 1. The unique equation between the entries is then A_11 A_22 = A_12 A_21. Solving for any entry will have another entry in the denominator; for example, A_11 = A_12 A_21 / A_22. Thus we get an estimator for A_11 when substituting observed and noisy entries for A_12, A_21, A_22. When A_22 approaches zero, the estimation error for A_11 approaches infinity. In particular, if the density function of the error E_22 of A_22 is too dense around the value −A_22, then the estimate for A_11 given by the equation will have unbounded variance. In such a case, one can show that no estimator for A_11 has bounded variance.
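Example 3.2 can be illustrated numerically (our sketch, not from the paper; all parameter values are arbitrary): with additive noise on the denominator entry, the mean estimation error explodes as the true A_22 shrinks toward zero, which is exactly what the multiplicative noise model rules out:

```python
import random

# Illustration of Example 3.2 (ours): estimating A_11 = A_12 * A_21 / A_22
# when A_22 is observed with *additive* Gaussian noise. As the true A_22
# approaches zero, the observed denominator lands arbitrarily close to
# zero and the estimation error blows up.
A12, A21 = 2.0, 3.0

def mean_abs_error(A22, sd=0.05, trials=2000, seed=0):
    rng = random.Random(seed)
    target = A12 * A21 / A22
    errs = [abs(A12 * A21 / (A22 + rng.gauss(0.0, sd)) - target)
            for _ in range(trials)]
    return sum(errs) / len(errs)

# mean error grows without bound as the true A_22 shrinks toward 0
errors = [mean_abs_error(A22) for A22 in (1.0, 0.1, 0.01)]
```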
3.2 Estimating entries and error bounds

In this section, we construct the unbiased estimator for the entries of a rank-one matrix with minimal variance. First, we define some notation to ease the exposition:
Notations 3.3. We will denote by a_ij = log A_ij and ε_ij = log E_ij the logarithmic entries and noise. Thus, for some path P in K_{m,n} we obtain

    L_P(A) = ∑_{e ∈ E^+(P)} a_e − ∑_{e ∈ E^-(P)} a_e .

Denote by b_ij = a_ij + ε_ij the logarithmic (observed) entries, and by B the (incomplete) matrix which has the (observed) b_ij as entries. Denote by σ_ij = Var(b_ij) = Var(ε_ij).
The components of the estimator will be built from the L_P:

Lemma 3.4. Let G = G(M) be the graph of the mask M. Let x = (v, w) ∈ K_{m,n} be any edge with v red. Let P be an oriented path in G(M) starting at v and ending at w. Then,

    L_P(B) = ∑_{e ∈ E^+(P)} b_e − ∑_{e ∈ E^-(P)} b_e

is an unbiased estimator for a_x, with variance Var(L_P(B)) = ∑_{e ∈ P} σ_e.
Proof: By linearity of expectation and centeredness of the ε_ij, it follows that

    E(L_P(B)) = ∑_{e ∈ E^+(P)} E(b_e) − ∑_{e ∈ E^-(P)} E(b_e),

thus L_P(B) is unbiased. Since the ε_e are uncorrelated, the b_e also are; thus, by Bienaymé's formula, we obtain

    Var(L_P(B)) = ∑_{e ∈ E^+(P)} Var(b_e) + ∑_{e ∈ E^-(P)} Var(b_e),

and the statement follows from the definition of σ_e. □
In the following, we will consider the following parametric estimator as a candidate for estimating a_x:

Notations 3.5. Fix an edge x = (v, w) ∈ K_{m,n}. Let 𝒫 be a basis for the v–w path space and denote #𝒫 by p. For α ∈ R^p, set X(α) = ∑_{P ∈ 𝒫} α_P L_P(B). Furthermore, we will denote by 1 the p-vector of ones.
⁴ The multiplicative noise assumption causes the observed entries and the true entries to have the same sign. The change of sign can be modeled by adding another multiplicative binary random variable to the model which takes values ±1; this adds an independent combinatorial problem for the estimation of the sign, which can be done by maximum likelihood. In order to keep the exposition short and easy, we did not include this in the exposition.
The following Lemma follows immediately from Lemma 3.4 and Theorem 2.6:

Lemma 3.6. E(X(α)) = 1^⊤α · a_x; in particular, X(α) is an unbiased estimator for a_x if and only if 1^⊤α = 1.
We will now show that minimizing the variance of X(α) can be formulated as a quadratic program with coefficients entirely determined by a_x, the measurements b_e, and the graph G(M). In particular, we will expose an explicit formula for the α minimizing the variance. The formula will make use of the following path kernel. For fixed vertices s and t, an s–t path is the sum of a cycle in H_1(G, Z) and x_{st}; the s–t path space is the linear span of all the s–t paths. We discuss its relevant properties in Appendix A.

Definition 3.7. Let e ∈ K_{m,n} be an edge. For an edge e and a path P, set c_{e,P} = ±1 if e ∈ E^±(P), and otherwise c_{e,P} = 0. Let P, Q ∈ 𝒫 be any fixed oriented paths. Define the (weighted) path kernel k : 𝒫 × 𝒫 → R by

    k(P, Q) = ∑_{e ∈ K_{m,n}} c_{e,P} · c_{e,Q} · σ_e .

Under our assumption that Var(b_e) > 0 for all e ∈ K_{m,n}, the path kernel is positive definite, since it is a sum of p independent positive semi-definite functions; in particular, its kernel matrix has full rank. Here is the variance-minimizing unbiased estimator:

Proposition 3.8. Let x = (s, t) be a pair of vertices, and 𝒫 a basis for the s–t path space in G with p elements. Let Γ be the (p × p) kernel matrix of the path kernel with respect to the basis 𝒫. For any α ∈ R^p, it holds that Var(X(α)) = α^⊤ Γ α. Moreover, under the condition 1^⊤α = 1, the variance Var(X(α)) is minimized by

    α = Γ^{-1} 1 / (1^⊤ Γ^{-1} 1).
Proof: By inserting definitions, we obtain

    X(α) = ∑_{P ∈ 𝒫} α_P L_P(B) = ∑_{P ∈ 𝒫} α_P ∑_{e ∈ K_{m,n}} c_{e,P} b_e .

Writing b = (b_e) ∈ R^{mn} as a vector, and C = (c_{e,P}) ∈ R^{mn×p} as a matrix, we obtain X(α) = b^⊤ C α. By using that Var(λξ) = λ² Var(ξ) for any scalar λ, and independence of the b_e, a calculation yields Var(X(α)) = α^⊤ Γ α. In order to determine the minimum of the variance in α, consider the Lagrangian

    L(α, λ) = α^⊤ Γ α + λ ( 1 − ∑_{P ∈ 𝒫} α_P ),

where the slack term models the condition 1^⊤α = 1. A straightforward computation yields

    ∂L/∂α = 2Γα − λ1.

Due to positive definiteness of Γ, the function Var(X(α)) is convex; thus α = Γ^{-1} 1 / (1^⊤ Γ^{-1} 1) will be the unique α minimizing the variance while satisfying 1^⊤α = 1. □
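For intuition, when the paths in the basis share no edges, Γ is diagonal and the optimal combination of Proposition 3.8 reduces to classical inverse-variance weighting. A small numerical sketch (ours; the 2×2 inverse is written out by hand) of the formula α = Γ^{-1}1 / (1^⊤Γ^{-1}1):

```python
# Sketch (ours) of Proposition 3.8 for two edge-disjoint s-t paths:
# Gamma is then diagonal, and the optimal alpha is inverse-variance
# weighting of the two path estimates.
def optimal_alpha(Gamma):
    # closed-form inverse of a 2x2 matrix, then normalize Gamma^{-1} 1
    (a, b), (c, d) = Gamma
    det = a * d - b * c
    inv = [[d / det, -b / det], [-c / det, a / det]]
    g = [sum(row) for row in inv]       # Gamma^{-1} 1
    s = sum(g)                          # 1^T Gamma^{-1} 1
    return [gi / s for gi in g]

Gamma = [[1.0, 0.0],                    # path 1: total edge variance 1.0
         [0.0, 3.0]]                    # path 2: total edge variance 3.0
alpha = optimal_alpha(Gamma)            # inverse-variance weights [0.75, 0.25]
var = sum(alpha[i] * Gamma[i][j] * alpha[j]
          for i in range(2) for j in range(2))
# combined variance 0.75 beats the best single path (variance 1.0)
```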
Remark 3.9. The above setup works in wider generality: (i) if Var(b_e) = 0 is allowed and there is an s–t path of all zero-variance edges, the path kernel becomes positive semi-definite; (ii) similarly, if 𝒫 is replaced with any set of paths at all, the same may occur. In both cases, we may replace Γ^{-1} with the Moore-Penrose pseudo-inverse and the proposition still holds: (i) reduces to the exact reconstruction case of Theorem 2.4; (ii) produces the optimal estimator with respect to 𝒫, which is optimal provided that 𝒫 is spanning, and adding paths to 𝒫 does not make the estimate worse.
Our estimator is optimal over a fairly large class.

Theorem 3.10. Let Â_ij be any estimator for an entry A_ij of the true matrix that is: (i) unbiased; (ii) a deterministic piecewise smooth function of the observations; (iii) independent of the noise model. Let A*_ij be the estimator from Proposition 3.8. Then Var(A*_ij) ≤ Var(Â_ij).
We give a complete proof in the full version. Here, we prove the special case of log-normal noise, which gives an alternate viewpoint on the path kernel.

Proof: As above, we work with the formal logarithm a_ij of A_ij. For log-normal noise, the ε_e are independently distributed normals with variances σ_e. It then follows that, for any P in the i–j path space,

    L_P(B) ~ N( a_ij , ∑_{e ∈ P} σ_e ),

and the kernel matrix Γ of the path kernel is the covariance matrix for the L_P in our path basis. Thus, the vector of the L_P has distribution N(a_ij · 1, Γ). It is well-known that any multivariate normal has a linear reparameterization in which the coordinates are independent; a computation shows that, here, Γ^{-1} 1 / (1^⊤ Γ^{-1} 1) gives the correct linear map. Thus, the estimator A*_ij is the sample mean of the coordinates in the new parameterization. Since this is a sufficient statistic, we are done via the Lehmann-Scheffé Theorem. □
3.3 Rank 2 and higher

An estimator for rank 2 and higher, together with a variance analysis, can be constructed similarly once all the solving polynomials are known. The main difficulty lies in the fact that these polynomials are no longer parameterized by cycles, but by specific subgraphs of G(M), see [6, Section 2.5], and that they are not necessarily linear in the missing entry A_e. However, even with approximate oracles for evaluating these polynomials and estimating their covariances, an estimator similar to X(α) can be constructed and analyzed; in particular, we still need only to consider a basis for the space of "circuits" through the missing entry and not a costly brute-force enumeration.
3.4 The algorithms

We now give the algorithms for estimating/denoising entries and computing the variance bounds; an implementation is available from [7]. Since the path matrix C, the path kernel matrix Γ, and the optimal α are required for both, we show how to compute them first.

Algorithm 1 Calculates the path kernel Γ and the minimizer α.
Input: index (i, j), an (m × n) mask M, variances σ.
Output: path matrix C, path kernel Γ, and minimizer α.
1: Find a linearly independent set of paths 𝒫 in the graph G(M), starting from i and ending at j.
2: Determine the matrix C = (c_{e,P}) with e ∈ G(M), P ∈ 𝒫; set c_{e,P} = ±1 if e ∈ E^±(P), otherwise c_{e,P} = 0.
3: Define a diagonal matrix S = diag(σ), with S_ee = σ_e for e ∈ G(M).
4: Compute the kernel matrix Γ = C^⊤ S C.
5: Calculate α = Γ^{-1} 1 / (1^⊤ Γ^{-1} 1).
6: Output C, Γ, and α.

We can find a basis for the path space in linear time. To keep the notation manageable, we will conflate formal sums of the x_e, cycles in H_1(G, Z), and their representations as vectors in R^m. Correctness is proven in Appendix A.
Algorithm 2 Calculates a basis 𝒫 of the path space.
Input: index (i, j), an (m × n) mask M.
Output: a basis 𝒫 for the space of oriented i–j paths.
1: If (i, j) is not an edge of M, and i and j are in different connected components, then 𝒫 is empty. Output ∅.
2: Otherwise, if (i, j) is not an edge of M, add a "dummy" copy.
3: Compute a spanning forest F of M that does not contain (i, j), if possible.
4: For each edge e ∈ M \ F, compute the fundamental cycle C_e of e in F.
5: If (i, j) is an edge in M, output {x_(i,j)} ∪ {C_e − x_(i,j) : e ∈ M \ F}.
6: Otherwise, let P_(i,j) = C_(i,j) − x_(i,j). Output {C_e − P_(i,j) : e ∈ M \ (F ∪ {(i, j)})}.
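In the noiseless case, a single i–j path already reconstructs the entry exactly (Corollary 2.7, exponentiated). A minimal sketch of this path-based reconstruction (ours, not the released implementation [7]; it assumes i and j are connected in G(M), cf. Theorem 2.4):

```python
from collections import deque

# Sketch (ours): noiseless rank-one reconstruction of a single entry along
# one i-j path in G(M). Exponentiating Corollary 2.7: A_ij equals the
# product of entries on blue-to-red edges of the path, divided by the
# product of entries on red-to-blue edges.
def reconstruct_entry(A_obs, M, i, j):
    m, n = len(M), len(M[0])
    # BFS over G(M) from row-vertex ('r', i) to column-vertex ('c', j).
    prev = {('r', i): None}
    queue = deque([('r', i)])
    while queue and ('c', j) not in prev:
        side, v = queue.popleft()
        if side == 'r':
            nbrs = [('c', c) for c in range(n) if M[v][c]]
        else:
            nbrs = [('r', r) for r in range(m) if M[r][v]]
        for u in nbrs:
            if u not in prev:
                prev[u] = (side, v)
                queue.append(u)
    # Walk the path backwards: multiply on blue->red edges, divide otherwise.
    est, node = 1.0, ('c', j)
    while prev[node] is not None:
        p = prev[node]
        r, c = (p[1], node[1]) if p[0] == 'r' else (node[1], p[1])
        est = est * A_obs[r][c] if p[0] == 'r' else est / A_obs[r][c]
        node = p
    return est

A = [[10.0, 14.0], [15.0, 21.0]]   # rank one: outer product of (2,3) and (5,7)
M = [[1, 1], [1, 0]]               # A[1][1] = 21 is unobserved
```

On this example the path row 1 → column 0 → row 0 → column 1 gives the estimate A[1][0] · A[0][1] / A[0][0] = 15 · 14 / 10 = 21, the missing entry.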
Algorithms 3 and 4 can then make use of the calculated C, Γ, α to determine an estimate for any entry A_ij and its minimum variance bound. The algorithms follow the exposition in Section 3.2, from where correctness follows; Algorithm 3 additionally provides treatment for the sign of the entries.
Algorithm 3 Estimates the entry A_ij.
Input: index (i, j), an (m × n) mask M, log-variances σ, the partially observed and noisy matrix B.
Output: The variance-minimizing estimate for A_ij.
1: Calculate C and α with Algorithm 1.
2: Store B as a vector b = (log |B_e|) and a sign vector s = (sgn B_e) with e ∈ G(M).
3: Calculate Â_ij = ± exp(b^⊤ C α). The sign is + if each column of s^⊤ |C| (|·| component-wise) contains an odd number of entries 1, else −.
4: Return Â_ij.
Algorithm 4 Determines the variance of the entry log(A_ij).
Input: index (i, j), an (m × n) mask M, log-variances σ.
Output: The variance lower bound for log(A_ij).
1: Calculate Γ and α with Algorithm 1.
2: Return α^⊤ Γ α.
Algorithm 4 can be used to obtain the variance bound independently of the observations. The variance bound is relative, due to its multiplicativity, and can be used to approximate absolute bounds when any reconstruction estimate Â_ij (in particular, not necessarily the one from Algorithm 3) is available. Namely, if σ̂_ij is the estimated variance of the logarithm, we obtain an upper confidence/deviation bound Â_ij · exp(+√σ̂_ij) for A_ij, and a lower confidence/deviation bound Â_ij · exp(−√σ̂_ij), corresponding to the log-confidence log Â_ij ± √σ̂_ij. Also note that if A_ij is not reconstructible from the mask M, then the deviation bounds will be infinite.
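The conversion from the log-domain variance bound to multiplicative deviation bounds can be sketched as follows (our illustration; `sigma_hat` stands for the output of Algorithm 4, and we assume a positive estimate `A_hat`):

```python
import math

# Sketch (ours): turn the log-domain variance bound sigma_hat of
# Algorithm 4 into the multiplicative deviation bounds described above:
#   [A_hat * exp(-sqrt(sigma_hat)),  A_hat * exp(+sqrt(sigma_hat))].
def deviation_bounds(A_hat, sigma_hat):
    if math.isinf(sigma_hat):           # entry not reconstructible from M
        return (0.0, math.inf)
    w = math.exp(math.sqrt(sigma_hat))
    return (A_hat / w, A_hat * w)

lo, hi = deviation_bounds(21.0, 0.04)   # log-confidence log(21) +- 0.2
```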
4 Experiments

4.1 Universal error estimates
For three different masks, we calculated the predicted minimum variance for each entry of the mask. The mask sizes are all 140 × 140. The multiplicative noise was assumed to have σ_e = 1 for each entry. Figure 1 shows the predicted a-priori minimum variances for each of the masks. The structure of the mask affects the expected error. Known entries generally have the least variance, and it is less than the initial variance of 1, which implies that the (independent) estimates coming from other paths can be used to successfully denoise observed data. For unknown entries, the structure of the mask is mirrored in the pattern of the predicted errors; a diffuse mask gives a similar error on each missing entry, while the more structured masks have structured error which is determined by combinatorial properties of the completion graph.
Figure 1: The figure shows three pairs of masks and predicted variances. A pair consists of two adjacent squares. The left half is the mask, depicted by a red/blue heatmap with red entries known and blue unknown. The right half is a multicolor heatmap with a color scale, showing the predicted variance of the completion. Variances were calculated by our implementation of Algorithm 4.
[Figure 2 appears here: panel (a), "mean squared errors", plots MSE against noise level for the Path Kernel, Nuclear Norm (NN), and OptSpace methods; panel (b), "error vs. predicted variance", plots quantile mean observed error against predicted variance for the same three methods.]
Figure 2: For 10 randomly chosen masks and a 50 × 50 true matrix, matrix completions were performed with Nuclear Norm (green), OptSpace (red), and Algorithm 3 (blue) under multiplicative noise with variance increasing in increments of 0.1. For each completed entry, minimum variances were predicted by Algorithm 4. 2(a) shows the mean squared error of the three algorithms for each noise level, coded by the algorithms' respective colors. 2(b) shows a bin-plot of errors (y-axis) versus predicted variances (x-axis) for each of the three algorithms: for each completed entry, a pair (predicted error, true error) was calculated, the predicted error being the predicted variance, and the actual prediction error being the squared logarithmic error (i.e., (log |a_true| − log |a_predicted|)² for an entry a). Then, the points were binned into 11 bins with equal numbers of points. The figure shows the mean of the errors (second coordinate) of the value pairs with predicted variance (first coordinate) in each of the bins; the color corresponds to the particular algorithm, and each group of bars is centered on the minimum value of the associated bin.
4.2 Influence of noise level

We generated 10 random masks of size 50 × 50, with 200 entries sampled uniformly, and a random (50 × 50) matrix of rank one. The multiplicative noise was chosen entry-wise independent, with variance σ = (i − 1)/10 in the i-th noise level. Figure 2(a) compares the Mean Squared Error (MSE) for three algorithms: Nuclear Norm (using the implementation of Tomioka et al. [10]), OptSpace [5], and Algorithm 3. It can be seen that on this particular mask, Algorithm 3 is competitive with the other methods and even outperforms them for low noise.
4.3 Prediction of estimation errors

The data are the same as in Section 4.2, as are the compared algorithms. Figure 2(b) compares the error of each of the methods with the variance predicted by Algorithm 4 as the noise level changed. The figure shows that, for any of the algorithms, the mean of the actual error increases with the predicted error, showing that the error estimate is useful for a-priori prediction of the actual error, independently of the particular algorithm. Note that by construction of the data this statement holds in particular for entry-wise predictions. Furthermore, in quantitative comparison, Algorithm 3 also outperforms the other two in each of the bins. The qualitative reversal between the algorithms in Figures 2(a) and 2(b) comes from the different error measures and the conditioning on the bins.
5 Conclusion

In this paper, we have introduced an algebraic-combinatorics-based method for reconstructing and denoising single entries of an incomplete and noisy matrix, and for calculating confidence bounds of single-entry estimations for arbitrary algorithms. We have evaluated these methods against state-of-the-art matrix completion methods. Our method is competitive and yields the first known a priori variance bounds for reconstruction. These bounds coarsely predict the performance of all the methods. Furthermore, our method can reconstruct and estimate the error for single entries. It can be restricted to using only a small number of nearby observations, and it smoothly improves as more information is added, making it attractive for applications on large-scale data. These results are an instance of a general algebraic-combinatorial scheme and viewpoint that we argue is crucial for the future understanding and practical treatment of big data.
References
[1] E. Acar, D. Dunlavy, and T. Kolda. Link prediction on evolving data using matrix and tensor factorizations. In Data Mining Workshops, ICDMW '09, IEEE International Conference on, pages 262-269. IEEE, 2009.
[2] A. Argyriou, C. A. Micchelli, M. Pontil, and Y. Ying. A spectral regularization framework for multi-task structure learning. In J. Platt, D. Koller, Y. Singer, and S. Roweis, editors, Advances in NIPS 20, pages 25-32. MIT Press, Cambridge, MA, 2008.
[3] E. J. Candès and B. Recht. Exact matrix completion via convex optimization. Found. Comput. Math., 9(6):717-772, 2009. ISSN 1615-3375. doi: 10.1007/s10208-009-9045-5. URL http://dx.doi.org/10.1007/s10208-009-9045-5.
[4] A. Goldberg, X. Zhu, B. Recht, J. Xu, and R. Nowak. Transduction with matrix completion: Three birds with one stone. In J. Lafferty, C. K. I. Williams, J. Shawe-Taylor, R. Zemel, and A. Culotta, editors, Advances in Neural Information Processing Systems 23, pages 757-765. 2010.
[5] R. H. Keshavan, A. Montanari, and S. Oh. Matrix completion from a few entries. IEEE Trans. Inform. Theory, 56(6):2980-2998, 2010. ISSN 0018-9448. doi: 10.1109/TIT.2010.2046205. URL http://dx.doi.org/10.1109/TIT.2010.2046205.
[6] F. J. Király, L. Theran, R. Tomioka, and T. Uno. The algebraic combinatorial approach for low-rank matrix completion. Preprint, arXiv:1211.4116v4, 2012. URL http://arxiv.org/abs/1211.4116.
[7] F. J. Király and L. Theran. AlCoCoMa, 2013. http://mloss.org/software/view/524/.
[8] A. Menon and C. Elkan. Link prediction via matrix factorization. Machine Learning and Knowledge Discovery in Databases, pages 437-452, 2011.
[9] N. Srebro, J. D. M. Rennie, and T. S. Jaakkola. Maximum-margin matrix factorization. In L. K. Saul, Y. Weiss, and L. Bottou, editors, Advances in NIPS 17, pages 1329-1336. MIT Press, Cambridge, MA, 2005.
[10] R. Tomioka, K. Hayashi, and H. Kashima. On the extension of trace norm to tensors. In NIPS Workshop on Tensors, Kernels, and Machine Learning, 2010.
4,553 | 512 | Structural Risk Minimization for Character Recognition
I. Guyon, V. Vapnik, B. Boser, L. Bottou, and S. A. Solla
AT&T Bell Laboratories
Holmdel, NJ 07733, USA
Abstract
The method of Structural Risk Minimization refers to tuning the capacity
of the classifier to the available amount of training data. This capacity is influenced by several factors, including: (1) properties of the input
space, (2) nature and structure of the classifier, and (3) learning algorithm.
Actions based on these three factors are combined here to control the capacity of linear classifiers and improve generalization on the problem of
handwritten digit recognition.
1 RISK MINIMIZATION AND CAPACITY
1.1 EMPIRICAL RISK MINIMIZATION
A common way of training a given classifier is to adjust the parameters w in the
classification function F(x, w) to minimize the training error Etrain, i.e. the frequency of errors on a set of p training examples. Etrain estimates the expected risk
based on the empirical data provided by the p available examples. The method is
thus called Empirical Risk Minimization. But the classification function F(x, w*)
which minimizes the empirical risk does not necessarily minimize the generalization
error, i.e. the expected value of the risk over the full distribution of possible inputs
and their corresponding outputs. Such generalization error Egene cannot in general
be computed, but it can be estimated on a separate test set (Etest). Other ways of
estimating Egene include the leave-one-out or moving control method [Vap82] (for
a review, see [Moo92]).
1.2 CAPACITY AND GUARANTEED RISK
Any family of classification functions {F(x, w)} can be characterized by its capacity.
The Vapnik-Chervonenkis dimension (or VC-dimension) [Vap82] is such a capacity,
defined as the maximum number h of training examples which can be learnt without
error, for all possible binary labelings. The VC-dimension is in some cases simply
given by the number of free parameters of the classifier, but in most practical cases
it is quite difficult to determine it analytically.
The VC-theory provides bounds. Let {F(x, w)} be a set of classification functions
of capacity h. With probability (1 − η), for a number of training examples p > h,
simultaneously for all classification functions F(x, w), the generalization error Egene
is lower than a guaranteed risk defined by:

Eguarant = Etrain + ε(p, h, Etrain, η),   (1)

where ε(p, h, Etrain, η) is proportional to ε₀ = [h(ln(2p/h) + 1) − ln η]/p for small
Etrain, and to √ε₀ for Etrain close to one [Vap82, Vap92].
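The bound is easy to evaluate numerically. The sketch below (Python; the confidence level η = 0.05 is an arbitrary illustrative choice) uses the small-Etrain form of ε:

```python
import math

def guaranteed_risk(e_train, p, h, eta=0.05):
    """Guaranteed risk of Eq. (1) in the small-E_train regime:
    Eguarant = Etrain + eps0, with eps0 = [h(ln(2p/h) + 1) - ln(eta)] / p."""
    eps0 = (h * (math.log(2.0 * p / h) + 1.0) - math.log(eta)) / p
    return e_train + eps0
```

For a fixed number of examples p, the penalty term grows with the capacity h, so a richer classifier must buy a correspondingly larger drop in Etrain to be worthwhile.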
For a fixed number of training examples p, the training error decreases monotonically as the capacity h increases, while both guaranteed risk and generalization
error go through a minimum. Before the minimum, the problem is overdetermined:
the capacity is too small for the amount of training data. Beyond the minimum
the problem is underdetermined. The key issue is therefore to match the capacity
of the classifier to the amount of training data in order to get best generalization
performance. The method of Structural Risk Minimization (SRM) [Vap82,Vap92]
provides a way of achieving this goal.
1.3 STRUCTURAL RISK MINIMIZATION
Let us choose a family of classifiers {F(x, w)}, and define a structure consisting of
nested subsets of elements of the family: S1 ⊂ S2 ⊂ ... ⊂ Sr ⊂ .... By defining
such a structure, we ensure that the capacity hr of the subset of classifiers Sr is less
than hr+1 of subset Sr+1. The method of SRM amounts to finding the subset Sopt
for which the classifier F(x, w*) which minimizes the empirical risk within such
subset yields the best overall generalization performance.
Two problems arise in implementing SRM: (I) How to select Sopt? (II) How to find
a good structure? Problem (I) arises because we have no direct access to Egene.
In our experiments, we will use the minimum of either Etest or Eguarant to select
Sopt, and show that these two minima are very close. A good structure reflects the
a priori knowledge of the designer, and only a few guidelines can be provided by the
theory to solve problem (II). The designer must find the best compromise between
two competing terms: Etrain and ε. Reducing h causes ε to decrease, but Etrain
to increase. A good structure should be such that decreasing the VC-dimension
happens at the expense of the smallest possible increase in training error. We now
examine several ways in which such a structure can be built.
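The selection step (I) can be sketched in a few lines (Python; the capacities and training errors per subset are assumed given, and the small-Etrain form of the bound of Eq. (1) stands in for Eguarant):

```python
import math

def select_subset(capacities, train_errors, p, eta=0.05):
    """Pick the index r of the nested subset S_r whose trained classifier
    minimizes the guaranteed risk Etrain + eps0(p, h_r, eta)."""
    def eguarant(h, e_train):
        eps0 = (h * (math.log(2.0 * p / h) + 1.0) - math.log(eta)) / p
        return e_train + eps0
    scores = [eguarant(h, e) for h, e in zip(capacities, train_errors)]
    return min(range(len(scores)), key=scores.__getitem__)
```

The winning subset trades a small residual training error against a moderate capacity penalty, exactly the compromise described above.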
2 PRINCIPAL COMPONENT ANALYSIS, OPTIMAL BRAIN DAMAGE, AND WEIGHT DECAY
Consider three apparently different methods of improving generalization performance: Principal Component Analysis (a preprocessing transformation of input
space) [The89], Optimal Brain Damage (an architectural modification through
weight pruning) [LDS90], and a regularization method, Weight Decay (a modification of the learning algorithm) [Vap82]. For the case of a linear classifier, these
three approaches are shown here to control the capacity of the learning system
through the same underlying mechanism: a reduction of the effective dimension of
weight space, based on the curvature properties of the Mean Squared Error (MSE)
cost function used for training.
2.1 LINEAR CLASSIFIER AND MSE TRAINING
Consider a binary linear classifier F(x, w) = θ(wᵀx), where wᵀ is the transpose of
w and the function θ takes the two values 0 and 1, indicating to which class x belongs.
The VC-dimension of such a classifier is equal to the dimension of the input space¹ (or
the number of weights): h = dim(w) = dim(x) = n.
The empirical risk is given by:
Etrain = (1/p) Σ_{k=1}^{p} (y^k − θ(wᵀx^k))²,   (2)
where x^k is the kth example, and y^k is the corresponding desired output. The
problem of minimizing Etrain as a function of w can be approached in different
ways [DH73], but it is often replaced by the problem of minimizing a Mean Square
Error (MSE) cost function, which differs from (2) in that the nonlinear function θ
has been removed.
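In code, the empirical risk of Eq. (2) is just the misclassification frequency of the thresholded linear unit (a pure-Python sketch):

```python
def empirical_risk(w, X, y):
    """E_train of Eq. (2): mean squared difference between labels
    y^k in {0, 1} and the thresholded outputs theta(w^T x^k)."""
    total = 0.0
    for x_k, y_k in zip(X, y):
        activation = sum(wi * xi for wi, xi in zip(w, x_k))
        prediction = 1 if activation > 0 else 0
        total += (y_k - prediction) ** 2
    return total / len(X)
```

Since labels and predictions are binary, each squared term is 0 or 1, so the sum counts errors.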
2.2 CURVATURE PROPERTIES OF THE MSE COST FUNCTION
The three structures that we investigate rely on curvature properties of the MSE
cost function. Consider the dependence of the MSE on one of the parameters wᵢ.
Training leads to the optimal value wᵢ* for this parameter. One way of reducing
the capacity is to set wᵢ to zero. For the linear classifier, this reduces the
VC-dimension by one: h' = dim(w) − 1 = n − 1. The MSE increase resulting from
setting wᵢ = 0 is to lowest order proportional to the curvature of the MSE at wᵢ*.
Since the decrease in capacity should be achieved at the smallest possible expense in
MSE increase, directions in weight space corresponding to small MSE curvature
are good candidates for elimination.
The curvature of the MSE is specified by the Hessian matrix H of second derivatives
of the MSE with respect to the weights. For a linear classifier, the Hessian matrix is
given by twice the correlation matrix of the training inputs, H = (2/p) Σ_{k=1}^{p} x^k (x^k)ᵀ.
The Hessian matrix is symmetric, and can be diagonalized to get rid of cross terms,
1 We assume, for simplicity, that the first component of vector x is constant and set to
1, so that the corresponding weight introduces the bias value.
473
474
Guyon, Vapnik, Boser, Bottou, and SaBa
to facilitate decisions about the simultaneous elimination of several directions in
weight space. The elements of the Hessian matrix after diagonalization are the
eigenvalues λᵢ; the corresponding eigenvectors give the principal directions w′ᵢ of
the MSE. In the rotated axes, the increase ΔMSE due to setting w′ᵢ = 0 takes a
simple form:

ΔMSEᵢ = λᵢ (w′ᵢ*)².   (3)
The quadratic approximation becomes an exact equality for the linear classifier.
Principal directions w′ᵢ corresponding to small eigenvalues λᵢ of H are good candidates for elimination.
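Both quantities are directly computable. The NumPy sketch below (assuming NumPy is available) forms H from the training inputs, diagonalizes it, and evaluates Eq. (3) for each principal direction:

```python
import numpy as np

def principal_curvatures(X, w_star):
    """Hessian H = (2/p) sum_k x^k (x^k)^T of the MSE, its eigenvalues
    (ascending), and the Delta-MSE of Eq. (3) for zeroing each rotated
    weight w'_i*."""
    X = np.asarray(X, dtype=float)
    H = (2.0 / len(X)) * X.T @ X
    lam, V = np.linalg.eigh(H)                   # eigenvalues, principal directions
    w_rot = V.T @ np.asarray(w_star, dtype=float)  # weights in the rotated axes
    return lam, lam * w_rot ** 2                 # curvatures, Delta-MSE per axis
```

Directions with small λᵢ, or small saliency λᵢ(w′ᵢ*)², are the cheap ones to eliminate.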
2.3 PRINCIPAL COMPONENT ANALYSIS
One common way of reducing the capacity of a classifier is to reduce the dimension
of the input space and thereby reduce the number of necessary free parameters
(or weights). Principal Component Analysis (PCA) is a feature extraction method
based on eigenvalue analysis. Input vectors x of dimension n are approximated by a
linear combination of m ≤ n vectors forming an orthonormal basis. The coefficients
of this linear combination form a vector x' of dimension m. The optimal basis in
the least square sense is given by the m eigenvectors corresponding to the m largest
eigenvalues of the correlation matrix of the training inputs (this matrix is 1/2 of H).
A structure is obtained by ranking the classifiers according to m. The VC-dimension
of the classifier is reduced to: h' = dim(x′) = m.
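The preprocessing itself is a small eigenvalue computation (NumPy sketch; the correlation matrix of the raw, uncentered inputs is used, as in the text):

```python
import numpy as np

def pca_reduce(X, m):
    """Project the inputs onto the m eigenvectors of the input
    correlation matrix with the largest eigenvalues (Sec. 2.3);
    the reduced linear classifier has VC-dimension h' = m."""
    X = np.asarray(X, dtype=float)
    C = X.T @ X / len(X)          # correlation matrix (equal to H/2)
    _, V = np.linalg.eigh(C)      # eigenvalues in ascending order
    basis = V[:, ::-1][:, :m]     # top-m principal directions
    return X @ basis
```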
2.4 OPTIMAL BRAIN DAMAGE
For a linear classifier, pruning can be implemented in two different but equivalent
ways: (i) change input coordinates to a principal axis representation, prune the
components corresponding to small eigenvalues according to PCA, and then train
with the MSE cost function; (ii) change coordinates to a principal axis representation, train with the MSE first, and then prune the weights, to get a weight vector
w' of dimension m < n. Procedure (i) can be understood as a preprocessing,
whereas procedure (ii) involves an a posteriori modification of the structure of the
classifier (network architecture). The two procedures become identical if the weight
elimination in (ii) is based on a 'smallest eigenvalue' criterion.
Procedure (ii) is very reminiscent of Optimal Brain Damage (OBD), a weight pruning procedure applied after training. In OBD, the best candidates for pruning are
those weights which minimize the increase ΔMSE defined in equation (3). The m
weights that are kept do not necessarily correspond to the largest m eigenvalues,
due to the extra factor of (w′ᵢ*)² in equation (3). In either implementation, the
VC-dimension is reduced to h' = dim(w′) = dim(x′) = m.
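For the linear case, OBD thus reduces to ranking rotated weights by their saliency λᵢ(w′ᵢ*)² (pure-Python sketch):

```python
def obd_prune(lam, w_rot, m):
    """Keep the m rotated weights with the largest saliency
    lam_i * (w'_i*)^2 from Eq. (3); set the others to zero."""
    saliency = [l * w * w for l, w in zip(lam, w_rot)]
    keep = set(sorted(range(len(w_rot)),
                      key=saliency.__getitem__, reverse=True)[:m])
    return [w if i in keep else 0.0 for i, w in enumerate(w_rot)]
```

With the factor (w′ᵢ*)² present, a large weight along a moderately curved direction can survive a small weight along a sharply curved one, which is exactly how OBD differs from a pure smallest-eigenvalue criterion.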
2.5 WEIGHT DECAY
Capacity can also be controlled through an additional term in the cost function, to
be minimized simultaneously with Al S E. Linear classifiers can be ranked according
to the norm IIwll2 = L1=1
of the weight vector. A structure is constructed
wJ
by allowing within the subset Sr only those classifiers which satisfy ‖w‖² < Cr.
The positive bounds Cr form an increasing sequence: C1 < C2 < ... < Cr < ...
This sequence can be matched with a monotonically decreasing sequence of positive
Lagrange multipliers γ1 ≥ γ2 ≥ ... ≥ γr ≥ ..., such that our training problem, stated
as the minimization of the MSE within a specific set Sr, is implemented through the
minimization of a new cost function: MSE + γr‖w‖². This is equivalent to the
Weight Decay procedure (WD). In a mechanical analogy, the term γr‖w‖² is like
the energy of a spring of tension γr which pulls the weights to zero. As it is easier to
pull in the directions of small curvature of the MSE, WD pulls the weights to zero
predominantly along the principal directions of the Hessian matrix H associated
with small eigenvalues.
In the principal axis representation, the minimum w^γ of the cost function
MSE + γ‖w‖² is a simple function of the minimum w⁰ of the MSE in the
γ → 0⁺ limit: wᵢ^γ = wᵢ⁰ λᵢ/(λᵢ + γ). The weight wᵢ⁰ is attenuated by a factor
λᵢ/(λᵢ + γ). Weights become negligible for γ ≫ λᵢ, and remain unchanged for
γ ≪ λᵢ. The effect of this attenuation can be compared to that of weight pruning.
Pruning all weights such that λᵢ < γ reduces the capacity to:

h' = Σ_{i=1}^{n} θ_γ(λᵢ),   (4)

where θ_γ(u) = 1 if u > γ and θ_γ(u) = 0 otherwise.
By analogy, we introduce the Weight Decay capacity:
h' = Σ_{i=1}^{n} λᵢ/(λᵢ + γ).   (5)
This expression arises in various theoretical frameworks [Moo92, McK92], and is
valid only for broad spectra of eigenvalues.
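Both the attenuation and the capacity of Eq. (5) are one-liners (pure-Python sketch):

```python
def wd_capacity(lam, gamma):
    """Weight Decay capacity h' of Eq. (5): each eigen-direction
    contributes lam_i / (lam_i + gamma), between 0 and 1."""
    return sum(l / (l + gamma) for l in lam)

def wd_attenuate(w0_rot, lam, gamma):
    """Per-axis attenuation of the MSE minimum:
    w_i^gamma = w_i^0 * lam_i / (lam_i + gamma)."""
    return [w * l / (l + gamma) for w, l in zip(w0_rot, lam)]
```

Each direction contributes a fractional "soft" count between 0 and 1, which interpolates smoothly between the hard pruning count of Eq. (4) and the full dimension n.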
3 SMOOTHING, HIGHER-ORDER UNITS, AND REGULARIZATION
Combining several different structures achieves further performance improvements.
The combination of exponential smoothing (a preprocessing transformation of input
space) and regularization (a modification of the learning algorithm) is shown here to
improve character recognition. The generalization ability is dramatically improved
by the further introduction of second-order units (an architectural modification).
3.1 SMOOTHING
Smoothing is a preprocessing which aims at reducing the effective dimension of
input space by degrading the resolution: after smoothing, decimation of the inputs
could be performed without further image degradation. Smoothing is achieved here
through convolution with an exponential kernel:
BLURRED_PIXEL(i, j) = [ Σ_k Σ_l PIXEL(i+k, j+l) exp(−β √(k² + l²)) ] / [ Σ_k Σ_l exp(−β √(k² + l²)) ],
where β is the smoothing parameter which determines the structure.
Convolution with the chosen kernel is an invertible linear operation. Such preprocessing results in no capacity change for an MSE-trained linear classifier. Smoothing
only modifies the spectrum of eigenvalues and must be combined with an eigenvalue-based regularization procedure, such as OBD or WD, to obtain a performance improvement through a capacity decrease.
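A direct implementation of the kernel (pure Python; as an approximation for this sketch, the kernel is truncated to a 5x5 neighbourhood, while the denominator sums the full truncated kernel as in the formula):

```python
import math

def blur(pixels, beta):
    """Exponential-kernel smoothing of Sec. 3.1, with offsets
    k, l restricted to [-2, 2]."""
    rows, cols = len(pixels), len(pixels[0])
    out = [[0.0] * cols for _ in range(rows)]
    offsets = range(-2, 3)
    for i in range(rows):
        for j in range(cols):
            num = den = 0.0
            for k in offsets:
                for l in offsets:
                    w = math.exp(-beta * math.hypot(k, l))
                    den += w
                    if 0 <= i + k < rows and 0 <= j + l < cols:
                        num += w * pixels[i + k][j + l]
            out[i][j] = num / den
    return out
```

Smaller β spreads the kernel wider and blurs more; large β leaves the image nearly unchanged.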
3.2 HIGHER-ORDER UNITS
Higher-order (or sigma-pi) units can be substituted for the linear units to get polynomial classifiers: F(x, w) = θ(wᵀξ(x)), where ξ(x) is an m-dimensional vector
(m > n) with components: x₁, x₂, ..., xₙ, (x₁x₁), (x₁x₂), ..., (xₙxₙ), ..., (x₁x₂...xₙ).
The structure is geared towards increasing the capacity, and is controlled by the order of the polynomial: S1 contains all the linear terms, S2 linear plus quadratic, etc.
Computations are kept tractable with the method proposed in reference [Pog75].
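The second-order expansion used in the experiments can be sketched directly (Python):

```python
from itertools import combinations_with_replacement

def second_order_features(x):
    """Sigma-pi expansion of Sec. 3.2 up to second order: the raw
    inputs x_i followed by all products x_i * x_j with i <= j."""
    return list(x) + [a * b for a, b in combinations_with_replacement(x, 2)]
```

The feature count grows as n + n(n+1)/2, i.e. on the order of n², which is why the nominal number of weights explodes while, as shown in Section 4, the effective capacity can stay small.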
4 EXPERIMENTAL RESULTS
Experiments were performed on the benchmark problem of handwritten digit recognition described in reference [GPP+89]. The database consists of 1200 (16 x 16)
binary pixel images, divided into 600 training examples and 600 test examples.
In figure 1, we compare the results obtained by pruning inputs or weights with
PCA and the results obtained with WD. The overall appearance of the curves is
very similar. In both cases, the capacity (computed from (4) and (5)) decreases as
a function of γ, whereas the training error increases. For the optimum value γ*,
the capacity is only 1/3 of the nominal capacity, computed solely on the basis of
the network architecture. At the price of some error on the training set, the error
rate on the test set is only half the error rate obtained with γ → 0⁺.
The competition between capacity and training error always results in a unique
minimum of the guaranteed risk (1). It is remarkable that our experiments show
the minimum of Eguarant coinciding with the minimum of Etest. Either of these two
quantities can therefore be used to determine γ*. In principle, another independent
test set should be used to get a reliable estimate of Egene (cross-validation). It
seems therefore advantageous to determine γ* using the minimum of Eguarant and
use the test set to predict the generalization performance.
Using Eguarant to determine γ* raises the problem of determining the capacity of the
system. The capacity can be measured when analytic computation is not possible.
Measurements performed with the method proposed by Vapnik, Levin, and Le Cun
yield results in good agreement with those obtained using (5). The method yields
an effective VC-dimension which accounts for the global capacity of the system,
including the effects of input data, architecture, and learning algorithm 2.
2 Schematically, measurements of the effective VC-dimension consist of splitting the
training data into two subsets. The difference between Etrain in these subsets is maximized. The value of h is extracted from the fit to a theoretical prediction for such maximal
discrepancy.
[Figure 1 appears here: two panels, (a) and (b), each plotting percent error (Etrain, Etest, and the rescaled guaranteed risk) and the capacity h' against log γ.]
Figure 1: Percent error and capacity h' as a function of log γ (linear classifier, no
smoothing): (a) weight/input pruning via PCA (γ is a threshold), (b) WD (γ is the
decay parameter). The guaranteed risk has been rescaled to fit in the figure.
Table 1: Etest for Smoothing, WD, and Higher-Order Combined.

  β    | γ       | 1st order | 2nd order
  ---- | ------- | --------- | ---------
  0    | γ*      | 6.3       | 1.5
  1    | γ*      | 5.0       | 0.8
  2    | γ*      | 4.5       | 1.2
  10   | γ*      | 4.3       | 1.3
  any  | γ → 0⁺  | 12.7      | 3.3
In table 1 we report results obtained when several structures are combined. Weight
decay with γ = γ* reduces Etest by a factor of 2. Input space smoothing used in
conjunction with WD results in an additional reduction by a factor of 1.5. The
best performance is achieved for the highest level of smoothing, β = 10, for which
the blurring is considerable. As expected, smoothing has no effect in the absence
of WD.
The use of second-order units provides an additional factor of 5 reduction in Etest.
For second-order units, the number of weights scales like the square of the number
of inputs: n² = 66049. But the capacity (5) is found to be only 196 for the optimum
values of γ and β.
5 CONCLUSIONS AND EPILOGUE
Our results indicate that the VC-dimension must measure the global capacity of
the system. It is crucial to incorporate the effects of preprocessing of the input data
and modifications of the learning algorithm. Capacities defined solely on the basis
of the network architecture give overly pessimistic upper bounds.
The method of SRM provides a powerful tool for tuning the capacity. We have
shown that structures acting at different levels (preprocessing, architecture, learning mechanism) can produce similar effects. We have then combined three different
structures to improve generalization. These structures have interesting complementary properties. The introduction of higher-order units increases the capacity.
Smoothing and weight decay act in conjunction to decrease it.
Elaborate neural networks for character recognition [LBD+90, GAL+91] also incorporate similar complementary structures. In multilayer sigmoid-unit networks, the
capacity is increased through additional hidden units. Feature extracting neurons
introduce smoothing, and regularization follows from prematurely stopping training
before reaching the M S E minimum. When initial weights are chosen to be small,
this stopping technique produces effects similar to those of weight decay.
Acknowledgments
We wish to thank L. Jackel's group at Bell Labs for useful discussions, and are
particularly grateful to E. Levin and Y. Le Cun for communicating to us the unpublished method of computing the effective VC-dimension.
References
[DH73] R. O. Duda and P. E. Hart. Pattern Classification and Scene Analysis. Wiley and Son, 1973.
[GAL+91] I. Guyon, P. Albrecht, Y. Le Cun, J. Denker, and W. Hubbard. Design of a neural network character recognizer for a touch terminal. Pattern Recognition, 24(2), 1991.
[GPP+89] I. Guyon, I. Poujaud, L. Personnaz, G. Dreyfus, J. Denker, and Y. Le Cun. Comparing different neural network architectures for classifying handwritten digits. In Proceedings of the International Joint Conference on Neural Networks, volume II, pages 127-132. IEEE, 1989.
[LBD+90] Y. Le Cun, B. Boser, J. S. Denker, D. Henderson, R. E. Howard, W. Hubbard, and L. D. Jackel. Back-propagation applied to handwritten zip code recognition. Neural Computation, 1(4), 1990.
[LDS90] Y. Le Cun, J. S. Denker, and S. A. Solla. Optimal brain damage. In D. S. Touretzky, editor, Advances in Neural Information Processing Systems 2 (NIPS 89), pages 598-605. Morgan Kaufmann, 1990.
[McK92] D. McKay. A practical bayesian framework for backprop networks. In this volume, 1992.
[Moo92] J. Moody. Generalization, weight decay and architecture selection for non-linear learning systems. In this volume, 1992.
[Pog75] T. Poggio. On optimal nonlinear associative recall. Biol. Cybern., 19:201, 1975.
[The89] C. W. Therrien. Decision, Estimation and Classification: An Introduction to Pattern Recognition and Related Topics. Wiley, 1989.
[Vap82] V. Vapnik. Estimation of Dependences Based on Empirical Data. Springer-Verlag, 1982.
[Vap92] V. Vapnik. Principles of risk minimization for learning theory. In this volume, 1992.
4,554 | 5,120 | Synthesizing Robust Plans
under Incomplete Domain Models
Tuan A. Nguyen
Arizona State University
[email protected]

Subbarao Kambhampati
Arizona State University
[email protected]

Minh Do
NASA Ames Research Center
[email protected]
Abstract
Most current planners assume complete domain models and focus on generating
correct plans. Unfortunately, domain modeling is a laborious and error-prone task,
thus real world agents have to plan with incomplete domain models. While domain experts cannot guarantee completeness, often they are able to circumscribe
the incompleteness of the model by providing annotations as to which parts of the
domain model may be incomplete. In such cases, the goal should be to synthesize
plans that are robust with respect to any known incompleteness of the domain. In
this paper, we first introduce annotations expressing the knowledge of the domain
incompleteness and formalize the notion of plan robustness with respect to an incomplete domain model. We then show an approach to compiling the problem of
finding robust plans to the conformant probabilistic planning problem, and present
experimental results with the Probabilistic-FF planner.
1 Introduction
In the past several years, significant strides have been made in scaling up plan synthesis techniques.
We now have technology to routinely generate plans with hundreds of actions. All this work, however, makes a crucial assumption: that the action models of an agent are completely known in
advance. While there are domains where knowledge-engineering such detailed models is necessary
and feasible (e.g., mission planning domains in NASA and factory-floor planning), it is increasingly
recognized (c.f. [13]) that there are also many scenarios where insistence on correct and complete
models renders the current planning technology unusable. The incompleteness in such cases arises
because domain writers do not have the full knowledge of the domain physics. One tempting idea is
to wait until the models become complete, either by manual revision or by machine learning. Alas,
the users often don't have the luxury of delaying their decision making. For example, although there
exist efforts [1, 26] that attempt to either learn models from scratch or revise existing ones, their
operation is contingent on the availability of successful plan traces, or access to execution experience. There is thus a critical need for planning technology that can get by with partially specified
domain models, and yet generate plans that are "robust" in the sense that they are likely to execute
successfully in the real world.
This paper addresses the problem of formalizing the notion of plan robustness with respect to an
incomplete domain model, and connects the problem of generating a robust plan under such model
to conformant probabilistic planning [15, 11, 2, 4]. Following Garland & Lesh [7], we shall assume
that although the domain modelers cannot provide complete models, often they are able to provide
annotations on the partial model circumscribing the places where it is incomplete. In our framework,
these annotations consist of allowing actions to have possible preconditions and effects (in addition
to the standard necessary preconditions and effects).
As an example, consider a variation of the Gripper domain, a well-known planning benchmark
domain. The robot has one gripper that can be used to pick up balls, which are of two types light and
heavy, from one room and move them to another room. The modeler suspects that the gripper may
have an internal problem, but this cannot be confirmed until the robot actually executes the plan. If
it actually has the problem, the execution of the pick-up action succeeds only with balls that are not
1
heavy, but if it has no problem, it can always pickup all types of balls. The modeler can express
this partial knowledge about the domain by annotating the action with a statement representing the
possible precondition that balls should be light.
Incomplete domain models with such possible preconditions and effects implicitly define an exponential set of complete domain models, with the semantics that the real domain model is guaranteed
to be one of these. The robustness of a plan can now be formalized in terms of the cumulative probability mass of the complete domain models under which it succeeds. We propose an approach that
compiles the problem of finding robust plans into the conformant probabilistic planning problem.
We then present empirical results showing interesting relation between aspects such as the amount
domain incompleteness, solving time and plan quality.
2 Problem Formulation
We define an incomplete domain model D̃ as D̃ = ⟨F, A⟩, where F = {p_1, p_2, ..., p_m} is a set of propositions and A is a set of actions a, each of which might be incompletely specified. We denote T and F as the true and false truth values of propositions. A state s ⊆ F is a set of propositions. In addition to the proposition sets that are known as its preconditions Pre(a) ⊆ F, add effects Add(a) ⊆ F and delete effects Del(a) ⊆ F, each action a ∈ A also contains the following annotations:
? Possible precondition set P̃re(a) ⊆ F \ Pre(a): contains propositions that action a might need as its preconditions.
? Possible add (delete) effect set Ãdd(a) ⊆ F \ Add(a) (D̃el(a) ⊆ F \ Del(a)): contains propositions that the action a might add (delete, respectively) after its execution.
In addition, each possible precondition, add and delete effect p of the action a is associated with a weight w_a^pre(p), w_a^add(p) and w_a^del(p) (0 < w_a^pre(p), w_a^add(p), w_a^del(p) < 1) representing the domain writer's assessment of the likelihood that p will actually be realized as a precondition, add and delete effect of a (respectively) during plan execution. Possible preconditions and effects whose likelihood of realization is not given are assumed to have weights of 1/2. Propositions that are not listed in those "possible lists" of an action are assumed to be not affecting or being affected by the action.[1]
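As a concrete (hypothetical) encoding of these annotations, an incompletely specified action can be stored as its known sets alongside weight maps for the possible ones; the Gripper-style proposition names and the 0.5 weights below are illustrative assumptions, not part of the formalism:

```python
from dataclasses import dataclass, field

@dataclass
class IncompleteAction:
    """An incompletely specified action: known preconditions/effects plus
    annotated possible ones, each mapped to the domain writer's weight
    (defaulting to 1/2 when no estimate is given)."""
    pre: set
    add: set
    delete: set
    poss_pre: dict = field(default_factory=dict)   # {prop: w_a^pre(prop)}
    poss_add: dict = field(default_factory=dict)   # {prop: w_a^add(prop)}
    poss_del: dict = field(default_factory=dict)   # {prop: w_a^del(prop)}

# The annotated pick-up action of the Gripper example (proposition names
# are made up for illustration):
pick_up = IncompleteAction(pre={'at-robby', 'free'}, add={'carry'},
                           delete={'free'},
                           poss_pre={'light': 0.5}, poss_add={'dirty': 0.5})
```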
Given an incomplete domain model D̃, we define its completion set ⟨⟨D̃⟩⟩ as the set of complete domain models whose actions have all the necessary preconditions, adds and deletes, and a subset of the possible preconditions, possible adds and possible deletes. Since any subset of P̃re(a), Ãdd(a) and D̃el(a) can be realized as preconditions and effects of action a, there is an exponentially large number of possible complete domain models D_i ∈ ⟨⟨D̃⟩⟩ = {D_1, D_2, ..., D_{2^K}}, where K = Σ_{a∈A} (|P̃re(a)| + |Ãdd(a)| + |D̃el(a)|). For each complete model D_i, we denote the corresponding sets of realized preconditions and effects for each action a as Pre_i(a), Add_i(a) and Del_i(a); equivalently, its complete sets of preconditions and effects are Pre(a) ∪ Pre_i(a), Add(a) ∪ Add_i(a) and Del(a) ∪ Del_i(a).
The projection of a sequence of actions π from an initial state I according to an incomplete domain model D̃ is defined in terms of the projections of π from I according to each complete domain model D_i ∈ ⟨⟨D̃⟩⟩:

γ(π, I, D̃) = {γ(π, I, D_i) | D_i ∈ ⟨⟨D̃⟩⟩}    (1)

where the projection over complete models is defined in the usual STRIPS way, with one important difference. Specifically, the result of applying an action a, which is complete in D_i, in a state s is defined as follows:

γ(⟨a⟩, s, D_i) = (s \ (Del(a) ∪ Del_i(a))) ∪ (Add(a) ∪ Add_i(a)),

if all preconditions of a are satisfied in s, and is taken to be s otherwise (rather than as an undefined state); in other words, actions in our setting have "soft" preconditions and thus are applicable in any
state. Such a generous execution semantics (GES) is critical from an application point of view: with incomplete models, failure of actions should be expected, and the plan needs to be "robustified" against them during synthesis. The GES facilitates this by ensuring that the plan as a whole does not have to fail if an individual action fails (without it, failing actions doom the plan and thus cannot be supplanted). The resulting state of applying a sequence of complete actions π = ⟨a_1, ..., a_n⟩ in s with respect to D_i is defined as:

γ(π, s, D_i) = γ(⟨a_n⟩, γ(⟨a_1, ..., a_{n-1}⟩, s, D_i), D_i).

[1] Our incompleteness annotations therefore can also be used to model domains in which the domain writer can only provide lists of known preconditions/effects of actions, and optionally specify those known to be not in the lists.
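The GES projection under one complete model can be sketched in a few lines; a complete model is assumed here to be a map from action names to their realized (preconditions, adds, deletes) triples, an encoding of our own choosing:

```python
def project(plan, init, model):
    """GES projection of an action sequence under one complete model D_i:
    an action whose preconditions do not hold in the current state simply
    leaves the state unchanged instead of making the plan undefined."""
    s = set(init)
    for a in plan:
        pre, add, delete = model[a]  # realized Pre/Add/Del sets of action a
        if pre <= s:
            s = (s - delete) | add
        # else: "soft" preconditions -- skip a, keep s unchanged (GES)
    return s

# A toy complete model (action and proposition names are illustrative):
TOY = {'a': ({'p'}, {'q'}, set()),
       'b': (set(), {'r'}, {'p'})}
```

Note that with GES an inapplicable action is a no-op, so `project` never raises; under non-GES semantics the same situation would fail the whole plan.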
A planning problem with incomplete domain D̃ is P̃ = ⟨D̃, I, G⟩, where I ⊆ F is the set of propositions that are true in the initial state (and all the remaining are false), and G is the set of goal propositions. An action sequence π is considered a valid plan for P̃ if π solves the problem in at least one completion of ⟨⟨D̃⟩⟩; specifically, ∃ D_i ∈ ⟨⟨D̃⟩⟩ such that γ(π, I, D_i) ⊨ G. Given that ⟨⟨D̃⟩⟩ can be exponentially large in terms of possible preconditions and effects, validity is too weak a guarantee of plan quality. What we need is a notion that π succeeds in most of the highly likely completions of D̃. We do this in terms of a robustness measure, which will be presented in the next section.
Modeling assumptions underlying our formulation: From the modeling point of view, the possible precondition and effect sets can be modeled at either the grounded action or the action schema level (and thus applicable to all grounded actions sharing the same action schema). From a practical point of view, however, incompleteness annotations at the ground level hugely increase the burden on domain writers. In our formal treatment, we therefore assume that annotations are specified at the schema level.

[Figure 1: Description of the incomplete action schema pick-up in the Gripper domain.]
Since possible preconditions and effects can be represented as random variables, they can in principle be modeled using graphical models such as Markov Logic Networks and Bayesian Networks
[14]. Though it appears to be an interesting technical challenge, this would require a significant
additional knowledge input from the domain writer, and thus less likely to be helpful in practice. We
therefore assume that the possible preconditions and effects are uncorrelated, thus can be realized
independently (both within each action schema and across different ones).
Example: Figure 1 shows the description of incomplete action pick-up(?b - ball,?r - room) as
described above at the schema level. In addition to the possible precondition (light ?b) on the weight
of the ball ?b, we also assume that since the modeler is unsure if the gripper has been cleaned or
not, she models it with a possible add effect (dirty ?b) indicating that the action might make the
ball dirty. Those two possible preconditions and effects can be realized independently, resulting
in four possible candidate complete domains (assuming all other action schemas in the domain are
completely described).
3 A Robustness Measure for Plans
The robustness of a plan π for the problem P̃ = ⟨D̃, I, G⟩ is defined as the cumulative probability mass of the completions of D̃ under which π succeeds (in achieving the goals). More formally, let Pr(D_i) be the probability distribution representing the modeler's estimate of the probability that a given model in ⟨⟨D̃⟩⟩ is the real model of the world (such that Σ_{D_i ∈ ⟨⟨D̃⟩⟩} Pr(D_i) = 1). The robustness of π is defined as follows:

R(π, P̃ : ⟨D̃, I, G⟩) ≜ Σ_{D_i ∈ ⟨⟨D̃⟩⟩, γ(π,I,D_i) ⊨ G} Pr(D_i)    (2)

It is easy to see that if R(π, P̃) > 0, then π is a valid plan for P̃.
Note that given the uncorrelated incompleteness assumption, the probability Pr(D_i) for a model D_i ∈ ⟨⟨D̃⟩⟩ can be computed as the product of the weights w_a^pre(p), w_a^add(p), and w_a^del(p) for all a ∈ A and its possible preconditions/effects p if p is realized in the model (or the product of their "complements" 1 - w_a^pre(p), 1 - w_a^add(p), and 1 - w_a^del(p) if p is not realized).
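This product is straightforward to compute; a minimal sketch, using our own flat encoding keyed by possible precondition/effect:

```python
def model_probability(realized, weights):
    """Pr(D_i) under the uncorrelated assumption: the product of w(p) for
    every possible precondition/effect p realized in the completion D_i,
    and 1 - w(p) for every one that is not."""
    pr = 1.0
    for p, w in weights.items():
        pr *= w if p in realized else 1.0 - w
    return pr

# e.g. three annotated items with weights 0.9, 0.5, 0.5:
WEIGHTS = {'x': 0.9, 'y': 0.5, 'z': 0.5}
```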
Example: Figure 2 shows an example with an incomplete domain model D̃ = ⟨F, A⟩ with F = {p1, p2, p3} and A = {a1, a2}, and a solution plan π = ⟨a1, a2⟩ for the problem P̃ = ⟨D̃, I = {p2}, G = {p3}⟩. The incomplete model is: Pre(a1) = ∅, P̃re(a1) = {p1}, Add(a1) = {p2, p3}, Ãdd(a1) = ∅, Del(a1) = ∅, D̃el(a1) = ∅; Pre(a2) = {p2}, P̃re(a2) = ∅, Add(a2) = ∅, Ãdd(a2) = {p3}, Del(a2) = ∅, D̃el(a2) = {p1}. Given that the total number of possible preconditions and effects is 3, the total number of completions (|⟨⟨D̃⟩⟩|) is 2^3 = 8, for each of which the plan π may succeed or fail to achieve G, as shown in the table.

[Figure 2: Example for a set of complete candidate domain models, and the corresponding plan status. Circles with solid and dashed boundaries respectively are propositions that are known to be T and that might be F when the plan executes (see more in text).]

In the fifth candidate model, for instance, p1 and p3 are realized as a precondition of a1 and an add effect of a2 (respectively), whereas p1 is not realized as a delete effect of action a2. Even though a1 could not execute (and thus p3 remains false in the second state), the goal eventually is achieved by action a2 with respect to this candidate model. Overall, there are two of the eight candidate models where π fails and six for which it succeeds. The robustness value of the plan is R(π) = 3/4 if Pr(D_i) is the uniform distribution. However, if the domain writer thinks that p1 is very likely to be a precondition of a1 and provides w_{a1}^pre(p1) = 0.9, the robustness of π decreases to R(π) = 2 × (0.9 × 0.5 × 0.5) + 4 × (0.1 × 0.5 × 0.5) = 0.55 (intuitively, the last four models with which π succeeds are very unlikely to be the real one). Note that under the standard non-generous execution semantics (non-GES), where action failure causes plan failure, the plan π would be mistakenly considered to fail to achieve G in the first two complete models, since a2 is prevented from execution.
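The example's robustness value can be checked by brute-force enumeration of the 2^3 completions; the sketch below hard-codes our own encoding of the Figure 2 model and applies GES when projecting:

```python
from itertools import product

# The three annotated items of the Figure 2 example, with w^pre_a1(p1) = 0.9
# and the default 1/2 weights for the other two.
CHOICES = [(('a1', 'pre', 'p1'), 0.9),
           (('a2', 'add', 'p3'), 0.5),
           (('a2', 'del', 'p1'), 0.5)]

def robustness(plan, init, goal):
    """Equation (2) by brute force: sum Pr(D_i) over the completions D_i
    under which the GES projection of the plan reaches the goal."""
    total = 0.0
    for bits in product([True, False], repeat=len(CHOICES)):
        realized = {key for (key, _), b in zip(CHOICES, bits) if b}
        pr = 1.0
        for (key, w), b in zip(CHOICES, bits):
            pr *= w if b else 1.0 - w
        model = {
            'a1': ({'p1'} if ('a1', 'pre', 'p1') in realized else set(),
                   {'p2', 'p3'},
                   set()),
            'a2': ({'p2'},
                   {'p3'} if ('a2', 'add', 'p3') in realized else set(),
                   {'p1'} if ('a2', 'del', 'p1') in realized else set()),
        }
        s = set(init)
        for a in plan:                      # GES projection
            pre, add, delete = model[a]
            if pre <= s:
                s = (s - delete) | add
        if goal <= s:
            total += pr
    return total
```

Running `robustness(['a1', 'a2'], {'p2'}, {'p3'})` reproduces the 0.55 value derived in the text.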
3.1 A Spectrum of Robust Planning Problems
Given this set up, we can now talk about a spectrum of problems related to planning under incomplete domain models:
Robustness Assessment (RA): Given a plan π for the problem P̃, assess the robustness of π.

Maximally Robust Plan Generation (RG*): Given a problem P̃, generate the maximally robust plan π*.

Generating a Plan with a Desired Level of Robustness (RG_ρ): Given a problem P̃ and a robustness threshold ρ (0 < ρ ≤ 1), generate a plan π with robustness greater than or equal to ρ.

Cost-sensitive Robust Plan Generation (RG*_c): Given a problem P̃ and a cost bound c, generate a plan π of maximal robustness subject to cost bound c (where the cost of a plan π is defined as the cumulative cost of the actions in π).

Incremental Robustification (RI_c): Given a plan π for the problem P̃, improve the robustness of π, subject to a cost budget c.
The problem of assessing robustness of plans, RA, can be tackled by compiling it into a weighted model-counting problem. The following theorem shows that RA with a uniform distribution over candidate complete models is complete for the #P complexity class [22], and thus the robustness assessment problem is at least as hard as NP-complete problems.[2]

Theorem 1. The problem of assessing plan robustness with the uniform distribution of candidate complete models is #P-complete.

For plan synthesis problems, we can talk about either generating a maximally robust plan, RG*, or finding a plan with a robustness value above the given threshold, RG_ρ. A related issue is that of the interaction between plan cost and robustness. Often, increasing robustness involves using additional (or costlier) actions to support the desired goals, and thus comes at the expense of increased plan cost. We can also talk about the cost-constrained robust plan generation problem RG*_c. Finally, in practice, we are often interested in increasing the robustness of a given plan (either during iterative search, or during mixed-initiative planning). We thus also have the incremental variant RI_c. In the next section, we will focus on the problem of synthesizing plans with at least a robustness value ρ.

[2] The proof is based on a counting reduction from the problem of counting satisfying assignments for MONOTONE-2SAT [23]. We omit it due to the space limit.
4 Synthesizing Robust Plans
Given a planning problem P̃ with an incomplete domain D̃, the ultimate goal is to synthesize a plan having a desired level of robustness, or one with maximal robustness value. In this section, we will show that the problem of generating a plan with at least ρ robustness (0 < ρ ≤ 1) can be compiled into an equivalent conformant probabilistic planning problem. The most robust plan can then be found with a sequence of increasing threshold values.
4.1 Conformant Probabilistic Planning
Following the formalism in [4], a domain in conformant probabilistic planning (CPP) is a tuple D′ = ⟨F′, A′⟩, where F′ and A′ are the sets of propositions and probabilistic actions, respectively. A belief state b : 2^{F′} → [0, 1] is a distribution over states s ⊆ F′ (we denote s ∈ b if b(s) > 0). Each action a′ ∈ A′ is specified by a set of preconditions Pre(a′) ⊆ F′ and conditional effects E(a′). For each e = (cons(e), O(e)) ∈ E(a′), cons(e) ⊆ F′ is the condition set and O(e) determines the set of outcomes ε = (Pr(ε), add(ε), del(ε)) that will add and delete the proposition sets add(ε), del(ε) into and from the resulting state with probability Pr(ε) (0 ≤ Pr(ε) ≤ 1, Σ_{ε∈O(e)} Pr(ε) = 1). All condition sets of the effects in E(a′) are assumed to be mutually exclusive and exhaustive. The action a′ is applicable in a belief state b if Pre(a′) ⊆ s for all s ∈ b, and the probability of a state s′ in the resulting belief state is b_{a′}(s′) = Σ_{s ⊇ Pre(a′)} b(s) Σ_{ε∈O′(e)} Pr(ε), where e ∈ E(a′) is the conditional effect such that cons(e) ⊆ s, and O′(e) ⊆ O(e) is the set of outcomes ε such that s′ = (s ∪ add(ε)) \ del(ε).

Given the domain D′, a problem P′ is a quadruple P′ = ⟨D′, b_I, G′, ρ′⟩, where b_I is an initial belief state, G′ is a set of goal propositions and ρ′ is the acceptable goal satisfaction probability. A sequence of actions π′ = (a′_1, ..., a′_n) is a solution plan for P′ if a′_i is applicable in the belief state b_i (assuming b_1 ≡ b_I), which results in b_{i+1} (1 ≤ i ≤ n), and it achieves all goal propositions with at least ρ′ probability.
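The belief-state update above can be sketched as follows; the dictionary representation and names are ours, a toy encoding rather than Probabilistic-FF's internals:

```python
def apply_cpp_action(belief, pre, effects):
    """One CPP action applied to a belief state.  `belief` maps frozenset
    states to probabilities; `effects` is a list of (cond, outcomes) pairs
    whose conditions are mutually exclusive and exhaustive; each outcome
    is a (probability, add, delete) triple."""
    if not all(pre <= s for s in belief):
        raise ValueError("action is not applicable in this belief state")
    new_belief = {}
    for s, p in belief.items():
        for cond, outcomes in effects:
            if cond <= s:                   # exactly one condition fires per state
                for pr_o, add, delete in outcomes:
                    s2 = frozenset((s - delete) | add)
                    new_belief[s2] = new_belief.get(s2, 0.0) + p * pr_o
                break
    return new_belief
```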
4.2 Compilation
Given an incomplete domain model D̃ = ⟨F, A⟩ and a planning problem P̃ = ⟨D̃, I, G⟩, we now describe a compilation that translates the problem of synthesizing a solution plan π for P̃ such that R(π, P̃) ≥ ρ into a CPP problem P′. At a high level, the realization of possible preconditions p ∈ P̃re(a) and effects q ∈ Ãdd(a), r ∈ D̃el(a) of an action a ∈ A can be understood as being determined by the truth values of hidden propositions p_a^pre, q_a^add and r_a^del that are certain (i.e. unchanged in any world state) but unknown. Specifically, the applicability of the action in a state s ⊆ F depends on the possible preconditions p that are realized (i.e. p_a^pre = T) and their truth values in s. Similarly, the values of q and r are affected by a in the resulting state only if they are realized as add and delete effects of the action (i.e., q_a^add = T, r_a^del = T). There are in total 2^{|P̃re(a)| + |Ãdd(a)| + |D̃el(a)|} realizations of the action a, and all of them should be considered simultaneously in checking the applicability of the action and in defining the corresponding resulting states.

With those observations, we use multiple conditional effects to compile away incomplete knowledge on the preconditions and effects of the action a. Each conditional effect corresponds to one realization of the action: it can be fired only if p = T whenever p_a^pre = T, and it adds (removes) an effect q (r) into (from) the resulting state depending on the values of q_a^add (r_a^del, respectively) in the realization. While the partial knowledge can be removed, the hidden propositions introduce uncertainty into the initial state, therefore making it a belief state. Since actions are always applicable in our formulation, resulting in either a new or the same successor state, the preconditions Pre(a) must be modeled as conditions of all conditional effects. We are now ready to formally specify the resulting domain D′ and problem P′.
For each action a ∈ A, we introduce new propositions p_a^pre, q_a^add, r_a^del and their negations np_a^pre, nq_a^add, nr_a^del for each p ∈ P̃re(a), q ∈ Ãdd(a) and r ∈ D̃el(a) to determine whether they are realized as preconditions and effects of a in the real domain.[3] Let F_new be the set of those new propositions; then F′ = F ∪ F_new is the proposition set of D′.

Each action a′ ∈ A′ is made from one action a ∈ A such that Pre(a′) = ∅, and E(a′) consists of 2^{|P̃re(a)| + |Ãdd(a)| + |D̃el(a)|} conditional effects e. Writing P̄re(a) ⊆ P̃re(a), Ādd(a) ⊆ Ãdd(a) and D̄el(a) ⊆ D̃el(a) for the sets of realized possible preconditions and effects of the action, each conditional effect e is defined by:

? cons(e), the union of the following sets: (i) the certain preconditions Pre(a); (ii) the possible preconditions of a that are realized, together with the hidden propositions representing their realization: P̄re(a) ∪ {p_a^pre | p ∈ P̄re(a)} ∪ {np_a^pre | p ∈ P̃re(a) \ P̄re(a)}; (iii) the hidden propositions corresponding to the realization of possible add (delete) effects of a: {q_a^add | q ∈ Ādd(a)} ∪ {nq_a^add | q ∈ Ãdd(a) \ Ādd(a)} ({r_a^del | r ∈ D̄el(a)} ∪ {nr_a^del | r ∈ D̃el(a) \ D̄el(a)}, respectively);

? the single outcome ε of e, defined as add(ε) = Add(a) ∪ Ādd(a), del(ε) = Del(a) ∪ D̄el(a), and Pr(ε) = 1.

In other words, we create a conditional effect for each subset of the union of the possible precondition and effect sets of the action a. Note that the inclusion of the new propositions derived from P̄re(a), Ādd(a), D̄el(a) and their "complement" sets P̃re(a) \ P̄re(a), Ãdd(a) \ Ādd(a), D̃el(a) \ D̄el(a) makes all condition sets of the action a′ mutually exclusive. As for the other cases (including those in which some precondition in Pre(a) is excluded), the action has no effect on the resulting state, so they can be ignored. The condition sets, therefore, are also exhaustive.
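The enumeration of conditional effects can be sketched directly; the string-suffix names such as `p + '_pre'` stand in for the hidden propositions p_a^pre / np_a^pre, and this is our illustrative encoding, not PFF input syntax:

```python
from itertools import chain, combinations

def powerset(xs):
    """All subsets of xs, from the empty set to xs itself."""
    xs = list(xs)
    return chain.from_iterable(combinations(xs, r) for r in range(len(xs) + 1))

def compile_action(pre, p_pre, add, p_add, delete, p_del):
    """One conditional effect per joint realization (subset) of the possible
    preconditions/adds/deletes, each as a (condition, add, delete) triple."""
    effects = []
    for rp in map(set, powerset(p_pre)):
        for ra in map(set, powerset(p_add)):
            for rd in map(set, powerset(p_del)):
                cond = (set(pre) | rp
                        | {p + '_pre' for p in rp}
                        | {'n' + p + '_pre' for p in set(p_pre) - rp}
                        | {q + '_add' for q in ra}
                        | {'n' + q + '_add' for q in set(p_add) - ra}
                        | {r + '_del' for r in rd}
                        | {'n' + r + '_del' for r in set(p_del) - rd})
                effects.append((frozenset(cond),
                                frozenset(set(add) | ra),
                                frozenset(set(delete) | rd)))
    return effects
```

For a pick-up-style action with one possible precondition and one possible add effect, this yields the expected 2^(1+1) = 4 mutually exclusive conditional effects.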
The initial belief state b_I consists of 2^{|F_new|} states s′ ⊆ F′ such that p ∈ s′ iff p ∈ I (∀p ∈ F), each of which represents a complete domain model D_i ∈ ⟨⟨D̃⟩⟩ and has the probability Pr(D_i), as defined in Section 3. The specification of b_I includes simple Bayesian networks representing the relation between variables in F_new, e.g. p_a^pre and np_a^pre, where the weights w(·) and 1 - w(·) are used to define the conditional probability tables. The goal is G′ = G, and the acceptable goal satisfaction probability is ρ′ = ρ. Theorem 2 shows the correctness of our compilation. It also shows that a plan for P̃ with at least ρ robustness can be obtained directly from solutions of the compiled problem P′.

Theorem 2. Given a plan π = (a_1, ..., a_n) for the problem P̃ and π′ = (a′_1, ..., a′_n), where a′_k is the compiled version of a_k (1 ≤ k ≤ n) in P′, R(π, P̃) ≥ ρ iff π′ achieves all goals with at least ρ probability in P′.
4.3 Experimental Results
In this section, we discuss the results of the compilation with Probabilistic-FF (PFF) on variants of
the Logistics and Satellite domains, where domain incompleteness is modeled on the preconditions
and effects of actions (respectively). Our purpose here is to observe and explain how plan length and
synthesizing time vary with the amount of domain incompleteness and the robustness threshold.4
Logistics: In this domain, each of the two cities C1 and C2 has an airport and a downtown area.
The transportation between the two distant cities can only be done by two airplanes A1 and A2 .
In the downtown area of C_i (i ∈ {1, 2}), there are three heavy containers P_{i1}, ..., P_{i3} that can be
moved to the airport by a truck Ti . Loading those containers onto the truck in the city Ci , however,
requires moving a team of m robots R_{i1}, ..., R_{im} (m ≥ 1), initially located in the airport, to the
downtown area. The source of incompleteness in this domain comes from the assumption that each
pair of robots R_{1j} and R_{2j} (1 ≤ j ≤ m) are made by the same manufacturer M_j; both therefore
might fail to load a heavy container.5 The actions loading containers onto trucks using robots made
by a particular manufacturer (e.g., the action schema load-truck-with-robots-of-M1 using robots of manufacturer M1), therefore, have a possible precondition requiring that containers should not be heavy. To simplify discussion (see below), we assume that robots of different manufacturers may fail to load heavy containers, though independently, with the same probability of 0.7. The goal is to transport all three containers in the city C1 to C2, and vice versa. For this domain, a plan to ship a container to another city involves a step of loading it onto the truck, which can be done by a robot (after moving it from the airport to the downtown). Plans can be made more robust by using additional robots of a different manufacturer after moving them into the downtown areas, at the cost of increased plan length.

[3] These propositions are introduced once, and re-used for all actions sharing the same schema with a.
[4] The experiments were conducted using an Intel Core2 Duo 3.16GHz machine with 4Gb of RAM, and the time limit is 15 minutes.
[5] The uncorrelated incompleteness assumption applies for possible preconditions of action schemas specified for different manufacturers. It should not be confused here that robots R_{1j} and R_{2j} of the same manufacturer M_j can independently have faults.
Satellite: In this domain, there are two satellites S1 and S2 orbiting the planet Earth, on each of which there are m instruments L_{i1}, ..., L_{im} (i ∈ {1, 2}, m ≥ 1) used to take images of interested modes at some direction in space. For each j ∈ {1, ..., m}, the lenses of the instruments L_{ij} were made from a type of material M_j, which might have an error affecting the quality of the images that they take. If the material M_j actually has the error, all instruments L_{ij} produce mangled images. The knowledge of this incompleteness is modeled as a possible add effect of the action taking images using instruments made from M_j (for instance, the action schema take-image-with-instruments-M1 using instruments of type M1) with a probability of p_j, asserting that images taken might be in a bad condition. A typical plan to take an image using an instrument, e.g. L14 of type M4 on the satellite S1, is first to switch on L14, turn the satellite S1 to a ground direction from which L14 can be calibrated, and then take the image. Plans can be made more robust by using additional instruments, which might be on a different satellite, but should be of a different type of material and can also take an image of the interested mode at the same direction.

Figure 3: The results of generating robust plans in the Logistics domain.

ρ     m=1       m=2       m=3        m=4        m=5
0.1   32/10.9   36/26.2   40/57.8    44/121.8   48/245.6
0.2   32/10.9   36/25.9   40/57.8    44/121.8   48/245.6
0.3   32/10.9   36/26.2   40/57.7    44/122.2   48/245.6
0.4   ?         42/42.1   50/107.9   58/252.8   66/551.4
0.5   ?         42/42.0   50/107.9   58/253.1   66/551.1
0.6   ?         ?         50/108.2   58/252.8   66/551.1
0.7   ?         ?         ?          58/253.1   66/551.6
0.8   ?         ?         ?          ?          66/550.9
0.9   ?         ?         ?          ?          ?
Figures 3 and 4 show respectively the results in the Logistics and Satellite domains with ρ ∈ {0.1, 0.2, ..., 0.9} and m ∈ {1, 2, ..., 5}. The number of complete domain models in the two domains is 2^m. For the Satellite domain, the probabilities p_j range from 0.25, 0.3, ... to 0.45 as m increases from 1, 2, ... to 5. For each specific value of ρ and m, we report l/t, where l is the length of the plan and t is the running time (in seconds).

Figure 4: The results of generating robust plans in the Satellite domain.

ρ     m=1      m=2       m=3        m=4        m=5
0.1   10/0.1   10/0.1    10/0.2     10/0.2     10/0.2
0.2   10/0.1   10/0.1    10/0.1     10/0.2     10/0.2
0.3   ?        10/0.1    10/0.1     10/0.2     10/0.2
0.4   ?        37/17.7   37/25.1    10/0.2     10/0.3
0.5   ?        ?         37/25.5    37/79.2    37/199.2
0.6   ?        ?         53/216.7   37/94.1    37/216.7
0.7   ?        ?         ?          53/462.0   ?
0.8   ?        ?         ?          ?          ?
0.9   ?        ?         ?          ?          ?
Cases in which no plan is found within the time limit are denoted by ???, and those where it is provable that no plan with the desired robustness exists are denoted by ???.
As the results indicate, for a fixed amount of domain incompleteness (represented by m), the solution plans in both domains tend to be longer with a higher robustness threshold ρ, and the time to synthesize plans also increases. For instance, in Logistics with m = 5, the plan returned has 48 actions if ρ = 0.3, whereas a 66-length plan is needed if ρ increases to 0.4. On the other hand, we also note that more than the needed number of actions have been used in many solution plans. In the Logistics domain, specifically, it is easy to see that the probability of successfully loading a container onto a truck using robots of k (1 ≤ k ≤ m) different manufacturers is 1 - 0.7^k. However, robots of all five manufacturers are used in a plan when ρ = 0.4, whereas using those of three manufacturers is enough. The relaxation employed by PFF that ignores all but one condition in effects of actions, while enabling an upper-bound computation for plan robustness, is probably too strong and causes an unnecessary increase in plan length.
Also as we would expect, when the amount of domain incompleteness (i.e., m) increases, it takes longer to synthesize plans satisfying a fixed robustness value ρ. As an example, in the Satellite domain, with ρ = 0.6 it takes 216.7 seconds to synthesize a 37-length plan when m = 5, whereas it is only 94.1 seconds for m = 4. Exceptions can be seen with ρ = 0.7, where no plan is found within the time limit when m = 5, although a plan with robustness of 0.7075 exists in the solution space. A probable explanation for this performance is the costly satisfiability tests and weighted model-counting for computing resulting belief states during the search.
5 Related Work
There are currently very few research efforts in automated planning literature that explicitly consider
incompletely specified domain models. To the best of our knowledge, Garland and Lesh [7] were the first to discuss incomplete actions and generate robust plans under incomplete domain models. Their
notion of plan robustness, however, only has tenuous heuristic connections with the likelihood of
successful execution of plans. Weber and Bryce [24] consider a model similar to ours but assume a non-GES formulation during plan synthesis: the plan fails if any action's preconditions are not satisfied. As we mention earlier, this semantics is significantly less helpful from an application point of view, and it is arguably easier. Indeed, their method for generating robust plans relies on the propagation of "reasons" for the failure of each action, assuming that every action before it successfully executes. Such a propagation is no longer applicable under GES. Morwood and Bryce [16] studied the problem of robustness assessment for the same incompleteness formulation in temporal planning domains, where plan robustness is defined as the number of complete models under which temporal constraints are consistent. The work by Fox et al. [6] also explores robustness of plans, but their focus is on temporal plans under unforeseen execution-time variations rather than on incompletely specified domains. Eiter et al. [5] introduce the language K for planning under incomplete knowledge. Their formulation is, however, different from ours in the type of incompleteness (world states vs. action models) and the notion of plans (secure/conformant plans vs. robust plans). Our work can
also be categorized as one particular instance of the general model-lite planning problem, as defined
in [13], in which the author points out a large class of applications where handling incomplete
models is unavoidable due to the difficulty in getting a complete model.
As mentioned earlier, there are complementary approaches (c.f. [1, 26]) that attempt to either learn models from scratch or revise existing ones, given access to successful plan traces or execution
experience, which can then be used to solve new planning problems. These works are different
from ours in both the additional knowledge about the incomplete model (execution experience v.s.
incompleteness annotations), and the notion of solutions (correct with respect to the learned model
v.s. to candidate complete models).
Though not directly addressing a formulation like ours, the work on k-fault plans for non-deterministic planning [12] focused on reducing the "faults" in plan execution. It is, however, based on the context of stochastic/non-deterministic actions rather than incompletely specified ones. The semantics of
the possible preconditions/effects in our incomplete domain models fundamentally differs from non-deterministic and stochastic effects (c.f. work by Kushmerick et al. [15]). While the probability of success can be increased by continuously executing actions with stochastic effects, the consequence of unknown but deterministic effects is consistent over different executions.
In Markov Decision Processes (MDPs), a fairly rich body of work has been done for imprecise transition probabilities [19, 25, 8, 17, 3, 21], using various ways to represent imprecision/incompleteness
in the transition models. These works mainly seek max-min or min-max optimal policies, assuming that Nature acts optimally against the agent. Much of this work is, however, done at the atomic level, while we focus on factored planning models. Our incompleteness formulation can also be extended
for agent modeling, a topic of interest in multi-agent systems (c.f. [10, 9, 20, 18]).
6 Conclusion and Future Work
In this paper, we motivated the need for synthesizing robust plans under incomplete domain models.
We introduced annotations for expressing domain incompleteness, formalized the notion of plan
robustness, and showed an approach that compiles the problem of generating robust plans into conformant probabilistic planning. We presented empirical results showing interesting relations between
aspects such as the amount of domain incompleteness, solving time, and plan quality. We are working on a direct approach that reasons over correctness constraints of plan prefixes and partial relaxed
plans, contrasting it with our compilation method. We also plan to take successful plan traces as a
second type of additional input for generating robust plans.
Acknowledgements: This research is supported in part by the ARO grant W911NF-13-1-0023,
the ONR grants N00014-13-1-0176, N00014-09-1-0017 and N00014-07-1-1049, and the NSF grant
IIS201330813.
References
[1] E. Amir and A. Chang. Learning partially observable deterministic action models. Journal of Artificial Intelligence Research, 33(1):349-402, 2008.
[2] D. Bryce, S. Kambhampati, and D. Smith. Sequential Monte Carlo in probabilistic planning reachability heuristics. Proceedings of ICAPS 2006, 2006.
[3] K. Delgado, S. Sanner, and L. De Barros. Efficient solutions to factored MDPs with imprecise transition probabilities. Artificial Intelligence, 2011.
[4] C. Domshlak and J. Hoffmann. Probabilistic planning via heuristic forward search and weighted model counting. Journal of Artificial Intelligence Research, 30(1):565-620, 2007.
[5] T. Eiter, W. Faber, N. Leone, G. Pfeifer, and A. Polleres. Planning under incomplete knowledge. Computational Logic (CL 2000), pages 807-821, 2000.
[6] M. Fox, R. Howey, and D. Long. Exploration of the robustness of plans. In Proceedings of the National Conference on Artificial Intelligence, volume 21, page 834, 2006.
[7] A. Garland and N. Lesh. Plan evaluation with incomplete action descriptions. In Proceedings of the National Conference on Artificial Intelligence, pages 461-467, 2002.
[8] R. Givan, S. Leach, and T. Dean. Bounded-parameter Markov decision processes. Artificial Intelligence, 122(1-2):71-109, 2000.
[9] P. Gmytrasiewicz and P. Doshi. A framework for sequential planning in multiagent settings. Journal of Artificial Intelligence Research, 24(1):49-79, 2005.
[10] P. Gmytrasiewicz and E. Durfee. Rational coordination in multi-agent environments. Autonomous Agents and Multi-Agent Systems, 3(4):319-350, 2000.
[11] N. Hyafil and F. Bacchus. Conformant probabilistic planning via CSPs. In Proceedings of the Thirteenth International Conference on Automated Planning and Scheduling, pages 205-214, 2003.
[12] R. Jensen, M. Veloso, and R. Bryant. Fault tolerant planning: Toward probabilistic uncertainty models in symbolic non-deterministic planning. In Proceedings of the 14th International Conference on Automated Planning and Scheduling (ICAPS), volume 4, pages 235-344, 2004.
[13] S. Kambhampati. Model-lite planning for the web age masses: The challenges of planning with incomplete and evolving domain models. In Proceedings of the National Conference on Artificial Intelligence, volume 22, page 1601, 2007.
[14] D. Koller and N. Friedman. Probabilistic Graphical Models: Principles and Techniques. MIT Press, 2009.
[15] N. Kushmerick, S. Hanks, and D. Weld. An algorithm for probabilistic planning. Artificial Intelligence, 76(1-2):239-286, 1995.
[16] D. Morwood and D. Bryce. Evaluating temporal plans in incomplete domains. In Twenty-Sixth AAAI Conference on Artificial Intelligence, 2012.
[17] A. Nilim and L. Ghaoui. Robust control of Markov decision processes with uncertain transition matrices. Operations Research, 53(5):780-798, 2005.
[18] F. A. Oliehoek. Value-Based Planning for Teams of Agents in Stochastic Partially Observable Environments. PhD thesis, Informatics Institute, University of Amsterdam, Feb. 2010.
[19] J. Satia and R. Lave Jr. Markovian decision processes with uncertain transition probabilities. Operations Research, pages 728-740, 1973.
[20] S. Seuken and S. Zilberstein. Formal models and algorithms for decentralized decision making under uncertainty. Autonomous Agents and Multi-Agent Systems, 17(2):190-250, 2008.
[21] A. Shapiro and A. Kleywegt. Minimax analysis of stochastic problems. Optimization Methods and Software, 17(3):523-542, 2002.
[22] L. Valiant. The complexity of computing the permanent. Theoretical Computer Science, 8(2):189-201, 1979.
[23] L. Valiant. The complexity of enumeration and reliability problems. SIAM Journal on Computing, 8(3):410-421, 1979.
[24] C. Weber and D. Bryce. Planning and acting in incomplete domains. Proceedings of ICAPS 2011, 2011.
[25] C. White III and H. Eldeib. Markov decision processes with imprecise transition probabilities. Operations Research, pages 739-749, 1994.
[26] Q. Yang, K. Wu, and Y. Jiang. Learning action models from plan examples using weighted MAX-SAT. Artificial Intelligence, 171(2):107-143, 2007.
Which Space Partitioning Tree to Use for Search?
A. G. Gray
Georgia Tech.
Atlanta, GA 30308
[email protected]
P. Ram
Georgia Tech. / Skytree, Inc.
Atlanta, GA 30308
[email protected]
Abstract
We consider the task of nearest-neighbor search with the class of binary space-partitioning trees, which includes kd-trees, principal axis trees and random projection trees, and try to rigorously answer the question "which tree to use for nearest-neighbor search?" To this end, we present theoretical results which imply that
trees with better vector quantization performance have better search performance
guarantees. We also explore another factor affecting the search performance:
margins of the partitions in these trees. We demonstrate, both theoretically and
empirically, that large margin partitions can improve tree search performance.
1 Nearest-neighbor search
Nearest-neighbor search is ubiquitous in computer science. Several techniques exist for nearest-neighbor search, but most algorithms can be categorized into the following two groups based on the indexing scheme used: (1) search with hierarchical tree indices, or (2) search with hash-based indices.
Although multidimensional binary space-partitioning trees (or BSP-trees), such as kd-trees [1], are
widely used for nearest-neighbor search, it is believed that their performance degrades with increasing dimension. Standard worst-case analyses of search with BSP-trees in high dimensions usually
lead to trivial guarantees (such as an Ω(n) search time guarantee for a single nearest-neighbor query
in a set of n points). This is generally attributed to the "curse of dimensionality": in the worst case,
the high dimensionality can force the search algorithm to visit every node in the BSP-tree.
However, these BSP-trees are very simple and intuitive, and are still used in practice with success.
The occasional favorable performance of BSP-trees in high dimensions is attributed to the low
"intrinsic" dimensionality of real data. However, no clear relationship between the BSP-tree search
performance and the intrinsic data properties is known. We present theoretical results which link the
search performance of BSP-trees to properties of the data and the tree. This allows us to identify
implicit factors influencing BSP-tree search performance; knowing these driving factors allows
us to develop successful heuristics for BSP-trees with improved search performance.
Each node in a BSP-tree represents a region of the space and
each non-leaf node has a left and right child representing a disjoint partition of this region with some separating hyperplane
and threshold (w, b). A search query on this tree is usually
answered with a depth-first branch-and-bound algorithm. Algorithm 1 presents a simplified version where a search query
is answered with a small set of neighbor candidates of any desired size by performing a greedy depth-first tree traversal to
a specified depth. This is known as defeatist tree search. We
are not aware of any data-dependent analysis of the quality of
the results from defeatist BSP-tree search. However, Verma et
al. (2009) [2] presented adaptive data-dependent analyses of
some BSP-trees for the task of vector quantization. These results show precise connections between the quantization performance of the BSP-trees and certain properties of the data
(we will present these data properties in Section 2).
Algorithm 1 BSP-tree search
Input: BSP-tree T on set S, query q, desired depth l
Output: Candidate neighbor p
current tree depth lc ← 0
current tree node Tc ← T
while lc < l do
  if ⟨Tc.w, q⟩ + Tc.b ≤ 0 then
    Tc ← Tc.left child
  else
    Tc ← Tc.right child
  end if
  Increment depth lc ← lc + 1
end while
p ← argmin_{r ∈ Tc ∩ S} ‖q − r‖.
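Algorithm 1 is easy to realize in code. Below is a minimal, illustrative Python sketch (ours, not the authors' implementation): it builds a kd-tree-style BSP-tree with axis-aligned median splits, stores a hyperplane (w, b) at each internal node, and answers a query by the greedy defeatist descent of Algorithm 1 followed by a scan of the reached node.

```python
import numpy as np

class BSPNode:
    """A BSP-tree node: stores its points and, if split, a hyperplane (w, b)."""
    def __init__(self, points):
        self.points = points           # points indexed under this node
        self.w = self.b = None         # separating hyperplane <w, x> + b = 0
        self.left = self.right = None

def build_kd(points, leaf_size=1):
    """kd-tree heuristic: median split along the coordinate of largest spread."""
    node = BSPNode(points)
    if len(points) > leaf_size:
        dim = int(np.argmax(points.var(axis=0)))
        w = np.zeros(points.shape[1])
        w[dim] = 1.0
        b = -float(np.median(points[:, dim]))
        mask = points @ w + b <= 0
        if mask.any() and (~mask).any():      # skip degenerate splits
            node.w, node.b = w, b
            node.left = build_kd(points[mask], leaf_size)
            node.right = build_kd(points[~mask], leaf_size)
    return node

def defeatist_search(root, q, depth):
    """Algorithm 1: greedy depth-first descent with no backtracking."""
    node, l = root, 0
    while l < depth and node.left is not None:
        node = node.left if q @ node.w + node.b <= 0 else node.right
        l += 1
    cand = node.points                        # candidate set, roughly n / 2^l points
    return cand[np.argmin(((cand - q) ** 2).sum(axis=1))]
```

With depth 0 the final scan covers all of S and returns the exact nearest neighbor; larger depths trade accuracy for the O(l + n/2^l) runtime discussed below.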
(a) kd-tree
(b) RP-tree
(c) MM-tree
Figure 1: Binary space-partitioning trees.
We establish search performance guarantees for BSP-trees by linking their nearest-neighbor performance to their vector quantization performance and utilizing the recent guarantees on the BSP-tree
vector quantization. Our results provide theoretical evidence, for the first time, that better quantization performance implies better search performance¹. These results also motivate the use of large
margin BSP-trees, trees that hierarchically partition the data with a large (geometric) margin, for
better nearest-neighbor search performance. After discussing some existing literature on nearestneighbor search and vector quantization in Section 2, we discuss our following contributions:
• We present performance guarantees for Algorithm 1 in Section 3, linking search performance
to vector quantization performance. Specifically, we show that for any balanced BSP-tree and a
depth l, under some conditions, the worst-case search error incurred by the neighbor candidate
returned by Algorithm 1 is proportional to a factor which is

O( (2^{l/2} · exp(−l/2β)) / ((n/2^l)^{1/O(d)} − 2) ),
where β corresponds to the quantization performance of the tree (smaller β implies smaller
quantization error) and d is closely related to the doubling dimension of the dataset (as opposed
to the ambient dimension D of the dataset). This implies that better quantization produces better
worst-case search results. Moreover, this result implies that smaller l produces improved worst-case performance (smaller l does imply more computation, hence it is intuitive to expect less
error at the cost of computation). Finally, there is also the expected dependence on the intrinsic
dimensionality d: increasing d implies deteriorating worst-case performance. The theoretical
results are empirically verified in this section as well.
• In Section 3, we also show that the worst-case search error for Algorithm 1 with a BSP-tree T is
proportional to (1/γ), where γ is the smallest margin size of all the partitions in T.
• We present the quantization performance guarantee of a large margin BSP tree in Section 4.
These results indicate that for a given dataset, the best BSP-tree for search is the one with the best
combination of low quantization error and large partition margins. We conclude with this insight
and related unanswered questions in Section 5.
2 Search and vector quantization
Binary space-partitioning trees (or BSP-trees) are hierarchical data structures providing a multiresolution view of the dataset indexed. There are several space-partitioning heuristics for BSP-tree construction. A tree is constructed by recursively applying a heuristic partition. The most
popular kd-tree uses axis-aligned partitions (Figure 1(a)), often employing a median split along the
coordinate axis of the data in the tree node with the largest spread. The principal-axis tree (PA-tree)
partitions the space at each node at the median along the principal eigenvector of the covariance
matrix of the data in that node [3, 4]. Another heuristic partitions the space based on a 2-means
clustering of the data in the node to form the two-means tree (2M-tree) [5, 6]. The random-projection
tree (RP-tree) partitions the space by projecting the data along a random standard normal direction
and choosing an appropriate splitting threshold [7] (Figure 1(b)). The max-margin tree (MM-tree) is
built by recursively employing large margin partitions of the data [8] (Figure 1(c)). The unsupervised
large margin splits are usually performed using max-margin clustering techniques [9].
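For concreteness, here is an illustrative Python sketch (ours, not from the paper) of three of these splitting rules; the actual RP-tree rule of [7] additionally perturbs the threshold away from the exact median, which is omitted in this simplified version.

```python
import numpy as np

def kd_split(X):
    """kd-tree heuristic: median split along the coordinate axis of largest spread."""
    d = int(np.argmax(X.var(axis=0)))
    w = np.zeros(X.shape[1])
    w[d] = 1.0
    return w, -float(np.median(X @ w))

def pa_split(X):
    """PA-tree heuristic: median split along the principal covariance eigenvector."""
    eigvals, eigvecs = np.linalg.eigh(np.cov(X, rowvar=False))
    w = eigvecs[:, -1]                  # eigenvector of the largest eigenvalue
    return w, -float(np.median(X @ w))

def rp_split(X, rng):
    """RP-tree heuristic (simplified): random direction, median threshold."""
    w = rng.standard_normal(X.shape[1])
    w /= np.linalg.norm(w)
    return w, -float(np.median(X @ w))
```

Each rule returns (w, b) so that the sign of ⟨w, x⟩ + b assigns x to a child; with a median threshold the resulting split is (nearly) exactly balanced.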
Search. Nearest-neighbor search with a BSP-tree usually involves a depth-first branch-and-bound
algorithm which guarantees the search approximation (exact search is a special case of approximate
search with zero approximation) by a depth-first traversal of the tree followed by a backtrack up the
tree as required. This makes the tree traversal unpredictable leading to trivial worst-case runtime
¹ This intuitive connection is widely believed but never rigorously established to the best of our knowledge.
guarantees. On the other hand, locality-sensitive hashing [10] based methods approach search in a
different way. After indexing the dataset into hash tables, a query is answered by selecting candidate
points from these hash tables. The candidate set size implies the worst-case search time bound. The
hash table construction guarantees the set size and search approximation. Algorithm 1 uses a BSP-tree to select a candidate set for a query with defeatist tree search. For a balanced tree on n points,
the candidate set size at depth l is n/2^l and the search runtime is O(l + n/2^l), with l ≤ log₂ n. For
any choice of the depth l, we present the first approximation guarantee for this search process.
Defeatist BSP-tree search has been explored with the spill tree [11], a binary tree with overlapping
sibling nodes unlike the disjoint nodes in the usual BSP-tree. The search involves selecting the candidates in (all) the leaf node(s) which contain the query. The level of overlap guarantees the search
approximation, but this search method lacks any rigorous runtime guarantee; it is hard to bound the
number of leaf nodes that might contain any given query. Dasgupta & Sinha (2013) [12] show that
the probability of finding the exact nearest neighbor with defeatist search on certain randomized
partition trees (randomized spill trees and RP-trees being among them) is directly proportional to
the relative contrast of the search task [13], a recently proposed quantity which characterizes the
difficulty of a search problem (lower relative contrast makes exact search harder).
Vector Quantization. Recent work by Verma et al., 2009 [2] has established theoretical guarantees
for some of these BSP-trees for the task of vector quantization. Given a set of points S ⊂ R^D of n
points, the task of vector quantization is to generate a set of points M ⊂ R^D of size k ≪ n with
low average quantization error. The optimal quantizer for any region A is given by the mean μ(A)
of the data points lying in that region. The quantization error of the region A is then given by
V_S(A) = (1/|A ∩ S|) Σ_{x ∈ A∩S} ‖x − μ(A)‖₂²,    (1)
and the average quantization error of a disjoint partition of region A into Al and Ar is given by:
V_S({A_l, A_r}) = (|A_l ∩ S| V_S(A_l) + |A_r ∩ S| V_S(A_r)) / |A ∩ S|.    (2)
Tree-based structured vector quantization is used for efficient vector quantization: a BSP-tree of
depth log₂ k partitions the space containing S into k disjoint regions to produce a k-quantization of
S. The theoretical results for tree-based vector quantization guarantee the improvement in average
quantization error obtained by partitioning any single region (with a single quantizer) into two disjoints regions (with two quantizers) in the following form (introduced by Freund et al. (2007) [14]):
Definition 2.1. For a set S ⊂ R^D, a region A partitioned into two disjoint regions {A_l, A_r}, and a
data-dependent quantity β > 1, the quantization error improvement is characterized by:
V_S({A_l, A_r}) < (1 − 1/β) V_S(A).    (3)
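Equations (1)-(3) translate directly into code. The following illustrative Python snippet (ours, not the paper's) computes the quantization error of a region, the weighted error of a partition, and the implied improvement rate β of Definition 2.1.

```python
import numpy as np

def quant_error(A):
    """V_S(A) of Eq. (1): mean squared distance to the region's mean."""
    return float(((A - A.mean(axis=0)) ** 2).sum(axis=1).mean())

def avg_quant_error(Al, Ar):
    """V_S({Al, Ar}) of Eq. (2): size-weighted average of the children's errors."""
    n = len(Al) + len(Ar)
    return (len(Al) * quant_error(Al) + len(Ar) * quant_error(Ar)) / n

def improvement_rate(A, Al, Ar):
    """Smallest beta with V_S({Al, Ar}) <= (1 - 1/beta) V_S(A), cf. Eq. (3)."""
    return 1.0 / (1.0 - avg_quant_error(Al, Ar) / quant_error(A))
```

Because the per-region mean is the optimal quantizer, any split that separates the region means strictly reduces the averaged error, so β > 1.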
The quantization performance depends inversely on the data-dependent quantity β; lower β implies better quantization. We present the definition of β for the different BSP-trees in Table 1. For the PA-tree, β depends on the ratio of the sum of the eigenvalues of the covariance matrix of data (A ∩ S) to the principal eigenvalue. The improvement rate β for the RP-tree depends on the covariance dimension of the data in the node A (β = O(dc)) [7], which roughly corresponds to the lowest dimensionality of an affine plane that captures most of the data covariance. The 2M-tree does not have an explicit β but it has the optimal theoretical improvement rate for a single partition because the 2-means clustering objective is equal to |A_l|V(A_l) + |A_r|V(A_r) and minimizing this objective maximizes β. The 2-means problem is NP-hard and an approximate solution is used in practice.

Table 1: β for various trees. λ₁, ..., λ_D are the sorted eigenvalues of the covariance matrix of A ∩ S in descending order, and dc < D is the covariance dimension of A ∩ S. The results for the PA-tree and the 2M-tree are due to Verma et al. (2009) [2]. The PA-tree result can be improved to O(ρ) from O(ρ²) with an additional assumption [2]. The RP-tree result is in Freund et al. (2007) [14], which also has the precise definition of dc. We establish the result for the MM-tree in Section 4; γ is the margin size of the large margin partition. No such guarantee for kd-trees is known to us.

Tree      Definition of β
PA-tree   O(ρ²) : ρ = Σ_{i=1}^D λᵢ / λ₁
RP-tree   O(dc)
kd-tree   ×
2M-tree   optimal (smallest possible)
MM-tree   O(θ) : θ = (Σ_{i=1}^D λᵢ) / γ²

These theoretical results are valid under the condition that there are no outliers in A ∩ S. This is characterized as max_{x,y ∈ A∩S} ‖x − y‖² ≤ η V_S(A) for a fixed η > 0. This notion of the absence of outliers was
first introduced for the theoretical analysis of the RP-trees [7]. Verma et al. (2009) [2] describe
outliers as "points that are much farther away from the mean than the typical distance-from-mean".
In this situation, an alternate type of partition is used to remove these outliers that are farther away
from the mean than expected. For η ≥ 8, this alternate partitioning is guaranteed to reduce the data
diameter (max_{x,y ∈ A∩S} ‖x − y‖) of the resulting nodes by a constant fraction [7, Lemma 12], and
can be used until a region contains no outliers, at which point the usual hyperplane partition can be
used with their respective theoretical quantization guarantees. The implicit assumption is that the
alternate partitioning scheme is employed rarely.
These results for BSP-tree quantization performance indicate that different heuristics are adaptive
to different properties of the data. However, no existing theoretical result relates this performance
of BSP-trees to their search performance. Making the precise connection between the quantization
performance and the search performance of these BSP-trees is a contribution of this paper.
3 Approximation guarantees for BSP-tree search
In this section, we formally present the data and tree dependent performance guarantees on the
search with BSP-trees using Algorithm 1. The quality of nearest-neighbor search can be quantified
in two ways: (i) distance error and (ii) rank of the candidate neighbor. We present guarantees for
both notions of search error². For a query q, a set of points S and a neighbor candidate p ∈ S, the
distance error is ε(q) = ‖q − p‖ / min_{r∈S} ‖q − r‖ − 1, and the rank is τ(q) = |{r ∈ S : ‖q − r‖ < ‖q − p‖}| + 1.
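Both error measures are straightforward to compute; a small illustrative Python helper (ours, not the paper's):

```python
import numpy as np

def search_errors(q, S, p):
    """Distance error and rank of a candidate neighbor p for query q over set S."""
    dists = np.linalg.norm(S - q, axis=1)        # ||q - r|| for every r in S
    d_star = dists.min()                         # true nearest-neighbor distance
    eps = np.linalg.norm(q - p) / d_star - 1.0   # relative distance error
    tau = int((dists < np.linalg.norm(q - p)).sum()) + 1   # rank of candidate p
    return eps, tau
```

The exact nearest neighbor of q has ε(q) = 0 and τ(q) = 1, matching the definitions above.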
Algorithm 1 requires the query traversal depth l as an input. The search runtime is O(l + n/2^l).
The depth can be chosen based on the desired runtime. Equivalently, the depth can be chosen based
on the desired number of candidates m; for a balanced binary tree on a dataset S of n points with leaf
nodes containing a single point, the appropriate depth is l = log₂ n − ⌈log₂ m⌉. We will be building
on the existing results on vector quantization error [2] to present the worst case error guarantee for
Algorithm 1. We need the following definitions to precisely state our results:
Definition 3.1. An α-balanced split partitioning a region A into disjoint regions {A₁, A₂} implies
||A₁ ∩ S| − |A₂ ∩ S|| ≤ α|A ∩ S|.
For a balanced tree corresponding to recursive median splits, such as the PA-tree and the kd-tree,
α ≈ 0. Non-zero values of α ≪ 1, corresponding to approximately balanced trees, allow us to
potentially adapt better to some structure in the data at the cost of slightly losing the tree balance.
For the MM-tree (discussed in detail in Section 4), α-balanced splits are enforced for any specified
value of α. Approximately balanced trees have a depth bound of O(log n) [8, Theorem 3.1]. For
a tree with α-balanced splits, the worst case runtime of Algorithm 1 is O(l + ((1 + α)/2)^l n). For the
2M-tree, α-balanced splits are not enforced. Hence the actual value of α could be high for a 2M-tree.
Definition 3.2. Let B_ℓ₂(p, Δ) = {r ∈ S : ‖p − r‖ < Δ} denote the points in S contained in a ball
of radius Δ around some p ∈ S with respect to the ℓ₂ metric. The expansion constant of (S, ℓ₂) is
defined as the smallest c ≥ 2 such that |B_ℓ₂(p, 2Δ)| ≤ c |B_ℓ₂(p, Δ)| for all p ∈ S and all Δ > 0.
Bounded expansion constants correspond to growth-restricted metrics [15]. The expansion constant
characterizes the data distribution, and c ≤ 2^{O(d)} where d is the doubling dimension of the set S
with respect to the ℓ₂ metric. The relationship is exact for points on a D-dimensional grid (i.e.,
c = Θ(2^D)). Equipped with these definitions, we have the following guarantee for Algorithm 1:
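On small sets the expansion constant can be estimated by brute force, as in this hypothetical snippet; on a regular one-dimensional grid the estimate is a small constant, consistent with c = Θ(2^D).

```python
import numpy as np

def expansion_constant(S, radii):
    """Smallest c with |B(p, 2r)| <= c |B(p, r)| over the given centers and radii."""
    c = 2.0                                   # the definition requires c >= 2
    for p in S:
        d = np.linalg.norm(S - p, axis=1)
        for r in radii:
            inner = (d < r).sum()             # balls use strict inequality
            outer = (d < 2 * r).sum()
            if inner > 0:
                c = max(c, outer / inner)
    return c
```

Sweeping more radii tightens the estimate; here a few dyadic radii suffice to illustrate the growth-restricted behavior of a grid.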
Theorem 3.1. Consider a dataset S ⊂ R^D of n points with ψ = (1/(2n²)) Σ_{x,y∈S} ‖x − y‖², the BSP-tree T built on S and a query q ∈ R^D with the following conditions:
(C1) Let (A ∩ (S ∪ {q}), ℓ₂) have an expansion constant at most c̃ for any convex set A ⊂ R^D.
(C2) Let T be complete till a depth L < log₂(n/c̃) / (1 − log₂(1 − α)) with α-balanced splits.
(C3) Let β* correspond to the worst quantization error improvement rate over all splits in T.
(C4) For any node A in the tree T, let max_{x,y∈A∩S} ‖x − y‖² ≤ η V_S(A) for a fixed η ≥ 8.
For γ = 1/(1 − α), the upper bound du on the distance of q to the neighbor candidate p returned
by Algorithm 1 with depth l ≤ L is given by
‖q − p‖ ≤ du = ( 2√(ψη) · (2γ)^{l/2} · exp(−l/2β*) ) / ( (n/(2γ)^l)^{1/log₂ c̃} − 2 ).    (4)
² The distance error corresponds to the relative error in terms of the actual distance values. The rank is one more than the number of points in S which are better neighbor candidates than p. The nearest neighbor of q has rank 1 and distance error 0. The appropriate notion of error depends on the search application.
Now η is fixed, and ψ is fixed for a dataset S. Then, for a fixed α, this result implies that between
two types of BSP-trees on the same set and the same query, Algorithm 1 has a better worst-case guarantee on the candidate-neighbor distance for the tree with better quantization performance (smaller
β*). Moreover, for a particular tree with β* ≥ log₂ e, du is non-decreasing in l. This is expected
because as we traverse down the tree, we can never reduce the candidate neighbor distance. At the
root level (l = 0), the candidate neighbor is the nearest neighbor. As we descend down the tree,
the candidate neighbor distance will worsen if a tree split separates the query from its closer neighbors. This behavior is implied in Equation (4). For a chosen depth l in Algorithm 1, the candidate
neighbor distance is inversely proportional to (n/(2γ)^l)^{1/log₂ c̃}, implying deteriorating bounds du
with increasing c̃. Since log₂ c̃ ∼ O(d), larger intrinsic dimensionality implies worse guarantees as
expected from the curse of dimensionality. To prove Theorem 3.1, we use the following result:
Lemma 3.1. Under the conditions of Theorem 3.1, for any node A at a depth l in the BSP-tree T
on S, V_S(A) ≤ ψ (2/(1 − α))^l exp(−l/β*).
This result is obtained by recursively applying the quantization error improvement in Definition 2.1
over l levels of the tree (the proof is in Appendix A).
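A sketch of that recursion (our outline of the argument, not the Appendix A proof itself): an α-balanced split leaves each child with at least a (1 − α)/2 fraction of its parent's points, so for either child A′ of a node A, Eq. (2) and Definition 2.1 give

```latex
V_S(A') \le \frac{2}{1-\alpha}\, V_S(\{A_l, A_r\})
        \le \frac{2}{1-\alpha}\left(1 - \frac{1}{\beta^*}\right) V_S(A),
% iterating over the l levels from the root, where V_S(S) = \psi, and using
% (1 - 1/\beta^*)^l \le \exp(-l/\beta^*):
V_S(A) \le \psi \left(\frac{2}{1-\alpha}\right)^{l} \exp\left(-\frac{l}{\beta^*}\right).
```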
Proof of Theorem 3.1. Consider the node A at depth l in the tree containing q, and let m = |A ∩ S|.
Let D = max_{x,y∈A∩S} ‖x − y‖, let d = min_{x∈A∩S} ‖q − x‖, and let B_ℓ₂(q, Δ) = {x ∈ A ∩ (S ∪ {q}) : ‖q − x‖ < Δ}. Then, by Definition 3.2 and condition C1,
|B_ℓ₂(q, D + d)| ≤ c̃^{⌈log₂((D+d)/d)⌉} |B_ℓ₂(q, d)| = c̃^{⌈log₂((D+d)/d)⌉} ≤ c̃^{log₂((D+2d)/d)},
where the equality follows from the fact that B_ℓ₂(q, d) = {q}. Now |B_ℓ₂(q, D + d)| ≥ m. Using
this above gives us m^{1/log₂ c̃} ≤ (D/d) + 2. By condition C2, m^{1/log₂ c̃} > 2. Hence we have
d ≤ D/(m^{1/log₂ c̃} − 2). By construction and condition C4, D² ≤ η V_S(A). Now m ≥ n/(2γ)^l.
Plugging this above and utilizing Lemma 3.1 gives us the statement of Theorem 3.1.
Nearest-neighbor search error guarantees. Equipped with the bound on the candidate-neighbor
distance, we bound the worst-case nearest-neighbor search errors as follows:
Corollary 3.1. Under the conditions of Theorem 3.1, for any query q at a desired depth l ? L
in Algorithm 1, the distance error (q) is bounded as (q) ? (du /d?q ) ? 1, and the rank ? (q) is
u
?
kq ? rk.
bounded as ? (q) ? c?dlog2 (d /dq )e , where d? = min
r?S
q
Proof. The distance error bound follows from the definition of distance error. Let R = {r ?
S : kq ? rk < du }. By definition, ? (q) ? |R| + 1. Let B`2 (q, ?) = {x ? (S ? {q}) : kq ? xk <
?}. Since B`2 (q, du ) contains q and R, and q ?
/ S, |B`2 (q, du )| = |R| + 1 ? ? (q). From Definition
log2 (du /d?
u
d
q )e |B
3.2 and Condition C1, |B (q, d )| ? c?
(q, d? )|. Using the fact that |B (q, d? )| =
`2
`2
q
`2
q
|{q}| = 1 gives us the upper bound on ? (q).
The upper bounds on both forms of search error are directly proportional to du . Hence, the BSPtree with better quantization performance has better search performance guarantees, and increasing
traversal depth l implies less computation but worse performance guarantees. Any dependence of
this approximation guarantee on the ambient data dimensionality is subsumed by the dependence
on ? ? and c?. While our result bounds the worst-case performance of Algorithm
1, an average case
performance guarantee on the distance error is given by Eq (q) ? du Eq 1/d?q ?1, and on the rank
?
u
is given by Eq ? (q) ? c?dlog2 d e Eq c?(log2 dq ) , since the expectation is over the queries q and du
does not depend on q. For the purposes of relative comparison among BSP-trees, the bounds on the
expected error depend solely on du since the term within the expectation over q is tree independent.
Dependence of the nearest-neighbor search error on the partition margins. The search error
bounds in Corollary 3.1 depend on the true nearest-neighbor distance d?q of any query q of which we
have no prior knowledge. However, if we partition the data with a large margin split, then we can
say that either the candidate neighbor is the true nearest-neighbor of q or that d?q is greater than the
size of the margin. We characterize the influence of the margin size with the following result:
Corollary 3.2. Consider the conditions of Theorem 3.1 and a query q at a depth l ? L in Algorithm
1. Further assume that ? is the smallest margin size on both sides of any partition in the tree T .uThen
the distance error is bounded as (q) ? du /? ? 1, and the rank is bounded as ? (q) ? c?dlog2 (d /?)e .
This result indicates that if the split margins in a BSP-tree can be increased without adversely affecting its quantization performance, the BSP-tree will have improved nearest-neighbor error guarantees
5
for the Algorithm 1. This motivated us to consider the max-margin tree [8], a BSP-tree that explicitly
maximizes the margin of the split for every split in the tree.
Explanation of the conditions in Theorem 3.1. Condition C1 implies that for any convex set
A ? RD , ((A ? (S ? {q})), `2 ) has an expansion constant at most c?. A bounded c? implies that no
subset of (S ? {q}), contained in a convex set, has a very high expansion constant. This condition
implies that ((S ? {q}), `2 ) also has an expansion constant at most c? (since (S ? {q}) is contained in
its convex hull). However, if (S ? {q}, `2 ) has an expansion constant c, this does not imply that the
data lying within any convex set has an expansion constant at most c. Hence a bounded expansion
constant assumption for (A?(S ?{q}), `2 ) for every convex set A ? RD is stronger than a bounded
expansion constant assumption for (S ? {q}, `2 )3 . Condition C2 ensures that the tree is complete
so that for every query q and a depth l ? L, there exists a large enough tree node which contains q.
Condition C3 gives us the worst quantization error improvement rate over all the splits in the tree.
2
Condition C4 implies that the squared data diameter of any node A (maxx,y?A?S kx ? yk ) is
within a constant factor of its quantization error VS (A). This refers to the assumption that the node
A contains no outliers as described in Section 3 and only hyperplane partitions are used and their
respective quantization improvement guarantees presented in Section 2 (Table 1) hold. By placing
condition C4, we ignore the alternate partitioning scheme used to remove outliers for simplicity
of analysis. If we allow a small fraction of the partitions in the tree to be this alternate split, a
similar result can be obtained since the alternate split is the same for all BSP-tree. For two different
kinds of hyperplane splits, if alternate split is invoked the same number of times in the tree, the
difference in their worst-case guarantees for both the trees would again be governed by their worstcase quantization performance (? ? ). However, for any fixed ?, a harder question is whether one
type of hyperplane partition violates the inlier condition more often than another type of partition,
resulting in more alternate partitions. And we do not yet have a theoretical answer for this4 .
Empirical validation. We examine our theoretical results with 4 datasets ? O PTDIGITS (D = 64,
n = 3823, 1797 queries), T INY I MAGES (D = 384, n = 5000, 1000 queries), MNIST (D =
784, n = 6000, 1000 queries), I MAGES (D = 4096, n = 500, 150 queries). We consider the
following BSP-trees: kd-tree, random-projection (RP) tree, principal axis (PA) tree, two-means (2M)
tree and max-margin (MM) tree. We only use hyperplane partitions for the tree construction. This is
because, firstly, the check for the presence of outliers (?2S (A) > ?VS (A)) can be computationally
expensive for large n, and, secondly, the alternate partition is mostly for the purposes of obtaining
theoretical guarantees. The implementation details for the different tree constructions are presented
in Appendix C. The performance of these BSP-trees are presented in Figure 2. Trees with missing
data points for higher depth levels (for example, kd-tree in Figure 2(a) and 2M-tree in Figures 2 (b)
& (c)) imply that we were unable to grow complete BSP-trees beyond that depth.
The quantization performance of the 2M-tree, PA-tree and MM-tree are significantly better than the
performance of the kd-tree and RP-tree and, as suggested by Corollary 3.1, this is also reflected in
their search performance. The MM-tree has comparable quantization performance to the 2M-tree
and PA-tree. However, in the case of search, the MM-tree outperforms PA-tree in all datasets. This
can be attributed to the large margin partitions in the MM-tree. The comparison to 2M-tree is not
as apparent. The MM-tree and PA-tree have ?-balanced splits for small ? enforced algorithmically,
resulting in bounded depth and bounded computation of O(l + n(1 + ?)l /2l ) for any given depth
l. No such balance constraint is enforced in the 2-means algorithm, and hence, the 2M-tree can be
heavily unbalanced. The absence of complete BSP 2M-tree beyond depth 4 and 6 in Figures 2 (b)
& (c) respectively is evidence of the lack of balance in the 2M-tree. This implies possibly more
computation and hence lower errors. Under these conditions, the MM-tree with an explicit balance
constraint performs comparably to the 2M-tree (slightly outperforming in 3 of the 4 cases) while
still maintaining a balanced tree (and hence returning smaller candidate sets on average).
3
A subset of a growth-restricted metric space (S, `2 ) may not be growth-restricted. However, in our case,
we are not considering all subsets; we only consider subsets of the form (A ? S) where A ? RD is a convex
set. So our condition does not imply that all subsets of (S, `2 ) are growth-restricted.
4
We empirically explore the effect of the tree type on the violation of the inlier condition (C4) in Appendix
B. The results imply that for any fixed value of ?, almost the same number of alternate splits would be invoked
for the construction of different types of trees on the same dataset. Moreover, with ? ? 8, for only one of the
datasets would a significant fraction of the partitions in the tree (of any type) need to be the alternate partition.
6
(a) O PTDIGITS
(b) T INY I MAGES
(c) MNIST
(d) I MAGES
Figure 2: Performance of BSP-trees with increasing traversal depth. The top row corresponds to quantization performance of existing trees and the bottom row presents the nearest-neighbor error (in terms of mean
rank ? of the candidate neighbors (CN)) of Algorithm 1 with these trees. The nearest-neighbor search error
graphs are also annotated with the mean distance-error of the CN (please view in color).
4
Large margin BSP-tree
We established that the search error depends on the quantization performance and the partition margins of the tree. The MM-tree explicitly maximizes the margin of every partition and empirical
results indicate that it has comparable performance to the 2M-tree and PA-tree in terms of the quantization performance. In this section, we establish a theoretical guarantee for the MM-tree quantization performance. The large margin split in the MM-tree is obtained by performing max-margin
clustering (MMC) with 2 clusters. The task of MMC is to find the optimal hyperplane (w? , b? ) from
the following optimization problem5 given a set of points S = {x1 , x2 , . . . , xm } ? RD :
m
min
w,b,?i
s.t.
X
1
?i
kwk22 + C
2
i=1
(5)
|hw, xi i + b| ? 1 ? ?i , ?i ? 0 ?i = 1, . . . , m
m
X
sgn(hw, xi i + b) ? ?m.
??m ?
(6)
(7)
i=1
MMC finds a soft max-margin split in the data to obtain two clusters separated by a large (soft)
margin. The balance constraint (Equation (7)) avoids trivial solutions and enforces an ?-balanced
split. The margin constraints (Equation (6)) enforce a robust separation of the data. Given a solution
to the MMC, we establish the following quantization error improvement rate for the MM-tree:
Theorem 4.1. Given a set of points S ? RD and a region A containing m points, consider an
?-balanced max-margin split (w, b) of the region A into {Al , Ar } with at most ?m support vectors
and a split margin of size ? = 1/ kwk. Then the quantization
error
improvement is given by:
?
? 2 (1 ? ?)2
?
VS ({Al , Ar }) ? 1 ?
PD
i=1
1??
1+?
?i
?
? VS (A),
(8)
where ?1 , . . . , ?D are the eigenvalues of the covariance matrix of A ? S.
The result indicates that larger margin sizes (large ? values) and a smaller number of support vectors
(small ?) implies better quantization performance. Larger
? ? implies smaller improvement, but ? is
generally restricted algorithmically in MMC. If ? = O( ?1 ) then this rate matches the best possible
quantization performance of the PA-tree (Table 1). We do assume that we have a feasible solution to
the MMC problem to prove this result. We use the following result to prove Theorem 4.1:
Proposition 4.1. [7, Lemma 15] Give a set S, for any partition {A1 , A2 } of a set A,
VS (A) ? VS ({A1 , A2 }) =
|A1 ? S||A2 ? S|
k?(A1 ) ? ?(A2 )k2 ,
|A ? S|2
(9)
where ?(A) is the centroid of the points in the region A.
5
This is an equivalent formulation [16] to the original form of max-margin clustering proposed by Xu et al.
(2005) [9]. The original formulation also contains the labels yi s and optimizes over it. We consider this form
of the problem since it makes our analysis easier to follow.
7
This result [7] implies that the improvement in the quantization error depends on the distance between the centroids of the two regions in the partition.
Proof of Theorem 4.1. For a feasible solution (w, b, ?i |i=1,...,m ) to the MMC problem,
m
X
|hw, xi i + b| ? m ?
m
X
?i .
i=1
i=1
P
Let x?i = hw,
x?i ? 0}| and ?
?p = ( i : x?i >0 x?i )/mp
Pxi i+b and mp = |{i : x?i > 0}| and mn = |{i :P
and ?
?n = ( i : x?i ?0 x?i )/mn . Then mp ?
?p ? mn ?
? n ? m ? i ?i .
Without loss of generality, we assume that mp ? mn . Then the balance constraint (Equation
P (7))
2
tells us that mp ? m(1 + ?)/2 and mn ? m(1 ? ?)/2. Then ?
?p ? ?
?n + ?(?
?p + ?
?n ) ? 2 ? m
i ?i .
P
2
Since ?
?p > 0 and ?n ? 0, |?
?p + ?
?n | ? (?
?p ? ?
?n ). Hence (1 + ?)(?
?p ? ?
?n ) ? 2 ? m i ?i . For
an unsupervised split, the data is always separable since there is no misclassification. This implies
that ?i? ? 1?i. Hence,
?
?p ? ?
?n ?
2?
2
|{i : ?i > 0}| /(1 + ?) ? 2
m
1??
1+?
,
(10)
since the term |{i : ?i > 0}| corresponds to the number of support vectors in the solution.
Cauchy-Schwartz implies that k?(Al ) ? ?(Ar )k ? |hw, ?(Al ) ? ?(Ar )i|/ kwk = (?
?p ? ?
?n )?,
since ?
?n = hw, ?(Al )i + b and ?
?p = hw, ?(Ar )i + b. From Equation (10), we can say
2
2
2
that k?(Al ) ? ?(Ar )k ? 4? 2 (1 ? ?) / (1 + ?) . Also, for ?-balanced splits, |Al ||Ar | ?
(1 ? ? 2 )m2 /4. Combining these into Equation (9) from Proposition 4.1, we have
VS (A) ? VS ({Al , Ar }) ? (1 ? ? 2 )? 2
1??
1+?
2
= ? 2 (1 ? ?)2
1??
1+?
.
(11)
Let Cov(A ? S) be the covariance matrix of the data contained in region A and ?1 , . . . , ?D be the
eigenvalues of Cov(A ? S). Then, we have:
VS (A) =
D
X
X
1
kx ? ?(A)k2 = tr (Cov(A ? S)) =
?i .
|A ? S| x?A?S
i=1
Then dividing Equation (11) by VS (A) gives us the statement of the theorem.
5
Conclusions and future directions
Our results theoretically verify that BSP-trees with better vector quantization performance and large
partition margins do have better search performance guarantees as one would expect. This means
that the best BSP-tree for search on a given dataset is the one with the best combination of good
quantization performance (low ? ? in Corollary 3.1) and large partition margins (large ? in Corollary
3.2). The MM-tree and the 2M-tree appear to have the best empirical performance in terms of the
search error. This is because the 2M-tree explicitly minimizes ? ? while the MM-tree explicitly
maximizes ? (which also implies smaller ? ? by Theorem 4.1). Unlike the 2M-tree, the MM-tree
explicitly maintains an approximately balanced tree for better worst-case search time guarantees.
However, the general dimensional large margin partitions in the MM-tree construction can be quite
expensive. But the idea of large margin partitions can be used to enhance any simpler space partition
heuristic ? for any chosen direction (such as along a coordinate axis or along the principal eigenvector of the data covariance matrix), a one dimensional large margin split of the projections of the
points along the chosen direction can be obtained very efficiently for improved search performance.
This analysis of search could be useful beyond BSP-trees. Various heuristics have been developed
to improve locality-sensitive hashing (LSH) [10]. The plain-vanilla LSH uses random linear projections and random thresholds for the hash-table construction. The data can instead be projected along
the top few eigenvectors of the data covariance matrix. This was (empirically) improved upon by
learning an orthogonal rotation of the projected data to minimize the quantization error of each bin in
the hash-table [17]. A nonlinear hash function can be learned using a restricted Boltzmann machine
[18]. If the similarity graph of the data is based on the Euclidean distance, spectral hashing [19]
uses a subset of the eigenvectors of the similarity graph Laplacian. Semi-supervised hashing [20]
incorporates given pairwise semantic similarity and dissimilarity constraints. The structural SVM
framework has also been used to learn hash functions [21]. Similar to the choice of an appropriate
BSP-tree for search, the best hashing scheme for any given dataset can be chosen by considering the
quantization performance of the hash functions and the margins between the bins in the hash tables.
We plan to explore this intuition theoretically and empirically for LSH based search schemes.
8
References
[1] J. H. Friedman, J. L. Bentley, and R. A. Finkel. An Algorithm for Finding Best Matches in
Logarithmic Expected Time. ACM Transactions in Mathematical Software, 1977.
[2] N. Verma, S. Kpotufe, and S. Dasgupta. Which Spatial Partition Trees are Adaptive to Intrinsic
Dimension? In Proceedings of the Conference on Uncertainty in Artificial Intelligence, 2009.
[3] R.F. Sproull. Refinements to Nearest-Neighbor Searching in k-dimensional Trees. Algorithmica, 1991.
[4] J. McNames. A Fast Nearest-Neighbor Algorithm based on a Principal Axis Search Tree. IEEE
Transactions on Pattern Analysis and Machine Intelligence, 2001.
[5] K. Fukunaga and P. M. Nagendra. A Branch-and-Bound Algorithm for Computing k-NearestNeighbors. IEEE Transactions on Computing, 1975.
[6] D. Nister and H. Stewenius. Scalable Recognition with a Vocabulary Tree. In IEEE Conference
on Computer Vision and Pattern Recognition, 2006.
[7] S. Dasgupta and Y. Freund. Random Projection trees and Low Dimensional Manifolds. In
Proceedings of ACM Symposium on Theory of Computing, 2008.
[8] P. Ram, D. Lee, and A. G. Gray. Nearest-neighbor Search on a Time Budget via Max-Margin
Trees. In SIAM International Conference on Data Mining, 2012.
[9] L. Xu, J. Neufeld, B. Larson, and D. Schuurmans. Maximum Margin Clustering. Advances in
Neural Information Processing Systems, 2005.
[10] P. Indyk and R. Motwani. Approximate Nearest Neighbors: Towards Removing the Curse of
Dimensionality. In Proceedings of ACM Symposium on Theory of Computing, 1998.
[11] T. Liu, A. W. Moore, A. G. Gray, and K. Yang. An Investigation of Practical Approximate
Nearest Neighbor Algorithms. Advances in Neural Information Proceedings Systems, 2005.
[12] S. Dasgupta and K. Sinha. Randomized Partition Trees for Exact Nearest Neighbor Search. In
Proceedings of the Conference on Learning Theory, 2013.
[13] J. He, S. Kumar and S. F. Chang. On the Difficulty of Nearest Neighbor Search. In Proceedings
of the International Conference on Machine Learning, 2012.
[14] Y. Freund, S. Dasgupta, M. Kabra, and N. Verma. Learning the Structure of Manifolds using
Random Projections. Advances in Neural Information Processing Systems, 2007.
[15] D. R. Karger and M. Ruhl. Finding Nearest Neighbors in Growth-Restricted Metrics. In
Proceedings of ACM Symposium on Theory of Computing, 2002.
[16] B. Zhao, F. Wang, and C. Zhang. Efficient Maximum Margin Clustering via Cutting Plane
Algorithm. In SIAM International Conference on Data Mining, 2008.
[17] Y. Gong and S. Lazebnik. Iterative Quantization: A Procrustean Approach to Learning Binary
Codes. In IEEE Conference on Computer Vision and Pattern Recognition, 2011.
[18] R. Salakhutdinov and G. Hinton. Learning a Nonlinear Embedding by Preserving Class Neighbourhood Structure. In Artificial Intelligence and Statistics, 2007.
[19] Y. Weiss, A. Torralba, and R. Fergus. Spectral Hashing. Advances of Neural Information
Processing Systems, 2008.
[20] J. Wang, S. Kumar, and S. Chang. Semi-Supervised Hashing for Scalable Image Retrieval. In
IEEE Conference on Computer Vision and Pattern Recognition, 2010.
[21] M. Norouzi and D. J. Fleet. Minimal Loss Hashing for Compact Binary Codes. In Proceedings
of the International Conference on Machine Learning, 2011.
[22] S. Lloyd. Least Squares Quantization in PCM. IEEE Transactions on Information Theory,
28(2):129?137, 1982.
9
| 5121 |@word version:1 stronger:1 covariance:9 tr:1 harder:2 recursively:3 liu:1 contains:4 selecting:2 karger:1 mages:4 outperforms:1 existing:4 current:2 deteriorating:2 yet:1 partition:43 remove:2 hash:10 v:20 greedy:1 leaf:4 implying:1 intelligence:3 plane:2 xk:3 ruhl:1 farther:2 quantizer:2 quantized:1 node:21 traverse:1 firstly:1 simpler:1 zhang:1 mathematical:1 along:7 constructed:1 c2:3 symposium:3 prove:3 pairwise:1 theoretically:3 expected:6 behavior:1 roughly:1 examine:1 salakhutdinov:1 decreasing:1 actual:2 curse:3 unpredictable:1 equipped:2 increasing:5 considering:2 moreover:3 bounded:10 maximizes:4 lowest:1 defeatist:5 kind:1 minimizes:1 eigenvector:2 developed:1 finding:3 guarantee:36 every:5 multidimensional:1 growth:5 runtime:6 returning:1 k2:2 schwartz:1 partitioning:10 appear:1 influencing:1 solely:1 approximately:3 might:1 nearestneighbor:3 practical:1 enforces:1 practice:2 recursive:1 ance:1 empirical:3 maxx:5 significantly:1 projection:7 refers:1 ga:2 applying:2 influence:1 descending:1 equivalent:1 missing:1 convex:7 simplicity:1 splitting:1 m2:1 insight:1 utilizing:2 unanswered:1 searching:1 notion:3 coordinate:2 increment:1 embedding:1 construction:8 heavily:1 exact:5 losing:1 us:4 pa:13 expensive:2 recognition:4 bottom:1 wang:2 capture:1 worst:17 descend:1 region:19 ensures:1 yk:6 balanced:17 intuition:1 pd:3 rigorously:2 traversal:6 motivate:1 depend:3 htc:1 upon:1 various:2 separated:1 fast:1 describe:1 kp:1 query:22 artificial:2 tell:1 choosing:1 apparent:1 heuristic:7 widely:2 larger:3 quite:1 say:2 cov:3 statistic:1 indyk:1 eigenvalue:4 neufeld:1 aligned:1 combining:1 till:1 multiresolution:1 intuitive:3 cluster:2 motwani:1 produce:3 inlier:2 mmc:7 develop:1 this4:1 gong:1 nearest:28 eq:4 dividing:1 involves:2 implies:22 indicate:3 direction:4 radius:1 closely:1 annotated:1 hull:1 sgn:1 violates:1 bin:2 investigation:1 proposition:2 secondly:1 iny:2 mm:20 lying:2 around:1 hold:1 normal:1 exp:3 driving:1 torralba:1 smallest:4 a2:6 
purpose:2 favorable:1 label:1 sensitive:2 largest:1 kabra:1 always:1 finkel:1 bet:1 gatech:2 corollary:6 pxi:1 improvement:12 rank:8 indicates:2 check:1 tech:2 contrast:2 rigorous:1 centroid:2 dependent:5 arg:1 among:2 plan:1 spatial:1 special:1 equal:1 aware:1 never:2 represents:1 placing:1 unsupervised:2 future:1 np:1 few:1 algorithmica:1 n1:1 friedman:1 atlanta:2 subsumed:1 mining:2 violation:1 ambient:2 closer:1 respective:2 orthogonal:1 tree:204 indexed:1 euclidean:1 desired:5 theoretical:15 minimal:1 sinha:2 increased:1 soft:2 ar:16 minr:1 cost:2 subset:6 kq:10 successful:1 characterize:1 answer:2 international:4 randomized:3 siam:2 lee:1 enhance:1 squared:1 again:1 opposed:1 containing:4 possibly:1 worse:2 adversely:1 zhao:1 leading:1 lloyd:1 includes:1 inc:1 explicitly:5 mp:5 depends:6 stewenius:1 sproull:1 performed:1 try:1 view:2 root:1 kwk:2 characterizes:2 maintains:1 worsen:1 contribution:2 minimize:1 square:1 efficiently:1 correspond:2 identify:1 error2:1 norouzi:1 backtrack:1 comparably:1 cc:1 definition:13 proof:4 attributed:3 dataset:11 popular:1 knowledge:2 color:1 dimensionality:9 ubiquitous:1 hashing:8 higher:1 supervised:2 follow:1 reflected:1 improved:6 wei:1 formulation:2 generality:1 implicit:2 until:1 hand:1 nonlinear:2 overlapping:1 bsp:52 lack:2 quality:2 gray:3 bentley:1 building:1 effect:1 k22:1 contain:3 true:2 verify:1 hence:10 equality:1 moore:1 semantic:1 please:1 larson:1 procrustean:1 complete:4 demonstrate:1 performs:1 image:1 lazebnik:1 invoked:2 recently:1 rotation:1 empirically:5 linking:2 discussed:1 m1:1 he:1 significant:1 rd:11 vanilla:1 grid:1 lsh:3 similarity:3 recent:2 optimizes:1 certain:2 binary:8 success:1 discussing:1 outperforming:1 yi:1 preserving:1 additional:1 greater:1 employed:1 ii:1 branch:3 relates:1 semi:2 match:2 characterized:2 adapt:1 believed:2 retrieval:1 visit:1 a1:6 plugging:1 qi:1 laplacian:1 scalable:2 vision:3 metric:5 expectation:2 c1:4 affecting:2 else:1 median:3 grow:1 unlike:2 kwk22:1 mcnames:1 
incorporates:1 structural:1 presence:1 ter:1 yang:1 split:29 enough:1 reduce:2 idea:1 cn:2 knowing:1 sibling:1 fleet:1 whether:1 motivated:1 returned:2 generally:2 useful:1 clear:1 eigenvectors:2 nister:1 diameter:2 generate:1 exist:1 disjoint:6 algorithmically:2 dasgupta:5 group:1 threshold:3 verified:1 ram:3 graph:3 fraction:3 sum:1 enforced:4 uncertainty:1 almost:1 separation:1 appendix:3 comparable:2 bound:16 followed:1 guaranteed:1 precisely:1 constraint:6 x2:1 software:1 answered:3 min:2 fukunaga:1 kumar:2 performing:2 separable:1 structured:1 alternate:11 combination:2 ball:1 kd:10 smaller:9 slightly:2 nagendra:1 partitioned:1 making:1 projecting:1 outlier:8 indexing:2 restricted:7 computationally:1 equation:7 discus:1 end:3 disjoints:1 hierarchical:2 occasional:1 appropriate:4 away:2 enforce:1 spectral:2 neighbourhood:1 rp:9 original:2 top:2 clustering:7 log2:17 spill:2 maintaining:1 establish:4 covari:1 implied:1 objective:2 question:3 quantity:3 dependence:4 usual:2 minx:1 distance:21 link:1 separate:1 separating:1 unable:1 degrade:1 me:1 manifold:2 cauchy:1 trivial:3 code:2 index:2 relationship:2 providing:1 ratio:1 minimizing:1 balance:6 equivalently:1 nc:1 mostly:1 quantizers:1 potentially:1 statement:2 implementation:1 boltzmann:1 kpotufe:1 upper:3 datasets:3 situation:1 hinton:1 precise:3 dc:4 introduced:2 required:1 specified:2 c3:2 connection:3 c4:5 learned:1 established:3 beyond:3 suggested:1 usually:4 pattern:4 xm:1 built:2 max:9 explanation:1 overlap:1 misclassification:1 difficulty:2 force:1 mn:5 representing:1 scheme:5 improve:2 imply:6 inversely:2 axis:7 nearestneighbors:1 ues:1 prior:1 geometric:1 literature:1 relative:4 freund:4 loss:2 expect:2 proportional:5 validation:1 incurred:1 affine:1 dq:2 verma:6 row:2 side:1 allow:2 neighbor:44 dimension:9 depth:32 valid:1 avoids:1 plain:1 vocabulary:1 adaptive:3 projected:2 simplified:1 refinement:1 employing:2 transaction:4 approximate:4 compact:1 ignore:1 cutting:1 dlog2:4 conclude:1 xi:3 
fergus:1 search:83 iterative:1 table:10 learn:1 robust:1 obtaining:1 schuurmans:1 expansion:11 du:14 agray:1 pk:3 hierarchically:1 spread:1 child:3 categorized:1 x1:1 xu:2 georgia:2 lc:4 explicit:2 candidate:22 governed:1 hw:7 rk:6 theorem:14 down:2 removing:1 explored:1 svm:1 evidence:2 intrinsic:5 exists:1 quantization:58 mnist:2 dissimilarity:1 budget:1 margin:44 kx:8 easier:1 locality:2 tc:7 logarithmic:1 explore:3 pcm:1 contained:4 doubling:2 chang:2 corresponds:5 worstcase:2 acm:4 sorted:1 towards:1 absence:2 feasible:2 hard:2 specifically:1 typical:1 hyperplane:7 principal:7 lemma:4 rarely:1 select:1 formally:1 support:3 unbalanced:1 |
4,556 | 5,122 | Solving inverse problem of Markov chain
with partial observations
Tetsuro Morimura
IBM Research - Tokyo
[email protected]
Takayuki Osogami
IBM Research - Tokyo
[email protected]
Tsuyoshi Id?e
IBM T.J. Watson Research Center
[email protected]
Abstract
The Markov chain is a convenient tool to represent the dynamics of complex systems such as traffic and social systems, where probabilistic transition takes place
between internal states. A Markov chain is characterized by initial-state probabilities and a state-transition probability matrix. In the traditional setting, a major
goal is to study properties of a Markov chain when those probabilities are known.
This paper tackles an inverse version of the problem: we find those probabilities
from partial observations at a limited number of states. The observations include
the frequency of visiting a state and the rate of reaching a state from another. Practical examples of this task include traffic monitoring systems in cities, where we
need to infer the traffic volume on single link on a road network from a limited
number of observation points. We formulate this task as a regularized optimization problem, which is efficiently solved using the notion of natural gradient. Using synthetic and real-world data sets including city traffic monitoring data, we
demonstrate the effectiveness of our method.
1 Introduction
The Markov chain is a standard model for analyzing the dynamics of stochastic systems, including
economic systems [29], traffic systems [11], social systems [12], and ecosystems [6]. There is a large
body of the literature on the problem of analyzing the properties a Markov chain given its initial
distribution and a matrix of transition probabilities [21, 26]. For example, there exist established
methods for analyzing the stationary distribution and the mixing time of a Markov chain [23, 16].
In these traditional settings, the initial distribution and the transition-probability matrix are given a
priori or directly estimated.
Unfortunately, it is often impractical to directly measure or estimate the parameters (i.e., the initial
distribution and the transition-probability matrix) of the Markov chain that models a particular system under consideration. For example, one can analyze a traffic system [27, 24], including how the
vehicles are distributed across a city, by modeling the dynamics of vehicles as a Markov chain [11].
It is, however, difficult to directly measure the fraction of the vehicles that turns right or left at every
intersection.
The inverse problem of a Markov chain that we address in this paper is an inverse version of the
traditional problem of analyzing a Markov chain with given input parameters. Namely, our goal is
to estimate the parameters of a Markov chain from partial observations of the corresponding system.
In the context of the traffic system, for example, we seek to find the parameters of a Markov chain,
given the traffic volumes at stationary observation points and/or the rate of vehicles moving between
1
Figure 1: An inverse Markov chain problem. The traffic volume on every road is inferred from
traffic volumes at limited observation points and/or the rates of vehicles transitioning between these
points.
these points. Such statistics can be reliably estimated from observations with web-cameras [27],
automatic number plate recognition devices [10], or radio-frequency identification (RFID) [25],
whose availability is however limited to a small number of observation points in general (see Figure
1). By estimating the parameters of a Markov chain and analyzing its stationary probability, one can
infer the traffic volumes at unobserved points.
The primary contribution of this paper is the first methodology for solving the inverse problem of
a Markov chain when only the observation at a limited number of stationary observation points are
given. Specifically, we assume that the frequency of visiting a state and/or the rate of reaching a
state from another are given for a small number of states. We formulate the inverse problem of a
Markov chain as a regularized optimization problem. Then we can efficiently find a solution to the
inverse problem of a Markov chain based on the notion of natural gradient [3].
The inverse problem of a Markov chain has been addressed in the literature [9, 28, 31], but the
existing methods assume that sample paths of the Markov chain are available. Related work of
inverse reinforcement learning [20, 1, 32] also assumes that sample paths are available. In the
context of the traffic system, the sample paths corresponds to probe-car data (i.e., sequence of GPS
points). However, the probe-car data is expensive and rarely available in public. Even when it is
available, it is often limited to vehicles of a particular type such as taxis or in a particular region. On
the other hand, stationary observation data is often less expensive and more obtainable. For instance,
web-camera images are available even in developing countries such as Kenya [2].
The rest of this paper is organized as follows. In Section 2, preliminaries are introduced. In Section
3, we formulate an inverse problem of a Markov chain as a regularized optimization problem. A
method for efficiently solving the inverse problem of a Markov chain is proposed in Section 4. An
example of implementation is provided in Section 5. Section 6 evaluates the proposed method with
both artificial and real-world data sets including the one from traffic monitoring in a city.
2
Preliminaries
A discrete-time Markov chain [26, 21] is a stochastic process, X = (X₀, X₁, …), where X_t is a random variable representing the state at time t ∈ ℤ_{≥0}. A Markov chain is defined by the triplet {𝒳, p_I, p_T}, where 𝒳 = {1, …, |𝒳|} is a finite set of states and |𝒳| ≥ 2 is the number of states. The function p_I : 𝒳 → [0, 1] specifies the initial-state probability, i.e., p_I(x) ≡ Pr(X₀ = x), and p_T : 𝒳 × 𝒳 → [0, 1] specifies the state-transition probability from x to x′, i.e., p_T(x′ | x) ≡ Pr(X_{t+1} = x′ | X_t = x), ∀t ∈ ℤ_{≥0}. Note that the state transition is conditionally independent of the past states given the current state, which is called the Markov property.
Any Markov chain can be converted into another Markov chain, called a Markov chain with restart, by modifying the transition probability. There, the initial-state probability stays unchanged, but the state-transition probability is modified into p such that

$$p(x' \mid x) \equiv \beta\, p_T(x' \mid x) + (1 - \beta)\, p_I(x'), \qquad (1)$$

where β ∈ [0, 1) is a continuation rate of the Markov chain.¹ In the limit β → 1, the Markov chain with restart is equivalent to the original Markov chain. In the following, we refer to p as the (total) transition probability, and to p_T as a partial transition (or p-transition) probability.
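As a concrete illustration of Eq. (1), the restart construction is a one-line matrix operation. The following sketch (the function name and the toy three-state chain are our own, not from the paper) builds the total transition matrix from a p-transition matrix and an initial distribution:

```python
import numpy as np

def restart_transition(P_T, p_I, beta):
    """Total transition matrix of a Markov chain with restart, Eq. (1):
    p(x'|x) = beta * p_T(x'|x) + (1 - beta) * p_I(x')."""
    P_T = np.asarray(P_T, dtype=float)
    p_I = np.asarray(p_I, dtype=float)
    # Broadcasting adds the same restart row (1 - beta) * p_I to every state.
    return beta * P_T + (1.0 - beta) * p_I[np.newaxis, :]

# Toy 3-state chain: each row of P_T is a p-transition distribution.
P_T = np.array([[0.0, 1.0, 0.0],
                [0.5, 0.0, 0.5],
                [0.0, 1.0, 0.0]])
p_I = np.array([0.6, 0.3, 0.1])
P = restart_transition(P_T, p_I, beta=0.9)
```

Since each row of P_T and the vector p_I sum to one, every row of the result is again a probability distribution.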
¹ The rate β can depend on the current state x, so that β can be replaced with β(x) throughout the paper. For readability, we assume β is a constant.
Our main targeted applications are (massive) multi-agent systems such as traffic systems. There, restarting a chain means that an agent's origin of a trip is decided by the initial distribution, and the trip ends at each time step with probability 1 − β.
We model the initial probability and the p-transition probability with parameters α ∈ ℝ^{d₁} and η ∈ ℝ^{d₂}, respectively, where d₁ and d₂ are the numbers of those parameters. We thus denote them as p_{Iθ} and p_{Tθ}, respectively, and the total transition probability as p_θ, where θ is the total model parameter, θ ≡ [α⊤, η⊤, ς]⊤ ∈ ℝ^d, with d = d₁ + d₂ + 1 and ς ≡ σ⁻¹(β) for the inverse of the sigmoid function, σ⁻¹. That is, Eq. (1) is rewritten as

$$p_\theta(x' \mid x) \equiv \beta\, p_{T\theta}(x' \mid x) + (1 - \beta)\, p_{I\theta}(x'). \qquad (2)$$

The Markov chain with restart can then be represented as M(θ) ≡ {𝒳, p_{Iθ}, p_{Tθ}, β}.
We also make the following assumptions, which are standard in the study of Markov chains and their variants [26, 7].

Assumption 1. The Markov chain M(θ) is ergodic (irreducible and aperiodic) for any θ ∈ ℝ^d.

Assumption 2. The initial probability p_{Iθ} and the p-transition probability p_{Tθ} are differentiable everywhere with respect to θ ∈ ℝ^d.²
Under Assumption 1, there exists a unique stationary probability, π_θ(·), which satisfies the balance equation

$$\pi_\theta(x') = \sum_{x \in \mathcal{X}} p_\theta(x' \mid x)\, \pi_\theta(x), \quad \forall x' \in \mathcal{X}. \qquad (3)$$

This stationary probability is equal to the limiting distribution and is independent of the initial state: π_θ(x′) = lim_{t→∞} Pr(X_t = x′ | X₀ = x, M(θ)), ∀x ∈ 𝒳. Assumption 2 implies that the transition probability p_θ is also differentiable for any state pair (x, x′) ∈ 𝒳 × 𝒳 with respect to any θ ∈ ℝ^d.
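Numerically, the balance equation (3) can be solved as an eigenvector problem: the stationary vector is the (normalized) eigenvector of the transposed transition matrix for eigenvalue 1. A minimal sketch, assuming an ergodic chain per Assumption 1 (the example matrix is hypothetical):

```python
import numpy as np

def stationary_distribution(P):
    """Stationary probability pi of Eq. (3): pi(x') = sum_x p(x'|x) pi(x),
    i.e., pi = P^T pi, solved via the eigenvector of P^T for eigenvalue 1."""
    w, V = np.linalg.eig(P.T)
    k = np.argmin(np.abs(w - 1.0))     # index of the eigenvalue closest to 1
    pi = np.real(V[:, k])
    return pi / pi.sum()               # normalize (also fixes the sign)

P = np.array([[0.1, 0.9, 0.0],
              [0.4, 0.1, 0.5],
              [0.3, 0.6, 0.1]])
pi = stationary_distribution(P)
```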
Finally, we define hitting probabilities for a Markov chain of indefinite horizon. This Markov chain is represented as M̃(θ) = {𝒳, p_{Tθ}, β}; it evolves according to the p-transition probability p_{Tθ}, not p_θ, and terminates with probability 1 − β at every step. The hitting probability of a state x′ given x is defined as

$$h_\theta(x' \mid x) \equiv \Pr\bigl(x' \in \tilde{X} \mid X_0 = x,\, \tilde{M}(\theta)\bigr), \qquad (4)$$

where X̃ = (X̃₀, …, X̃_T) is a sample path of M̃(θ) until the stopping time, T.
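The hitting probability in Eq. (4) can be estimated by simulating the terminating chain M̃(θ) directly. This Monte Carlo sketch is our own illustration (a closed-form expression is derived later, in Section 4.2); it counts the paths that reach the target state before termination:

```python
import numpy as np

def hitting_prob_mc(P_T, beta, x0, target, n_paths=20000, seed=0):
    """Monte Carlo estimate of h(target | x0) in Eq. (4): the chain starts
    at x0, moves by p_T, and terminates with probability 1 - beta per step."""
    rng = np.random.default_rng(seed)
    n = P_T.shape[0]
    hits = 0
    for _ in range(n_paths):
        x = x0
        while True:
            if x == target:            # target visited along the sample path
                hits += 1
                break
            if rng.random() > beta:    # chain terminates at this step
                break
            x = rng.choice(n, p=P_T[x])
    return hits / n_paths

P_T = np.array([[0.0, 1.0, 0.0],
                [0.5, 0.0, 0.5],
                [1.0, 0.0, 0.0]])
h = hitting_prob_mc(P_T, beta=0.8, x0=0, target=2)
```

For this toy chain the exact value solves h(2|1) = 0.8·(0.5·h(2|0) + 0.5) with h(2|0) = 0.8·h(2|1), giving h(2|0) = 0.8·0.4/0.68 ≈ 0.47.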
3 Inverse Markov Chain Problem
Here we formulate an inverse problem of the Markov chain M(θ). In the inverse problem, the model family M ≡ {M(θ) | θ ∈ ℝ^d}, which may be subject to a transition structure as in a road network, is known or given a priori, but the model parameter θ is unknown. In Section 3.1, we define the inputs of the problem, which are associated with functions of the Markov chain. Objective functions for the inverse problem are discussed in Section 3.2.
3.1 Problem setting
The input and output of our inverse problem of the Markov chain are as follows.

• Inputs are the values measured at a portion of the states, x ∈ 𝒳_o, where 𝒳_o ⊆ 𝒳 and usually |𝒳_o| ≪ |𝒳|. The measured values include the frequency of visiting a state, f(x), x ∈ 𝒳_o. In addition, the rate of reaching a state from another, g(x, x′), might also be given for (x, x′) ∈ 𝒳_o × 𝒳_o, where g(x, x) is equal to 1. In the context of traffic monitoring, f(x) denotes the number of vehicles that went through an observation point x, and g(x, x′) denotes the number of vehicles that went through x and x′ in this order, divided by f(x).

• Output is the estimated parameter θ of the Markov chain M(θ), which specifies the total-transition probability function p_θ in Eq. (2).

² We assume ∂/∂θ_i log p_{Iθ}(x) = 0 when p_{Iθ}(x) = 0, and an analogous assumption applies to p_{Tθ}.
The first step of our formulation is to relate f and g to the Markov chain. Specifically, we assume that the observed f is proportional to the true stationary probability of the Markov chain:

$$\bar{\pi}(x) = c\, f(x), \quad x \in \mathcal{X}_o, \qquad (5)$$

where c is an unknown constant that satisfies the normalization condition. We further assume that the observed reaching rate is equal to the true hitting probability of the Markov chain:

$$\bar{h}(x' \mid x) = g(x, x'), \quad (x, x') \in \mathcal{X}_o \times \mathcal{X}_o. \qquad (6)$$
3.2 Objective function

Our objective is to find a parameter θ* such that π_{θ*} and h_{θ*} well approximate π̄ and h̄ in Eqs. (5) and (6). We use the following objective function to be minimized:

$$L(\theta) \equiv \lambda L_d(\theta) + (1 - \lambda) L_h(\theta) + \kappa R(\theta), \qquad (7)$$

where L_d and L_h are cost functions measuring the quality of the approximation of π̄ and h̄, respectively; they are specified in the following subsections. The function R(θ) is a regularization term on θ, such as ‖θ‖₂² or ‖θ‖₁. The parameters λ ∈ [0, 1] and κ ≥ 0 balance the cost functions and the regularization term and will be optimized by cross-validation. Altogether, our problem is to find the parameter θ* = arg min_{θ∈ℝ^d} L(θ).
3.2.1 Cost function for stationary probability function

Because the constant c in Eq. (5) is unknown, we cannot, for example, minimize a squared error such as Σ_{x∈𝒳_o} (π̄(x) − π_θ(x))². Thus, we need to derive an alternative cost function of π_θ that is independent of c.

For L_d(θ), one natural choice might be a Kullback–Leibler (KL) divergence,

$$L_d^{\mathrm{KL}}(\theta) \equiv \sum_{x \in \mathcal{X}_o} \bar{\pi}(x) \log \frac{\bar{\pi}(x)}{\pi_\theta(x)} = -c \sum_{x \in \mathcal{X}_o} f(x) \log \pi_\theta(x) + o,$$

where o is a term independent of θ. The minimizer of L_d^{KL}(θ) is independent of c. However, minimizing L_d^{KL} will lead to a biased estimate. This is because L_d^{KL} is decreased by increasing Σ_{x∈𝒳_o} π_θ(x) while the ratios π_θ(x)/π_θ(x′), ∀x, x′ ∈ 𝒳_o, stay unchanged. This implies that, because Σ_{x∈𝒳_o} π_θ(x) + Σ_{x∈𝒳∖𝒳_o} π_θ(x) = 1, minimizing L_d^{KL} has the unwanted side-effect of overvaluing Σ_{x∈𝒳_o} π_θ(x) and undervaluing Σ_{x∈𝒳∖𝒳_o} π_θ(x).
Here we propose an alternative form of L_d that avoids this side-effect. It uses logarithmic ratios of the stationary probabilities:

$$L_d(\theta) \equiv \frac{1}{2} \sum_{i \in \mathcal{X}_o} \sum_{j \in \mathcal{X}_o} \left( \log \frac{\bar{\pi}(i)}{\bar{\pi}(j)} - \log \frac{\pi_\theta(i)}{\pi_\theta(j)} \right)^{\!2} = \frac{1}{2} \sum_{i \in \mathcal{X}_o} \sum_{j \in \mathcal{X}_o} \left( \log \frac{f(i)}{f(j)} - \log \frac{\pi_\theta(i)}{\pi_\theta(j)} \right)^{\!2}. \qquad (8)$$

The log-ratio of two probabilities represents the difference of their information contents in the sense of information theory [17]. Thus, this function can be regarded as a sum of squared errors between π̄(x) and π_θ(x) over x ∈ 𝒳_o with respect to relative information contents. From a different point of view, Eq. (8) follows from maximizing the likelihood of θ under the assumption that the observation log f(i) − log f(j) has Gaussian white noise N(0, σ²). This assumption is satisfied when f(i) has a log-normal distribution, LN(µ_i, (σ/√2)²), independently for each i, where µ_i is the true location parameter and the median of f(i) is equal to e^{µ_i}.
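Eq. (8) is straightforward to implement, and its scale invariance (the property that motivates it, since the constant c in Eq. (5) is unknown) is easy to check numerically. A small sketch, with function and variable names of our own choosing:

```python
import numpy as np

def L_d(f_obs, pi_obs):
    """Log-ratio cost of Eq. (8) over the observed states:
    0.5 * sum_{i,j} (log(f_i / f_j) - log(pi_i / pi_j))^2."""
    lf, lp = np.log(f_obs), np.log(pi_obs)
    # D[i, j] holds the mismatch of the (i, j) log-ratios.
    D = (lf[:, None] - lf[None, :]) - (lp[:, None] - lp[None, :])
    return 0.5 * np.sum(D ** 2)

f = np.array([10.0, 20.0, 40.0])
pi_good = 0.01 * f                    # proportional to f: cost vanishes
pi_bad = np.array([0.4, 0.2, 0.1])
```

Because only ratios enter the cost, rescaling pi_obs by any positive constant leaves its value unchanged, so the unknown normalization c never needs to be estimated.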
3.2.2 Cost function for hitting probability function

Unlike for L_d(θ), there are several natural options for L_h(θ), including a mean squared error and a mean absolute error. Here we use the following standard squared error in log space, based on Eq. (6):

$$L_h(\theta) \equiv \frac{1}{2} \sum_{i \in \mathcal{X}_o} \sum_{j \in \mathcal{X}_o} \left( \log g(i, j) - \log h_\theta(j \mid i) \right)^2. \qquad (9)$$

Eq. (9) follows from maximizing the likelihood of θ under the assumption that the observation log g(i, j) has Gaussian white noise, as in the case of L_d(θ).
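The hitting-probability cost of Eq. (9) and the combined objective of Eq. (7) can be sketched as below. This is a hypothetical illustration with an L2 regularizer; `lam` plays the role of the trade-off weight in [0, 1] and `kappa` the regularization weight:

```python
import numpy as np

def L_h(g_obs, h_model):
    """Squared log-error of Eq. (9) between observed reaching rates g(i, j)
    and model hitting probabilities h(j | i); both are |Xo| x |Xo| arrays."""
    D = np.log(g_obs) - np.log(h_model)
    return 0.5 * np.sum(D ** 2)

def objective(theta, Ld_val, Lh_val, lam, kappa):
    """Total cost of Eq. (7) with R(theta) = ||theta||_2^2."""
    return lam * Ld_val + (1.0 - lam) * Lh_val + kappa * np.dot(theta, theta)

g = np.array([[1.0, 0.3],
              [0.2, 1.0]])
```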
4 Gradient-based Approach

Let us consider (local) minimization of the objective function L(θ) in Eq. (7). We adopt a gradient-descent approach, in which the parameter θ is optimized by the following iteration, with the notation ∇_θ L(θ) ≡ [∂L(θ)/∂θ₁, …, ∂L(θ)/∂θ_d]⊤:

$$\theta_{t+1} = \theta_t - \zeta_t\, G_{\theta_t}^{-1} \left\{ \lambda \nabla_\theta L_d(\theta_t) + (1 - \lambda) \nabla_\theta L_h(\theta_t) + \kappa \nabla_\theta R(\theta_t) \right\}, \qquad (10)$$

where ζ_t > 0 is an updating rate. The matrix G_{θ_t} ∈ ℝ^{d×d}, called the metric of the parameter θ, is an arbitrary bounded positive definite matrix. When G_{θ_t} is set to the identity matrix of size d, I_d, the update formula in Eq. (10) becomes ordinary gradient descent. However, since the tangent space at a point of a manifold representing M(θ) is generally different from an orthonormal space with respect to θ [4], one can apply the idea of the natural gradient [3] to the metric G_θ, expecting to make the procedure more efficient. This is described in Section 4.1.
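One iteration of Eq. (10) only needs the three gradients and a linear solve against the metric G. A minimal sketch (the quadratic sanity check is our own; with G = I it reduces to ordinary gradient descent):

```python
import numpy as np

def update_step(theta, grad_Ld, grad_Lh, grad_R, G, lam, kappa, zeta):
    """theta <- theta - zeta * G^{-1} (lam*grad_Ld + (1-lam)*grad_Lh + kappa*grad_R),
    as in Eq. (10); solving G x = g avoids forming G^{-1} explicitly."""
    g = lam * grad_Ld + (1.0 - lam) * grad_Lh + kappa * grad_R
    return theta - zeta * np.linalg.solve(G, g)

# Sanity check on a toy quadratic 0.5*||theta||^2, whose gradient is theta:
theta = np.array([1.0, -2.0])
for _ in range(200):
    theta = update_step(theta, theta, theta, np.zeros(2),
                        G=np.eye(2), lam=0.5, kappa=0.0, zeta=0.5)
```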
The gradients of L_d and L_h in Eq. (10) are given as

$$\nabla_\theta L_d(\theta) = \sum_{i \in \mathcal{X}_o} \sum_{j \in \mathcal{X}_o} \left( \log \frac{f(i)}{f(j)} - \log \frac{\pi_\theta(i)}{\pi_\theta(j)} \right) \left( \nabla_\theta \log \pi_\theta(j) - \nabla_\theta \log \pi_\theta(i) \right),$$

$$\nabla_\theta L_h(\theta) = -\sum_{i \in \mathcal{X}_o} \sum_{j \in \mathcal{X}_o} \left( \log g(i, j) - \log h_\theta(j \mid i) \right) \nabla_\theta \log h_\theta(j \mid i).$$

In order to implement the update rule of Eq. (10), we need to compute the gradient of the logarithmic stationary probability, ∇_θ log π_θ, as well as the hitting probability h_θ and its gradient ∇_θ h_θ. In Section 4.2, we describe how to compute them, which turns out to be quite non-trivial.
4.1 Natural gradient

Usually, a parametric family of Markov chains, M ≡ {M(θ) | θ ∈ ℝ^d}, forms a manifold structure with respect to the parameter θ under information divergences such as the KL divergence, rather than a Euclidean structure. Thus the ordinary gradient, i.e., Eq. (10) with G_θ = I_d, does not properly reflect the differences in the sensitivities of, and the correlations between, the elements of θ. Accordingly, the ordinary gradient is generally different from the steepest direction on the manifold, and optimization with the ordinary gradient often becomes unstable or falls into a learning plateau [5].

For efficient learning, we consider an appropriate G_θ based on the notion of the natural gradient (NG) [5]. The NG represents the steepest descent direction of a function b(θ) in a Riemannian space³ by −R_θ⁻¹ ∇_θ b(θ) when the Riemannian space is defined by the metric matrix R_θ. An appropriate Riemannian metric on a statistical model Y with parameters θ is known to be its Fisher information matrix (FIM):⁴

$$\sum_y \Pr(Y = y \mid \theta)\, \nabla_\theta \log \Pr(Y = y \mid \theta)\, \nabla_\theta \log \Pr(Y = y \mid \theta)^\top.$$

In our case, the joint probability p_θ(x′ | x) π_θ(x) for x, x′ ∈ 𝒳 fully specifies M(θ) at the steady state, due to the Markov property. Thus we propose to use the following G_θ in the update rule of Eq. (10):

$$G_\theta = F_\theta + \delta I_d, \qquad (11)$$

where F_θ is the FIM of p_θ(x′ | x) π_θ(x),

$$F_\theta \equiv \sum_{x \in \mathcal{X}} \pi_\theta(x) \left( \nabla_\theta \log \pi_\theta(x)\, \nabla_\theta \log \pi_\theta(x)^\top + \sum_{x' \in \mathcal{X}} p_\theta(x' \mid x)\, \nabla_\theta \log p_\theta(x' \mid x)\, \nabla_\theta \log p_\theta(x' \mid x)^\top \right).$$

The second term with δ ≥ 0 in Eq. (11) is needed to make G_θ positive definite.
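Given the score vectors ∇_θ log π_θ(x) and ∇_θ log p_θ(x′ | x), the metric of Eq. (11) is a weighted sum of outer products. A sketch under our own array conventions (dlog_pi is |X| × d, dlog_P is |X| × |X| × d):

```python
import numpy as np

def fisher_metric(pi, P, dlog_pi, dlog_P, delta=1e-6):
    """G = F + delta*I, where F is the Fisher information matrix of the
    steady-state joint probability p(x'|x) * pi(x), as in Eq. (11)."""
    d = dlog_pi.shape[1]
    F = np.zeros((d, d))
    for x in range(len(pi)):
        F += pi[x] * np.outer(dlog_pi[x], dlog_pi[x])
        for x2 in range(len(pi)):
            F += pi[x] * P[x, x2] * np.outer(dlog_P[x, x2], dlog_P[x, x2])
    return F + delta * np.eye(d)
```

The natural-gradient direction is then obtained by solving G x = ∇L, i.e., by passing this G to the update of Eq. (10) in place of the identity.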
³ A parameter space is a Riemannian space if the parameter θ ∈ ℝ^d lies on a Riemannian manifold defined by a positive definite matrix called the Riemannian metric matrix, R_θ ∈ ℝ^{d×d}. The squared length of a small incremental vector Δθ connecting θ to θ + Δθ in a Riemannian space is given by ‖Δθ‖²_{R_θ} = Δθ⊤ R_θ Δθ.

⁴ The FIM is the unique metric matrix of the second-order Taylor expansion of the KL divergence, that is, Σ_y Pr(Y = y | θ) log [Pr(Y = y | θ) / Pr(Y = y | θ + Δθ)] ≃ ½ ‖Δθ‖²_{F_θ}.
4.2 Computing the gradient

To derive an expression for computing ∇_θ log π_θ, we use the following vector and matrix notation: π_θ ≡ [π_θ(1), …, π_θ(|𝒳|)]⊤ and (P_θ)_{x,x′} ≡ p_θ(x′ | x). The gradient of the logarithmic stationary probability with respect to θ_i is then given by

$$\frac{\partial}{\partial \theta_i} \log \pi_\theta \equiv \nabla_{\theta_i} \log \pi_\theta = \mathrm{Diag}(\pi_\theta)^{-1} \left( I_{|\mathcal{X}|} - P_\theta^\top + \pi_\theta \mathbf{1}_{|\mathcal{X}|}^\top \right)^{-1} \left( \nabla_{\theta_i} P_\theta^\top \right) \pi_\theta, \qquad (12)$$

where Diag(a) is a diagonal matrix whose diagonal elements consist of a vector a, log a is the element-wise logarithm of a, and 1_n denotes a column vector of size n whose elements are all 1.
In the remainder of this section, we prove Eq. (12) by using the following proposition.

Proposition 1 ([7]). If A ∈ ℝ^{n×n} satisfies lim_{K→∞} A^K = 0, then the inverse of (I − A) exists, and (I − A)⁻¹ = lim_{K→∞} Σ_{k=0}^{K} A^k.

Equation (3) is rewritten as π_θ = P_θ⊤ π_θ. Note that π_θ is equal to a normalized eigenvector of P_θ⊤ whose eigenvalue is 1. By taking a partial derivative of Eq. (3) with respect to θ_i, we obtain Diag(π_θ) ∇_{θ_i} log π_θ = (∇_{θ_i} P_θ⊤) π_θ + P_θ⊤ Diag(π_θ) ∇_{θ_i} log π_θ. Though we thus get the linear simultaneous equation for ∇_{θ_i} log π_θ,

$$\left( I_{|\mathcal{X}|} - P_\theta^\top \right) \mathrm{Diag}(\pi_\theta)\, \nabla_{\theta_i} \log \pi_\theta = \left( \nabla_{\theta_i} P_\theta^\top \right) \pi_\theta, \qquad (13)$$

the inverse of (I_{|𝒳|} − P_θ⊤) Diag(π_θ) does not exist. This comes from the fact that (I_{|𝒳|} − P_θ⊤) Diag(π_θ) 1_{|𝒳|} = 0. So we add to Eq. (13) a term involving 1_{|𝒳|}⊤ Diag(π_θ) ∇_{θ_i} log π_θ = 1_{|𝒳|}⊤ ∇_{θ_i} π_θ = ∇_{θ_i} {1_{|𝒳|}⊤ π_θ} = 0, such that (I_{|𝒳|} − P_θ⊤ + π_θ 1_{|𝒳|}⊤) Diag(π_θ) ∇_{θ_i} log π_θ = (∇_{θ_i} P_θ⊤) π_θ. The inverse of (I_{|𝒳|} − P_θ⊤ + π_θ 1_{|𝒳|}⊤) exists, because of Proposition 1 and the fact that lim_{k→∞} (P_θ⊤ − π_θ 1_{|𝒳|}⊤)^k = lim_{k→∞} (P_θ⊤)^k − π_θ 1_{|𝒳|}⊤ = 0. The inverse of Diag(π_θ) also exists, because π_θ(x) is positive for any x ∈ 𝒳 under Assumption 1. Hence we get Eq. (12).
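Eq. (12) can be checked numerically against a finite-difference approximation. The sketch below uses a hypothetical two-state chain with a single sigmoid-parameterized transition; everything except the formula of Eq. (12) itself is our own test scaffolding:

```python
import numpy as np

def stationary(P):
    w, V = np.linalg.eig(P.T)
    pi = np.real(V[:, np.argmin(np.abs(w - 1.0))])
    return pi / pi.sum()

def dlog_pi(P, dP):
    """Eq. (12) for one parameter theta_i, with dP = dP/dtheta_i:
    Diag(pi)^{-1} (I - P^T + pi 1^T)^{-1} (dP^T) pi, via a linear solve."""
    n = P.shape[0]
    pi = stationary(P)
    A = np.eye(n) - P.T + np.outer(pi, np.ones(n))
    return np.linalg.solve(A, dP.T @ pi) / pi

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

def P_of(theta):
    s = sigmoid(theta)
    return np.array([[s, 1.0 - s],
                     [0.3, 0.7]])

theta, eps = 0.4, 1e-5
s = sigmoid(theta)
dP = np.array([[s * (1.0 - s), -s * (1.0 - s)],   # derivative of P_of(theta)
               [0.0, 0.0]])
analytic = dlog_pi(P_of(theta), dP)
numeric = (np.log(stationary(P_of(theta + eps)))
           - np.log(stationary(P_of(theta - eps)))) / (2.0 * eps)
```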
To derive expressions for computing h_θ and ∇_θ log h_θ, we use the notation h_θ(x) ≡ [h_θ(x | 1), …, h_θ(x | |𝒳|)]⊤ for the hitting probabilities in Eq. (4), and (P_{Tθ})_{x,x′} ≡ p_{Tθ}(x′ | x) for the p-transition probabilities in Eq. (1). The hitting probabilities and their gradients with respect to θ_i can be computed in the following closed forms:

$$h_\theta(x) = \left( I_{|\mathcal{X}|} - \beta P_{T\theta}^{\backslash x} \right)^{-1} e_{|\mathcal{X}|}^{x}, \qquad (14)$$

$$\nabla_{\theta_i} \log h_\theta(x) = \beta\, \mathrm{Diag}(h_\theta(x))^{-1} \left( I_{|\mathcal{X}|} - \beta P_{T\theta}^{\backslash x} \right)^{-1} \left( \nabla_{\theta_i} P_{T\theta}^{\backslash x} \right) h_\theta(x), \qquad (15)$$

where e^x_{|𝒳|} denotes a column vector of size |𝒳| whose x-th element is 1 and whose other elements are all zero, and the matrix P_{Tθ}^{\x} is defined as (I_{|𝒳|} − e^x_{|𝒳|} e^{x⊤}_{|𝒳|}) P_{Tθ}. We derive Eqs. (14) and (15) as follows. The hitting probabilities in Eq. (4) can be represented in the recursive form

$$h_\theta(x' \mid x) = \begin{cases} 1 & \text{if } x' = x, \\ \beta \sum_{y \in \mathcal{X}} p_{T\theta}(y \mid x)\, h_\theta(x' \mid y) & \text{otherwise,} \end{cases}$$

which can be written in matrix notation as h_θ(x) = e^x_{|𝒳|} + β P_{Tθ}^{\x} h_θ(x). Because the inverse of (I_{|𝒳|} − β P_{Tθ}^{\x}) exists by Proposition 1 and the fact that lim_{k→∞} (β P_{Tθ}^{\x})^k = 0, we get Eq. (14). In a similar way, one can prove Eq. (15).
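Eq. (14) reduces the indefinite-horizon hitting probabilities to a single linear solve. A small sketch with a hypothetical three-state chain, consistent with the recursive fixed point h = e^x + β P^{\x}_T h:

```python
import numpy as np

def hitting_probs(P_T, beta, target):
    """Eq. (14): h(target | .) = (I - beta * P_cut)^{-1} e_target, where
    P_cut is P_T with the target row zeroed, i.e. (I - e_x e_x^T) P_T."""
    n = P_T.shape[0]
    P_cut = P_T.copy()
    P_cut[target, :] = 0.0
    e = np.zeros(n)
    e[target] = 1.0
    return np.linalg.solve(np.eye(n) - beta * P_cut, e)

P_T = np.array([[0.0, 1.0, 0.0],
                [0.5, 0.0, 0.5],
                [1.0, 0.0, 0.0]])
h = hitting_probs(P_T, beta=0.8, target=2)
```

For this chain the recursion can be solved by hand, giving h(2|1) = 0.4/0.68 ≈ 0.588 and h(2|0) = 0.8·h(2|1) ≈ 0.471; the gradient in Eq. (15) can be validated the same way by finite differences on a parameterized P_T.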
5 Implementation

To implement the proposed method, the parametric models of the initial probability p_{Iθ} and the p-transition probability p_{Tθ} in Eq. (1) need to be specified. We provide intuitive models based on the logit function [8].

The initial probability is modeled as

$$p_{I\theta}(x) \equiv \frac{\exp(s_I(x; \alpha))}{\sum_{y \in \mathcal{X}} \exp(s_I(y; \alpha))}, \qquad (16)$$

where s_I(x; α) is a state score function with parameter α ≡ [α^{loc⊤}, α^{glo⊤}]⊤ ∈ ℝ^{d₁}, consisting of a local parameter α^{loc} ∈ ℝ^{|𝒳|} and a global parameter α^{glo} ∈ ℝ^{d₁−|𝒳|}. It is defined as

$$s_I(x; \alpha) \equiv \alpha^{loc}_{x} + \phi_I(x)^\top \alpha^{glo}, \qquad (17)$$

where φ_I(x) ∈ ℝ^{d₁−|𝒳|} is a feature vector of a state x. In the case of a road network, a state corresponds to a road segment; φ_I(x) may then, for example [18], be defined with indicators of whether there are particular types of buildings near the road segment x. We refer to the first and second terms on the right-hand side of Eq. (17) as a local preference and a global preference, respectively. If a simpler model is preferred, either of them can be omitted.
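Eqs. (16) and (17) amount to a softmax over per-state scores with a local and a global part. A sketch (the feature values are hypothetical):

```python
import numpy as np

def initial_prob(alpha_loc, alpha_glo, Phi_I):
    """Eqs. (16)-(17): p_I(x) proportional to
    exp(alpha_loc[x] + Phi_I[x] @ alpha_glo), normalized over all states."""
    s = alpha_loc + Phi_I @ alpha_glo
    z = np.exp(s - s.max())        # subtract the max for numerical stability
    return z / z.sum()

Phi_I = np.array([[1.0], [0.0], [-1.0]])   # one hypothetical feature per state
p_I = initial_prob(np.zeros(3), np.array([0.5]), Phi_I)
```

A higher score yields a higher probability, so with a positive global weight the state with the largest feature value is the most likely trip origin.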
Similarly, a p-transition probability model with parameter η ≡ [η^{loc⊤}, η₁^{glo⊤}, η₂^{glo⊤}]⊤ is given as

$$p_{T\theta}(x' \mid x) \equiv \begin{cases} \exp(s_T(x, x'; \eta)) \Big/ \sum_{y \in \mathcal{X}_x} \exp(s_T(x, y; \eta)) & \text{if } (x, x') \in \mathcal{X} \times \mathcal{X}_x, \\ 0 & \text{otherwise,} \end{cases} \qquad (18)$$

where 𝒳_x is the set of states connected from x, and s_T(x, x′; η) is a state-to-state score function defined as

$$s_T(x, x'; \eta) \equiv \eta^{loc}_{(x,x')} + \phi_T(x')^\top \eta_1^{glo} + \phi(x, x')^\top \eta_2^{glo}, \quad (x, x') \in \mathcal{X} \times \mathcal{X}_x,$$

where η^{loc}_{(x,x′)} is the element of η^{loc} (∈ ℝ^{Σ_{x∈𝒳} |𝒳_x|}) corresponding to the transition from x to x′, and φ_T(x) and φ(x, x′) are feature vectors. For a road network, φ_T(x) may be defined based on the type of the road segment x, and φ(x, x′) based on the angle between x and x′. The linear combinations with the global parameters η₁^{glo} and η₂^{glo} can represent drivers' preferences, such as how much drivers prefer major roads or straight routes to others.

Note that p_{Iθ}(x) and p_{Tθ}(x′ | x) as presented in this section can be differentiated analytically. Hence, F_θ in Eq. (11), ∇_{θ_i} log π_θ in Eq. (12), and ∇_{θ_i} h_θ in Eq. (15) can all be computed efficiently.
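The transition model of Eq. (18) is the same softmax construction restricted to the successor set X_x; states not connected from x get probability zero. A sketch (the indices, feature values, and tiny network are our own illustration):

```python
import numpy as np

def transition_row(x, succ, eta_loc_row, eta1_glo, eta2_glo, Phi_T, Phi_pair):
    """One row p_T(. | x) of Eq. (18): softmax over the scores
    s_T(x, x') = eta_loc + Phi_T[x'] @ eta1 + Phi_pair[x, x'] @ eta2, x' in X_x."""
    s = eta_loc_row + Phi_T[succ] @ eta1_glo + Phi_pair[x, succ] @ eta2_glo
    z = np.exp(s - s.max())
    row = np.zeros(Phi_T.shape[0])
    row[succ] = z / z.sum()            # mass only on the successor set
    return row

Phi_T = np.array([[1.0], [-1.0], [0.0]])   # e.g., a road-category score
Phi_pair = np.zeros((3, 3, 1))             # e.g., cosine of the turn angle
row0 = transition_row(0, np.array([1, 2]), np.zeros(2),
                      np.array([0.3]), np.array([0.0]), Phi_T, Phi_pair)
```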
6 Experiments

6.1 Experiment on synthetic data
To study the sensitivity of the performance of our algorithm to the ratio of observable states, we applied it to randomly synthesized inverse problems of 100-state Markov chains with a varying number of observable states, |𝒳_o| ∈ {5, 10, 20, 35, 50, 70, 90}. The linkages between states were randomly generated in the same way as in [19]. The values of p_I and p_T were determined in two stages. First, the basic initial probabilities, p_I°, and the basic transition probabilities, p_T°, were determined based on Eqs. (16) and (18), where every element of α, η, φ_I(x), φ_T(x), and φ(x, x′) was drawn independently from the normal distribution N(0, 1²). Then we added noise to p_I° and p_T°, which are ideal for our algorithm, by using the Dirichlet distribution Dir, such that p_I = 0.7 p_I° + 0.3ε with ε ∼ Dir(0.3 · 1_{|𝒳|}). Then we sampled the visiting frequencies f(x) and the hitting rates g(x, x′) for every x, x′ ∈ 𝒳_o from this synthesized Markov chain.

We used Eqs. (16) and (18) for the models and Eq. (7) for the objective of our method. In Eq. (7), we set λ = 0.1 and R(θ) = ‖θ‖₂², and κ was determined by cross-validation. We evaluated the quality of our solution with the relative mean absolute error (RMAE),

$$\mathrm{RMAE} = \frac{1}{|\mathcal{X} \setminus \mathcal{X}_o|} \sum_{x \in \mathcal{X} \setminus \mathcal{X}_o} \frac{|f(x) - \hat{c}\, \pi_\theta(x)|}{\max\{f(x), 1\}},$$

where ĉ is a scaling value given by ĉ = (1/|𝒳_o|) Σ_{x∈𝒳_o} f(x). As a baseline method, we use Nadaraya–Watson kernel regression (NWKR) [8], whose kernel is computed based on the number of hops in the minimum path between two states. Note that the NWKR cannot use g(x, x′) as an input, because this is a regression problem on f(x). Hence, for a fair comparison, we also applied a variant of our method that does not use g(x, x′).
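The evaluation metric can be sketched as below. This is our own helper; the scaling constant c_hat follows the definition stated in the text, the mean of f over the observed states:

```python
import numpy as np

def rmae(f, pi_est, observed):
    """Relative mean absolute error over the unobserved states, comparing
    counts f(x) against the rescaled stationary estimate c_hat * pi_est(x)."""
    n = len(f)
    hidden = np.setdiff1d(np.arange(n), observed)
    c_hat = f[observed].mean()
    err = np.abs(f[hidden] - c_hat * pi_est[hidden]) / np.maximum(f[hidden], 1.0)
    return err.mean()

f = np.array([10.0, 20.0, 30.0, 40.0])
perfect = f / f[:2].mean()     # here c_hat = 15, so c_hat * perfect == f
```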
Figure 2 (A) shows the means and standard deviations of the RMAEs. The proposed method gives clearly better performance than the NWKR. This is mainly because the NWKR assumes that all propagations of an observation from a link to another connected link are equally weighted, whereas our method incorporates these weights through the transition probabilities.
6.2 Experiment on real-world traffic data
We tested our method in a city-wide traffic-monitoring task, as shown in Fig. 1. The goal is to estimate the traffic volume along an arbitrary road segment (or link of a network), given observed traffic volumes on a limited number of the links, where a link corresponds to a state x of M(θ), and the traffic volume along x corresponds to f(x) in Eq. (5). The traffic volumes along the observable links were reliably estimated from real-world web-camera images captured in Nairobi, Kenya [2,
15], while we did not use the hitting rate g(x, x′) here because of its unavailability.

Figure 2: (A) Comparison of the RMAE on the synthetic task between our methods (the proposed method, and the proposed method with no use of g) and the NWKR (baseline method). (B) Traffic volumes on a city-center map of Nairobi, Kenya; I: web-camera observations (colored); II: traffic volumes estimated by our method. (C) Comparison between the NWKR (RMAE: 1.01 ± 0.917) and our method (RMAE: 0.517 ± 0.669) on the real traffic-volume prediction problem.

Note that this task
is similar to network tomography [27, 30] or link-cost prediction [32, 14]. However, unlike network tomography, we need to infer all of the link traffic volumes instead of source-destination demands. Unlike link-cost prediction, our inputs are stationary observations instead of trajectories. Again, we use the NWKR as the baseline method. The road network and the web-camera observations are shown in Fig. 2 (B)-I. While the total number of links was 1,497, the number of links with observations was only 52 (about 3.5%). We used the parametric models in Section 5, where φ_T(x) ∈ [−1, 1] was set based on the road category of x, such that primary roads have a higher value than secondary roads [22], and φ(x, x′) ∈ [−1, 1] was the cosine of the angle between x and x′. However, we omitted the terms of φ_I(x) in Eq. (17).

Figure 2 (B)-II shows an example of our results, where the red and yellow roads are most congested while the traffic on the blue roads is flowing smoothly. The congested roads from our analysis are consistent with those from a local traffic survey report [13]. Figure 2 (C) shows the comparison between predicted and observed traffic volumes. In the figures, the 45° line corresponds to perfect agreement between the actual and predicted values. To evaluate accuracy, we employed leave-one-out cross-validation. We can see that the proposed method gives good performance. This is rather surprising, because the rate of observed links is limited to only about 3.5 percent.
7 Conclusion
We have defined a novel inverse problem of a Markov chain, in which we infer the probabilities of the initial states and the transitions, using a limited amount of information that we can obtain by observing the Markov chain at a small number of states. We have proposed an effective objective function for this problem as well as an algorithm based on the natural gradient.

Using real-world data, we have demonstrated that our approach is useful for a traffic monitoring system that monitors the traffic volume at a limited number of locations. From these observations, the Markov chain model is inferred, which in turn can be used to deduce the traffic volume at any location. Surprisingly, even when the observations are made at only several percent of the locations, the proposed method can successfully infer the traffic volume at unobserved locations.

Further analysis of the proposed method is necessary to better understand its properties and effectiveness. In particular, our future work includes an analysis of model identifiability and empirical studies with other applications, such as logistics and economic system modeling.
Acknowledgments
The authors thank Dr. R. Morris, Dr. R. Raymond, and Mr. T. Katsuki for fruitful discussion.
References
[1] P. Abbeel and A. Y. Ng. Apprenticeship learning via inverse reinforcement learning. In Proc. of International Conference on Machine Learning, 2004.
[2] AccessKenya.com. http://traffic.accesskenya.com/.
[3] S. Amari. Natural gradient works efficiently in learning. Neural Computation, 10(2):251–276, 1998.
[4] S. Amari and H. Nagaoka. Methods of Information Geometry. Oxford University Press, 2000.
[5] S. Amari, H. Park, and K. Fukumizu. Adaptive method of realizing natural gradient learning for multilayer perceptrons. Neural Computation, 12(6):1399–1409, 2000.
[6] H. Balzter. Markov chain models for vegetation dynamics. Ecological Modelling, 126(2-3):139–154, 2000.
[7] J. Baxter and P. L. Bartlett. Infinite-horizon policy-gradient estimation. Journal of Artificial Intelligence Research, 15:319–350, 2001.
[8] C. M. Bishop. Pattern Recognition and Machine Learning. Springer, 2006.
[9] H. H. Bui, S. Venkatesh, and G. West. On the recognition of abstract Markov policies. In Proc. of AAAI Conference on Artificial Intelligence, pages 524–530, 2000.
[10] S. L. Chang, L. S. Chen, Y. C. Chung, and S. W. Chen. Automatic license plate recognition. IEEE Transactions on Intelligent Transportation Systems, 5(1):42–53, 2004.
[11] E. Crisostomi, S. Kirkland, and R. Shorten. A Google-like model of road network dynamics and its application to regulation and control. International Journal of Control, 84(3):633–651, 2011.
[12] M. Gamon and A. C. König. Navigation patterns from and to social media. In Proc. of AAAI Conference on Weblogs and Social Media, 2009.
[13] J. E. Gonzales, C. C. Chavis, Y. Li, and C. F. Daganzo. Multimodal transport in Nairobi, Kenya: Insights and recommendations with a macroscopic evidence-based model. In Proc. of Transportation Research Board 90th Annual Meeting, 2011.
[14] T. Idé and M. Sugiyama. Trajectory regression on road networks. In Proc. of AAAI Conference on Artificial Intelligence, pages 203–208, 2011.
[15] T. Katasuki, T. Morimura, and T. Idé. Bayesian unsupervised vehicle counting. Technical Report RT0951, IBM Research, 2013.
[16] D. Levin, Y. Peres, and E. Wilmer. Markov Chains and Mixing Times. American Mathematical Society, 2008.
[17] D. MacKay. Information Theory, Inference, and Learning Algorithms. Cambridge University Press, 2003.
[18] T. Morimura and S. Kato. Statistical origin-destination generation with multiple sources. In Proc. of International Conference on Pattern Recognition, pages 283–290, 2012.
[19] T. Morimura, E. Uchibe, J. Yoshimoto, and K. Doya. A generalized natural actor-critic algorithm. In Proc. of Advances in Neural Information Processing Systems, volume 22, 2009.
[20] A. Y. Ng and S. Russell. Algorithms for inverse reinforcement learning. In Proc. of International Conference on Machine Learning, 2000.
[21] J. R. Norris. Markov Chains. Cambridge University Press, 1998.
[22] OpenStreetMap. http://wiki.openstreetmap.org/.
[23] C. C. Pegels and A. E. Jelmert. An evaluation of blood-inventory policies: A Markov chain application. Operations Research, 18(6):1087–1098, 1970.
[24] J. A. Quinn and R. Nakibuule. Traffic flow monitoring in crowded cities. In Proc. of AAAI Spring Symposium on Artificial Intelligence for Development, 2010.
[25] C. M. Roberts. Radio frequency identification (RFID). Computers & Security, 25(1):18–26, 2006.
[26] S. M. Ross. Stochastic Processes. John Wiley & Sons Inc, 1996.
[27] S. Santini. Analysis of traffic flow in urban areas using web cameras. In Proc. of IEEE Workshop on Applications of Computer Vision, pages 140–145, 2000.
[28] R. R. Sarukkai. Link prediction and path analysis using Markov chains. Computer Networks, 33(1-6):377–386, 2000.
[29] G. Tauchen. Finite state Markov-chain approximations to univariate and vector autoregressions. Economics Letters, 20(2):177–181, 1986.
[30] Y. Zhang, M. Roughan, C. Lund, and D. Donoho. An information-theoretic approach to traffic matrix estimation. In Proc. of Conference on Applications, Technologies, Architectures, and Protocols for Computer Communications, pages 301–312. ACM, 2003.
[31] J. Zhu, J. Hong, and J. G. Hughes. Using Markov chains for link prediction in adaptive Web sites. In Proc. of Soft-Ware 2002: Computing in an Imperfect World, volume 2311, pages 60–73. Springer, 2002.
[32] B. D. Ziebart, A. L. Maas, J. A. Bagnell, and A. K. Dey. Maximum entropy inverse reinforcement learning. In Proc. of AAAI Conference on Artificial Intelligence, pages 1433–1438, 2008.
Robust Data-Driven Dynamic Programming
Daniel Kuhn
École Polytechnique Fédérale de Lausanne
CH-1015 Lausanne, Switzerland
[email protected]
Grani A. Hanasusanto
Imperial College London
London SW7 2AZ, UK
[email protected]
Abstract
In stochastic optimal control the distribution of the exogenous noise is typically
unknown and must be inferred from limited data before dynamic programming
(DP)-based solution schemes can be applied. If the conditional expectations in the
DP recursions are estimated via kernel regression, however, the historical sample
paths enter the solution procedure directly as they determine the evaluation points
of the cost-to-go functions. The resulting data-driven DP scheme is asymptotically
consistent and admits an efficient computational solution when combined with
parametric value function approximations. If training data is sparse, however, the
estimated cost-to-go functions display a high variability and an optimistic bias,
while the corresponding control policies perform poorly in out-of-sample tests. To
mitigate these small sample effects, we propose a robust data-driven DP scheme,
which replaces the expectations in the DP recursions with worst-case expectations
over a set of distributions close to the best estimate. We show that the arising min-max problems in the DP recursions reduce to tractable conic programs. We also
demonstrate that the proposed robust DP algorithm dominates various non-robust
schemes in out-of-sample tests across several application domains.
1
Introduction
We consider a stochastic optimal control problem in discrete time with continuous state and action
spaces. At any time t the state of the underlying system has two components. The endogenous state
$s_t \in \mathbb{R}^{d_1}$ captures all decision-dependent information, while the exogenous state $\xi_t \in \mathbb{R}^{d_2}$ captures
the external random disturbances. Conditional on $(s_t, \xi_t)$ the decision maker chooses a control
action $u_t \in U_t \subseteq \mathbb{R}^m$ and incurs a cost $c_t(s_t, \xi_t, u_t)$. From time $t$ to $t+1$ the system then migrates
to a new state $(s_{t+1}, \xi_{t+1})$. Without much loss of generality we assume that the endogenous state
obeys the recursion $s_{t+1} = g_t(s_t, u_t, \xi_{t+1})$, while the evolution of the exogenous state can be
modeled by a Markov process. Note that even if the exogenous state process has finite memory, it
can be reduced as an equivalent Markov process on a higher-dimensional space. Thus, the Markov
assumption is unrestrictive for most practical purposes. By Bellman's principle of optimality, a
decision maker aiming to minimize the expected cumulative costs solves the dynamic program

$$\begin{aligned}
V_t(s_t, \xi_t) = \min_{u_t \in U_t} \;\; & c_t(s_t, \xi_t, u_t) + \mathbb{E}[V_{t+1}(s_{t+1}, \xi_{t+1}) \mid \xi_t] \\
\text{s.t.} \;\; & s_{t+1} = g_t(s_t, u_t, \xi_{t+1})
\end{aligned} \qquad (1)$$

backwards for $t = T, \ldots, 1$ with $V_{T+1} \equiv 0$; see e.g. [1]. The cost-to-go function $V_t(s_t, \xi_t)$ quantifies the minimum expected future cost achievable from state $(s_t, \xi_t)$ at time $t$.
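For intuition, the backward recursion (1) can be sketched on a small discretized toy instance. The quadratic stage cost, the additive dynamics, and the three-point disturbance below are our own illustrative choices, not taken from the paper:

```python
import numpy as np

# Toy instance of recursion (1): scalar state, stage cost s^2 + u^2,
# dynamics s' = s + u + xi, and three equally likely disturbances
# standing in for the conditional expectation E[. | xi_t].
T = 3
xi_support = np.array([-1.0, 0.0, 1.0])      # disturbance scenarios
s_grid = np.linspace(-2.0, 2.0, 41)          # endogenous state grid
u_grid = np.linspace(-1.0, 1.0, 21)          # action grid

V = np.zeros(len(s_grid))                    # V_{T+1} = 0
for t in range(T, 0, -1):                    # backwards for t = T, ..., 1
    V_new = np.empty_like(V)
    for j, s in enumerate(s_grid):
        costs = []
        for u in u_grid:
            s_next = np.clip(s + u + xi_support, s_grid[0], s_grid[-1])
            cont = np.interp(s_next, s_grid, V).mean()   # E[V_{t+1}]
            costs.append(s**2 + u**2 + cont)
        V_new[j] = min(costs)                # Bellman minimization
    V = V_new
```

On this symmetric instance the resulting cost-to-go is symmetric in the state and smallest at the origin, as one would expect from the quadratic stage cost.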
Stochastic optimal control has numerous applications in engineering and science, e.g. in supply
chain management, power systems scheduling, behavioral neuroscience, asset allocation, emergency
service provisioning, etc. [1, 2]. There is often a natural distinction between endogenous and exogenous states. For example, in inventory control the inventory level can naturally be interpreted as the
endogenous state, while the uncertain demand represents the exogenous state.
In spite of their exceptional modeling power, dynamic programming problems of the above type
suffer from two major shortcomings that limit their practical applicability. First, the backward induction step (1) is computationally burdensome due to the intractability to evaluate the cost-to-go
function $V_t$ for the continuum of all states $(s_t, \xi_t)$, the intractability to evaluate the multivariate
conditional expectations, and the intractability to optimize over the continuum of all control actions
$u_t$ [2]. Secondly, even if the dynamic programming recursions (1) could be computed efficiently,
there is often substantial uncertainty about the conditional distribution of $\xi_{t+1}$ given $\xi_t$. Indeed,
the distribution of the exogenous states is typically unknown and must be inferred from historical
observations. If training data is sparse, as is often the case in practice, it is impossible to estimate
this distribution reliably. Thus, we lack essential information to evaluate (1) in the first place.
In this paper, we assume that only a set of N sample trajectories of the exogenous state is given,
and we use kernel regression in conjunction with parametric value function approximations to estimate the conditional expectation in (1). Thus, we approximate the conditional distribution of $\xi_{t+1}$
given $\xi_t$ by a discrete distribution whose discretization points are given by the historical samples,
while the corresponding conditional probabilities are expressed in terms of a normalized Nadaraya-Watson (NW) kernel function. This data-driven dynamic programming (DDP) approach is conceptually appealing and avoids an artificial separation of estimation and optimization steps. Instead, the
historical samples are used directly in the dynamic programming recursions. It is also asymptotically consistent in the sense that the true conditional expectation is recovered when N grows [3].
Moreover, DDP computes the value functions only on the N sample trajectories of the exogenous
state, thereby mitigating one of the intractabilities of classical dynamic programming.
Although conceptually and computationally appealing, DDP-based policies exhibit a poor performance in out-of-sample tests if the training data is sparse. In this case the estimate of the conditional
expectation in (1) is highly noisy (but largely unbiased). The estimate of the corresponding cost-to-go value inherits this variability. However, it also displays a downward bias caused by the minimization over $u_t$. This phenomenon is reminiscent of overfitting effects in statistics. As estimation
errors in the cost-to-go functions are propagated through the dynamic programming recursions, the
bias grows over time and thus incentivizes poor control decisions in the early time periods.
The detrimental overfitting effects observed in DDP originate from ignoring distributional uncertainty: DDP takes the estimated discrete conditional distribution of $\xi_{t+1}$ at face value and ignores
the possibility of estimation errors. In this paper we propose a robust data-driven dynamic programming (RDDP) approach that replaces the expectation in (1) by a worst-case expectation over
a set of distributions close to the nominal estimate in view of the $\chi^2$-distance. We will demonstrate that this regularization reduces both the variability and the bias in the approximate cost-to-go
functions and that RDDP dominates ordinary DDP as well as other popular benchmark algorithms
in out-of-sample tests. Leveraging on recent results in robust optimization [4] and value function
approximation [5] we will also show that the nested min-max problems arising in RDDP typically
reduce to conic optimization problems that admit efficient solution with interior point algorithms.
Robust value iteration methods have recently been studied in robust Markov decision process (MDP)
theory [6, 7, 8, 9]. However, these algorithms are not fundamentally data-driven as their primitives
are uncertainty sets for the transition kernels instead of historical observations. Moreover, they
assume finite state and action spaces. Data-driven approaches to dynamic decision making are routinely studied in approximate dynamic programming and reinforcement learning [10, 11, 12], but
these methods are not robust (in a worst-case sense) with respect to distributional uncertainty and
could therefore be susceptible to overfitting effects. The robust value iterations in RDDP are facilitated by combining convex parametric function approximation methods (to model the dependence
on the endogenous state) with nonparametric kernel regression techniques (for the dependence on
the exogenous state). This is in contrast to most existing methods, which either rely exclusively
on parametric function approximations [10, 11, 13] or nonparametric ones [12, 14, 15, 16]. Due
to the convexity in the endogenous state, RDDP further benefits from mathematical programming
techniques to optimize over high-dimensional continuous action spaces without requiring any form
of discretization.
Notation. We use lower-case bold face letters to denote vectors and upper-case bold face letters
to denote matrices. We define $\mathbf{1} \in \mathbb{R}^n$ as the vector with all elements equal to 1, while $\Delta = \{p \in \mathbb{R}_+^n : \mathbf{1}^\top p = 1\}$ denotes the probability simplex in $\mathbb{R}^n$. The dimensions of $\mathbf{1}$ and $\Delta$ will usually be
clear from the context. The space of symmetric matrices of dimension $n$ is denoted by $\mathbb{S}^n$. For any
two matrices $X, Y \in \mathbb{S}^n$, the relation $X \succeq Y$ implies that $X - Y$ is positive semidefinite.
2
Data-driven dynamic programming
Assume from now on that the distribution of the exogenous states is unknown and that we are only
given $N$ observation histories $\{\xi_t^i\}_{t=1}^T$ for $i = 1, \ldots, N$. This assumption is typically well justified
in practice. In this setting, the conditional expectation in (1) cannot be evaluated exactly. However,
it can be estimated, for instance, via Nadaraya-Watson (NW) kernel regression [17, 18]:
$$\mathbb{E}[V_{t+1}(s_{t+1}, \xi_{t+1}) \mid \xi_t] \;\approx\; \sum_{i=1}^{N} q_t^i(\xi_t)\, V_{t+1}(s_{t+1}^i, \xi_{t+1}^i) \qquad (2)$$
The conditional probabilities in (2) are set to
$$q_t^i(\xi_t) = \frac{K_H(\xi_t - \xi_t^i)}{\sum_{k=1}^{N} K_H(\xi_t - \xi_t^k)} \qquad (3)$$

where the kernel function $K_H(\xi) = |H|^{-1/2} K(|H|^{-1/2} \xi)$ is defined in terms of a symmetric multivariate density $K$ and a positive definite bandwidth matrix $H$. For a large bandwidth, the conditional
probabilities $q_t^i(\xi_t)$ converge to $\frac{1}{N}$, in which case (2) reduces to the (unconditional) sample average. Conversely, an extremely small bandwidth causes most of the probability mass to be assigned
to the sample point closest to $\xi_t$. In the following we set the bandwidth matrix $H$ to its best estimate assuming that the historical observations $\{\xi_t^i\}_{i=1}^N$ follow a Gaussian distribution; see [19].
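For concreteness, the weights (3) with a Gaussian kernel can be computed as follows. This is a minimal sketch with our own variable names; the kernel's normalizing constant cancels in the ratio (3) and is therefore omitted:

```python
import numpy as np

def nw_weights(xi, samples, H):
    """Nadaraya-Watson weights q_t^i(xi) from (3) with a Gaussian kernel.

    xi      : (d,)   query point for the exogenous state
    samples : (N, d) historical observations xi_t^i
    H       : (d, d) positive definite bandwidth matrix
    """
    Hinv = np.linalg.inv(H)
    diffs = samples - xi                               # (N, d)
    # log K_H(x) up to an additive constant that cancels in the ratio
    logk = -0.5 * np.einsum('nd,de,ne->n', diffs, Hinv, diffs)
    w = np.exp(logk - logk.max())                      # numerically stable
    return w / w.sum()

# toy usage: weights sum to one and favor the nearest sample
rng = np.random.default_rng(0)
samples = rng.normal(size=(50, 2))
q = nw_weights(np.zeros(2), samples, 0.25 * np.eye(2))
```

With a large bandwidth the returned weights flatten toward 1/N, and with a small bandwidth they concentrate on the nearest sample, mirroring the limiting behavior described above.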
Substituting (2) into (1) results in the data-driven dynamic programming (DDP) formulation

$$\begin{aligned}
V_t^d(s_t, \xi_t) = \min_{u_t \in U_t} \;\; & c_t(s_t, \xi_t, u_t) + \sum_{i=1}^{N} q_t^i(\xi_t)\, V_{t+1}^d(s_{t+1}^i, \xi_{t+1}^i) \\
\text{s.t.} \;\; & s_{t+1}^i = g_t(s_t, u_t, \xi_{t+1}^i) \quad \forall i
\end{aligned} \qquad (4)$$

with terminal condition $V_{T+1}^d \equiv 0$. The idea to use kernel-based approximations to estimate the
expected future costs is appealing due to its simplicity. Such approximations have been studied, for
example, in the context of stochastic optimization with state observation [20]. However, to the best
of our knowledge they have not yet been used in a fully dynamic setting, maybe for the reasons to be
outlined in § 3. On the positive side, DDP with NW kernel regression is asymptotically consistent for
large $N$ under a suitable scaling of the bandwidth matrix and under a mild boundedness assumption
on $V_{t+1}^d$ [3]. Moreover, DDP evaluates the cost-to-go function of the next period only at the sample
points and thus requires no a-priori discretization of the exogenous state space, thus mitigating one
of the intractabilities of classical dynamic programming.
3
Robust data-driven dynamic programming
If the training data is sparse, the NW estimate (2) of the conditional expectation in (4) typically
exhibits a small bias and a high variability. Indeed, the variance of the estimator scales as $O(\frac{1}{N})$
[21]. The DDP value function $V_t^d$ inherits this variability. However, it also displays a significant
optimistic bias. The following stylized example illustrates this phenomenon.
Example 3.1 Assume that $d_1 = 1$, $d_2 = m = 5$, $c_t(s_t, \xi_t, u_t) = 0$, $g_t(s_t, u_t, \xi_{t+1}) = \xi_{t+1}^\top u_t$,
$U_t = \{u \in \mathbb{R}^m : \mathbf{1}^\top u = 1\}$, and $V_{t+1}(s_{t+1}, \xi_{t+1}) = \frac{1}{10} s_{t+1}^2 - s_{t+1}$. In order to facilitate a
controlled experiment, we also assume that $(\xi_t, \xi_{t+1})$ follows a multivariate Gaussian distribution,
where each component has unit mean and variance. The correlation between $\xi_{t,k}$ and $\xi_{t+1,k}$ is set
to 30%. All other correlations are zero. Our aim is to solve (1) and to estimate $V_t(s_t, \xi_t)$ at $\xi_t = \mathbf{1}$.
By permutation symmetry, the optimal decision under full distributional knowledge is $u_t^\star = \frac{1}{5}\mathbf{1}$.
An analytical calculation then yields the true cost-to-go value $V_t(s_t, \mathbf{1}) = -0.88$. In the following
we completely ignore our distributional knowledge. Instead, we assume that only $N$ independent
samples $(\xi_t^i, \xi_{t+1}^i)$ are given, $i = 1, \ldots, N$. To showcase the high variability of NW estimation,
we fix the decision $u_t^\star$ and use (2) to estimate its expected cost conditional on $\xi_t = \mathbf{1}$. Figure 1
(left) shows that this estimator is unbiased but fluctuates within ±5% around its median even for
$N = 500$. Next, we use (4) to estimate $V_t^d(s_t, \mathbf{1})$, that is, the expected cost of the best decision
obtained without distributional information. Figure 1 (middle) shows that this cost estimator is even
more noisy than the one for a fixed decision, exhibits a significant downward bias and converges
slowly as N grows.
[Figure 1: three panels, "Estimated cost of true optimal decision", "Estimated cost of DDP decision" and "Estimated cost of RDDP decision", each plotting the median and the 10th and 90th percentiles of the cost estimate against $N$, together with the true optimal cost.]
Figure 1: Estimated costs of true optimal and data-driven decisions. Note the different scales. All
reported values represent averages over 200 independent simulation runs.
The downward bias in $V_t^d$ as an estimator for the true value function $V_t$ is the consequence of an
overfitting effect, which can be explained as follows. Setting $V_{t+1} \approx V_{t+1}^d$, we find

$$\begin{aligned}
V_t(s_t, \xi_t) &= \min_{u_t \in U_t} \; c_t(s_t, \xi_t, u_t) + \mathbb{E}\big[V_{t+1}^d(g_t(s_t, u_t, \xi_{t+1}), \xi_{t+1}) \,\big|\, \xi_t\big] \\
&\approx \min_{u_t \in U_t} \; c_t(s_t, \xi_t, u_t) + \mathbb{E}\Big[\textstyle\sum_{i=1}^{N} q_t^i(\xi_t)\, V_{t+1}^d(g_t(s_t, u_t, \xi_{t+1}^i), \xi_{t+1}^i) \,\Big|\, \xi_t\Big] \\
&\geq \mathbb{E}\Big[\min_{u_t \in U_t} \; c_t(s_t, \xi_t, u_t) + \textstyle\sum_{i=1}^{N} q_t^i(\xi_t)\, V_{t+1}^d(g_t(s_t, u_t, \xi_{t+1}^i), \xi_{t+1}^i) \,\Big|\, \xi_t\Big].
\end{aligned}$$

The relation in the second line uses our observation that the NW estimator of the expected cost
associated with any fixed decision $u_t$ is approximately unbiased. Here, the expectation is with
respect to the (independent and identically distributed) sample trajectories used in the NW estimator.
The last line follows from the conditional Jensen inequality. Note that the expression inside the
conditional expectation coincides with $V_t^d(s_t, \xi_t)$. This argument suggests that $V_t^d(s_t, \xi_t)$ must
indeed underestimate $V_t(s_t, \xi_t)$ on average. We emphasize that all systematic estimation errors of
this type accumulate as they are propagated through the dynamic programming recursions.
To mitigate the detrimental overfitting effects, we propose a regularization that reduces the decision
maker's overconfidence in the weights $q_t(\xi_t) = [q_t^1(\xi_t) \cdots q_t^N(\xi_t)]^\top$. Thus, we allow the conditional probabilities used in (4) to deviate from their nominal values $q_t(\xi_t)$ up to a certain degree.
This is achieved by considering uncertainty sets $\Theta(q)$ that contain all weight vectors sufficiently
close to some nominal weight vector $q \in \Delta$ with respect to the $\chi^2$-distance for histograms:

$$\Theta(q) = \Big\{ p \in \Delta : \sum_{i=1}^{N} (p_i - q_i)^2 / p_i \leq \rho \Big\} \qquad (5)$$

The $\chi^2$-distance belongs to the class of $\phi$-divergences [22], which also includes the Kullback-Leibler
distances. Our motivation for using uncertainty sets of the type (5) is threefold. First, $\Theta(q)$ is
determined by a single size parameter $\rho$, which can easily be calibrated, e.g., via cross-validation.
Secondly, the $\chi^2$-distance guarantees that any distribution $p \in \Theta(q)$ assigns nonzero probability to
all scenarios that have nonzero probability under the nominal distribution $q$. Finally, the structure of
$\Theta(q)$ implied by the $\chi^2$-distance has distinct computational benefits that become evident in § 4.

Allowing the conditional probabilities in (4) to range over the uncertainty set $\Theta(q_t(\xi_t))$ results in
the robust data-driven dynamic programming (RDDP) formulation

$$\begin{aligned}
V_t^r(s_t, \xi_t) = \min_{u_t \in U_t} \;\; & c_t(s_t, \xi_t, u_t) + \max_{p \in \Theta(q_t(\xi_t))} \sum_{i=1}^{N} p_i\, V_{t+1}^r(s_{t+1}^i, \xi_{t+1}^i) \\
\text{s.t.} \;\; & s_{t+1}^i = g_t(s_t, u_t, \xi_{t+1}^i) \quad \forall i
\end{aligned} \qquad (6)$$

with terminal condition $V_{T+1}^r \equiv 0$. Thus, each RDDP recursion involves the solution of a robust
optimization problem [4], which can be viewed as a game against 'nature' (or a malicious adversary):
for every action $u_t$ chosen by the decision maker, nature selects the corresponding worst-case weight
vector from within $p \in \Theta(q_t(\xi_t))$. By anticipating nature's moves, the decision maker is forced
to select more conservative decisions that are less susceptible to amplifying estimation errors in the
nominal weights $q_t(\xi_t)$. The level of robustness of the RDDP scheme can be steered by selecting
the parameter $\rho$. We suggest to choose $\rho$ large enough such that the envelope of all conditional
CDFs of $\xi_{t+1}$ implied by the weight vectors in $\Theta(q_t(\xi_t))$ covers the true conditional CDF with high
confidence (Figure 2). The following example illustrates the potential benefits of the RDDP scheme.
Example 3.2 Consider again Example 3.1. Assuming that only the samples $\{\xi_t^i, \xi_{t+1}^i\}_{i=1}^N$ are
known, we can compute a worst-case optimal decision using (6). Fixing this decision, we can then
use (2) to estimate its expected cost conditional on $\xi_t = \mathbf{1}$. Note that this cost is generically different
from $V_t^r(s_t, \mathbf{1})$. Figure 1 (right) shows that the resulting cost estimator is less noisy and, perhaps
surprisingly, unbiased. Thus, it clearly dominates $V_t^d(s_t, \mathbf{1})$ as an estimator for the true cost-to-go
value $V_t(s_t, \mathbf{1})$ (which is not accessible in reality as it relies on full distributional information).
Robust optimization models with uncertainty sets of the type (5) have previously been studied in [23,
24]. However, these static models are fundamentally different in scope from our RDDP formulation.
RDDP seeks the worst-case probabilities of N historical samples of the exogenous state, using the
NW weights as nominal probabilities. In contrast, the static models in [23, 24] rely on a partition
of the uncertainty space into N bins. Worst-case probabilities are then assigned to the bins, whose
nominal probabilities are given by the empirical frequencies. This latter approach does not seem to
extend easily to our dynamic setting as it would be unclear where in each bin one should evaluate
the cost-to-go functions.
Instead of immunizing the DDP scheme against estimation errors in the conditional probabilities (as advocated here),
one could envisage other regularizations to mitigate the overfitting phenomena. For instance, one could construct
an uncertainty set for $\{\xi_{t+1}^i\}_{i=1}^N$ and seek control actions that are optimal in view of the worst-case sample points
within this set. However, this approach would lead to a harder robust optimization problem, where the search space
of the inner maximization has dimension $O(N d_2)$ (as opposed to $O(N)$ for RDDP). Moreover, this approach would
only be tractable if $V_{t+1}^r$ displayed a very regular (e.g., linear or quadratic) dependence on $\xi_{t+1}$. RDDP imposes no
such restrictions on the cost-to-go function; see § 4.

[Figure 2: Envelope of all conditional CDFs implied by weight vectors in $\Theta(q_t(\xi_t))$, shown together with the true CDF and the Nadaraya-Watson CDF.]
4
Computational solution procedure
In this section we demonstrate that RDDP is computationally tractable under a convexity assumption
and if we approximate the dependence of the cost-to-go functions on the endogenous state through
a piecewise linear or quadratic approximation architecture. This result immediately extends to the
DDP scheme of § 2 as the uncertainty set (5) collapses to a singleton for $\rho = 0$.

Assumption 4.1 For all $t = 1, \ldots, T$, the cost function $c_t$ is convex quadratic in $(s_t, u_t)$, the
transition function $g_t$ is affine in $(s_t, u_t)$, and the feasible set $U_t$ is second-order conic representable.

Under Assumption 4.1, $V_t^r(s_t, \xi_t)$ can be evaluated by solving a convex optimization problem.
Theorem 4.1 Suppose that Assumption 4.1 holds and that the cost-to-go function $V_{t+1}^r$ is convex in
the endogenous state. Then, (6) reduces to the following convex minimization problem:

$$\begin{aligned}
V_t^r(s_t, \xi_t) = \min \;\; & c_t(s_t, \xi_t, u_t) + \lambda\rho - \theta - 2\, q_t(\xi_t)^\top y + 2\lambda\, q_t(\xi_t)^\top \mathbf{1} \\
\text{s.t.} \;\; & u_t \in U_t, \quad \theta \in \mathbb{R}, \quad \lambda \in \mathbb{R}_+, \quad z, y \in \mathbb{R}^N \\
& V_{t+1}^r(g_t(s_t, u_t, \xi_{t+1}^i), \xi_{t+1}^i) \leq z_i \quad \forall i \\
& \sqrt{4 y_i^2 + (z_i + \theta)^2} \leq 2\lambda - z_i - \theta \quad \forall i \\
& z_i + \theta \leq \lambda \quad \forall i
\end{aligned} \qquad (7)$$
Corollary 4.1 If Assumption 4.1 holds, then RDDP preserves convexity in the exogenous state.
Thus, $V_t^r(s_t, \xi_t)$ is convex in $s_t$ whenever $V_{t+1}^r(s_{t+1}, \xi_{t+1})$ is convex in $s_{t+1}$.

Note that problem (7) becomes a tractable second-order cone program if $V_{t+1}^r$ is convex piecewise
linear or convex quadratic in $s_{t+1}$. Then, it can be solved efficiently with interior point algorithms.
Algorithm 1: Robust data-driven dynamic programming
Inputs: sample trajectories $\{s_t^k\}_{t=1}^T$ for $k = 1, \ldots, K$;
observation histories $\{\xi_t^i\}_{t=1}^{T+1}$ for $i = 1, \ldots, N$.
Initialization: let $\hat V_{T+1}^r(\cdot, \xi_{T+1}^i)$ be the zero function for all $i = 1, \ldots, N$.
for all $t = T, \ldots, 1$ do
  for all $i = 1, \ldots, N$ do
    for all $k = 1, \ldots, K$ do
      Let $\hat V_{t,k,i}^r$ be the optimal value of problem (7) with input $\hat V_{t+1}^r(\cdot, \xi_{t+1}^j)$ $\forall j$.
    end for
    Construct $\hat V_t^r(\cdot, \xi_t^i)$ from the interpolation points $\{(s_t^k, \hat V_{t,k,i}^r)\}_{k=1}^K$ as in (8a) or (8b).
  end for
end for
Outputs: approximate cost-to-go functions $\hat V_t^r(\cdot, \xi_t^i)$ for $i = 1, \ldots, N$ and $t = 1, \ldots, T$.
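The loop structure of Algorithm 1 can be sketched as follows; `solve_soc` and `fit_value` are hypothetical stand-ins for solving problem (7) and for the fitting step (8a)/(8b), respectively, and are not part of the paper's pseudocode:

```python
import numpy as np

def rddp_backward_induction(s_samples, xi_samples, solve_soc, fit_value):
    """Skeleton of Algorithm 1 (0-indexed time).

    s_samples  : (T, K)       endogenous sample trajectories s_t^k
    xi_samples : (T+1, N, d2) exogenous observation histories xi_t^i
    solve_soc  : callable (s, i, V_next) -> optimal value of problem (7)
    fit_value  : callable (knots, vals) -> fitted value function of s
    """
    T, K = s_samples.shape
    N = xi_samples.shape[1]
    V_next = [lambda s: 0.0] * N              # V^r_{T+1} = 0
    fitted = {}
    for t in range(T - 1, -1, -1):            # backwards in time
        V_curr = []
        for i in range(N):
            vals = np.array([solve_soc(s_samples[t, k], i, V_next)
                             for k in range(K)])
            f = fit_value(s_samples[t], vals)  # step (8a) or (8b)
            V_curr.append(f)
            fitted[(t, i)] = f
        V_next = V_curr
    return fitted

# dummy stand-ins: pretend (7) returns s^2 and fit piecewise linearly
knots = np.linspace(0.0, 1.0, 5)
fitted = rddp_backward_induction(
    np.tile(knots, (2, 1)), np.zeros((3, 4, 2)),
    solve_soc=lambda s, i, V_next: s**2,
    fit_value=lambda ks, vs: (lambda x: np.interp(x, ks, vs)))
```

The $N K$ inner solves at each period are independent of one another, which is what makes the parallel implementation mentioned in the concluding remarks straightforward.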
We now describe an algorithm that computes all cost-to-go functions $\{V_t^r\}_{t=1}^T$ approximately. Initially, we collect historical observation trajectories of the exogenous state $\{\xi_t^i\}_{t=1}^T$, $i = 1, \ldots, N$,
and generate sample trajectories of the endogenous state $\{s_t^k\}_{t=1}^T$, $k = 1, \ldots, K$, by simulating the
evolution of $s_t$ under a prescribed control policy along randomly selected exogenous state trajectories. Best results are achieved if the sample-generating policy is near-optimal. If no near-optimal
policy is known, an initial naive policy can be improved sequentially in a greedy fashion. The core
of the algorithm computes approximate value functions $\hat V_t^r$, which are piecewise linear or quadratic
in $s_t$, by backward induction on $t$. Iteration $t$ takes $\hat V_{t+1}^r$ as an input and computes the optimal value
$\hat V_{t,k,i}^r$ of the second-order cone program (7) for each sample state $(s_t^k, \xi_t^i)$. For any fixed $i$ we then
construct the function $\hat V_t^r(\cdot, \xi_t^i)$ from the interpolation points $\{(s_t^k, \hat V_{t,k,i}^r)\}_{k=1}^K$. If the endogenous
state is univariate ($d_1 = 1$), the following piecewise linear approximation is used:

$$\hat V_t^r(s_t, \xi_t^i) = \max_k \; \frac{s_t^k - s_t}{s_t^k - s_t^{k-1}}\, \hat V_{t,k-1,i}^r + \frac{s_t - s_t^{k-1}}{s_t^k - s_t^{k-1}}\, \hat V_{t,k,i}^r \qquad \text{(8a)}$$
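The max-of-chords form (8a) is easy to evaluate directly. A minimal sketch (our own helper, assuming strictly increasing knots):

```python
import numpy as np

def pwl_value(s, knots, vals):
    """Evaluate the convex piecewise linear fit (8a): the maximum over
    all chords through consecutive interpolation points.

    s     : scalar query for the endogenous state
    knots : (K,) increasing sample states s^1 < ... < s^K
    vals  : (K,) optimal values \\hat V at the knots
    """
    slopes = np.diff(vals) / np.diff(knots)           # chord slopes
    chords = vals[:-1] + slopes * (s - knots[:-1])    # extend each chord
    return chords.max()                               # max over k

knots = np.array([0.0, 1.0, 2.0, 3.0])
vals = np.array([3.0, 1.0, 1.5, 4.0])   # convex samples
v = pwl_value(1.5, knots, vals)
```

Because the maximum of affine functions is convex, this construction preserves the convexity in the endogenous state established in Corollary 4.1.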
In the multivariate case ($d_1 > 1$), we aim to find the convex quadratic function $\hat V_t^r(s_t, \xi_t^i) = s_t^\top M_i s_t + 2\, \mathbf{m}_i^\top s_t + m_i$ that best explains the given interpolation points in a least-squares sense.
This quadratic function can be computed efficiently by solving the following semidefinite program:

$$\begin{aligned}
\min \;\; & \sum_{k=1}^{K} \Big[ (s_t^k)^\top M_i s_t^k + 2\, \mathbf{m}_i^\top s_t^k + m_i - \hat V_{t,k,i}^r \Big]^2 \\
\text{s.t.} \;\; & M_i \in \mathbb{S}^{d_1}, \quad M_i \succeq 0, \quad \mathbf{m}_i \in \mathbb{R}^{d_1}, \quad m_i \in \mathbb{R}
\end{aligned} \qquad \text{(8b)}$$
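Without the semidefinite constraint $M_i \succeq 0$, the fit (8b) is an ordinary least-squares problem. The sketch below (our own code) recovers a quadratic exactly from noiseless samples; enforcing $M_i \succeq 0$ would additionally require an SDP solver such as SeDuMi:

```python
import numpy as np

def fit_quadratic(S, v):
    """Least-squares fit of V(s) = s'M s + 2 m's + c to the points
    (S[k], v[k]), as in (8b) but omitting the constraint M >= 0.

    S : (K, d) sample endogenous states
    v : (K,)   optimal values from problem (7)
    """
    K, d = S.shape
    iu = np.triu_indices(d)                    # upper-triangular index pairs
    quad = S[:, iu[0]] * S[:, iu[1]]           # products s_i s_j, i <= j
    X = np.hstack([quad, 2 * S, np.ones((K, 1))])
    coef, *_ = np.linalg.lstsq(X, v, rcond=None)
    M = np.zeros((d, d))
    M[iu] = coef[:iu[0].size]
    M = (M + M.T) / 2                          # symmetrize
    m, c = coef[iu[0].size:-1], coef[-1]
    return M, m, c

# exact recovery on noiseless quadratic data
rng = np.random.default_rng(1)
S = rng.normal(size=(40, 3))
M0 = np.array([[2.0, 0.5, 0.0], [0.5, 1.0, 0.0], [0.0, 0.0, 3.0]])
v = np.einsum('kd,de,ke->k', S, M0, S) + 2 * S @ np.ones(3) + 1.0
M, m, c = fit_quadratic(S, v)
```

If the interpolated values come from a convex cost-to-go function, the unconstrained least-squares solution is typically close to positive semidefinite, but only the SDP in (8b) guarantees it.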
Quadratic approximation architectures of the above type first emerged in approximate dynamic programming [5]. Once the function $\hat V_t^r(\cdot, \xi_t^i)$ is computed for all $i = 1, \ldots, N$, the algorithm proceeds
to iteration $t - 1$. A summary of the overall procedure is provided in Algorithm 1.
Remark 4.1 The RDDP algorithm remains valid if the feasible set $U_t$ depends on the state $(s_t, \xi_t)$
or if the control action $u_t$ includes components that are of the 'here-and-now' type (i.e., they are
chosen before $\xi_{t+1}$ is observed) as well as others that are of the 'wait-and-see' type (i.e., they are
chosen after $\xi_{t+1}$ has been revealed). In this setting, problem (7) becomes a two-stage stochastic
program [25] but remains efficiently solvable as a second-order cone program.
5
Experimental results
We evaluate the RDDP algorithm of ? 4 in the context of an index tracking and a wind energy
commitment application. All semidefinite programs are solved with SeDuMi [26] by using the
Yalmip [27] interface, while all linear and second-order cone programs are solved with CPLEX.
5.1
Index tracking
The objective of index tracking is to match the performance of a stock index as closely as possible
with a portfolio of other financial instruments. In our experiment, we aim to track the S&P 500
Table 1: Out-of-sample statistics of the sum of squared tracking errors (in ‰).

Statistic    | LSPI    | DDP     | RDDP
Mean         | 5.692   | 4.697   | 1.285
Std. dev.    | 11.699  | 15.067  | 2.235
90th prct.   | 14.597  | 9.048   | 2.851
Worst case   | 126.712 | 157.201 | 18.832

[Figure 3: Cumulative distribution function of the sum of squared tracking errors for LSPI, DDP and RDDP.]
index with a combination of the NASDAQ Composite, Russell 2000, S&P MidCap 400, and AMEX
Major Market indices. We set the planning horizon to T = 20 trading days (1 month).
Let $s_t \in \mathbb{R}_+$ be the value of the current tracking portfolio relative to the value of the S&P 500 on day $t$,
while $\xi_t \in \mathbb{R}_+^5$ denotes the vector of the total index returns (price relatives) from day $t - 1$ to day $t$.
The first component of $\xi_t$ represents the return of the S&P 500. The objective of index tracking is to
maintain $s_t$ close to 1 in a least-squares sense throughout the planning horizon, which gives rise to
the following dynamic program with terminal condition $V_{T+1} \equiv 0$:

$$\begin{aligned}
V_t(s_t, \xi_t) = \min \;\; & (1 - s_t)^2 + \mathbb{E}[V_{t+1}(s_{t+1}, \xi_{t+1}) \mid \xi_t] \\
\text{s.t.} \;\; & u \in \mathbb{R}_+^5, \quad \mathbf{1}^\top u = s_t, \quad u_1 = 0, \quad s_{t+1} = \xi_{t+1}^\top u / \xi_{t+1,1}
\end{aligned} \qquad (9)$$
Here, $u_i / s_t$ can be interpreted as the portion of the tracking portfolio that is invested in index $i$ on
day $t$. Our computational experiment is based on historical returns of the indices over 5440 days
from 26-Aug-1991 to 8-Mar-2013 (272 trading months). We solve the index tracking problem using
the DDP and RDDP algorithms (i.e., the algorithm of § 4 with $\rho = 0$ and $\rho = 10$, respectively)
as well as least-squares policy iteration (LSPI) [10]. As the endogenous state is univariate, DDP
and RDDP employ the piecewise linear approximation architecture (8a). LSPI solves an infinite-horizon variant of problem (9) with discount factor $\gamma = 0.9$, polynomial basis features of degree
3 and a discrete action space comprising 1,000 points sampled uniformly from the true continuous
action space. We train the algorithms on the first 80 and test on the remaining 192 trading months.
Table 1 reports several out-of-sample statistics of the sum of squared tracking errors. We find that
RDDP outperforms DDP and LSPI by a factor of 4-5 in view of the mean, the standard deviation
and the 90th percentile of the error distribution, and it outperforms the other algorithms by an order
of magnitude in view of the worst-case (maximum) error. Figure 3 further shows that the error
distribution generated by RDDP stochastically dominates those generated by DDP and LSPI.
5.2
Wind energy commitment
Next, we apply RDDP to the wind energy commitment problem proposed in [28, 29]. On every
day $t$, a wind energy producer chooses the energy commitment levels $x_t \in \mathbb{R}_+^{24}$ for the next 24
Table 2: Out-of-sample statistics of profit (in $100,000).

Site | Statistic  | Persistence | DDP     | RDDP
NC   | Mean       | 4.039       | 4.698   | 7.549
NC   | Std. dev.  | 3.964       | 6.338   | 5.133
NC   | 10th prct. | 0.524       | -1.463  | 1.809
NC   | Worst case | -11.221     | -22.666 | 0.481
OH   | Mean       | 2.746       | 4.104   | 5.510
OH   | Std. dev.  | 3.428       | 5.548   | 4.500
OH   | 10th prct. | 0.154       | 0.118   | 1.395
OH   | Worst case | -12.065     | -21.317 | 0.280

[Figure 4: Out-of-sample profit distribution for the North Carolina site (cumulative distribution functions of profit for Persistence, DDP and RDDP).]
hours. The day-ahead prices $\pi_t \in \mathbb{R}_+^{24}$ per unit of energy committed are known at the beginning
of the day. However, the hourly amounts of wind energy $\omega_{t+1} \in \mathbb{R}_+^{24}$ generated over the day
are uncertain. If the actual production falls short of the commitment levels, there is a penalty of
twice the respective day-ahead price for each unit of unsatisfied demand. The wind energy producer
also operates three storage devices indexed by $l \in \{1, 2, 3\}$, each of which can have a different
capacity $\bar s^l$, hourly leakage $\gamma^l$, charging efficiency $\eta_c^l$ and discharging efficiency $\eta_d^l$. We denote
by $s_{t+1}^l \in \mathbb{R}_+^{24}$ the hourly filling levels of storage $l$ over the next 24 hours. The wind producer's
objective is to maximize the expected profit over a short-term planning horizon of $T = 7$ days.

The endogenous state is given by the storage levels at the end of day $t$, $s_t = \{s_{t,24}^l\}_{l=1}^3 \in \mathbb{R}_+^3$, while
the exogenous state comprises the day-ahead prices $\pi_t \in \mathbb{R}_+^{24}$ and the wind energy production levels
$\omega_t \in \mathbb{R}_+^{24}$ of day $t - 1$, which are revealed to the producer on day $t$. Thus, we set $\xi_t = (\pi_t, \omega_t)$.
The best bidding and storage strategy can be found by solving the dynamic program
$$\begin{aligned}
V_t(s_t, \xi_t) = \max \;\; & \pi_t^\top x_t - 2\, \pi_t^\top \mathbb{E}[e_{t+1}^u \mid \xi_t] + \mathbb{E}[V_{t+1}(s_{t+1}, \xi_{t+1}) \mid \xi_t] \\
\text{s.t.} \;\; & x_t,\ e_{t+1}^{\{c,w,u\}} \in \mathbb{R}_+^{24}, \quad e_{t+1}^{\{+,-\},l},\ s_{t+1}^l \in \mathbb{R}_+^{24} \;\; \forall l \\
& \omega_{t+1,h} = e_{t+1,h}^c + e_{t+1,h}^{+,1} + e_{t+1,h}^{+,2} + e_{t+1,h}^{+,3} + e_{t+1,h}^w \quad \forall h \\
& x_{t,h} = e_{t+1,h}^c + e_{t+1,h}^{-,1} + e_{t+1,h}^{-,2} + e_{t+1,h}^{-,3} + e_{t+1,h}^u \quad \forall h \\
& s_{t+1,h}^l = \gamma^l s_{t+1,h-1}^l + \eta_c^l\, e_{t+1,h}^{+,l} - \frac{1}{\eta_d^l}\, e_{t+1,h}^{-,l}, \quad s_{t+1,h}^l \leq \bar s^l \quad \forall h, l
\end{aligned} \qquad (10)$$

with terminal condition $V_{T+1} \equiv 0$. Here, we adopt the convention that $s_{t+1,0}^l = s_{t,24}^l$ for all $l$.
Besides the usual here-and-now decisions $x_t$, the decision vector $u_t$ now also includes wait-and-see
decisions that are chosen after $\xi_{t+1}$ has been revealed (see Remark 4.1): $e^c$ represents the amount
of wind energy used to meet the commitment, $e^{+,l}$ represents the amount of wind energy fed into
storage $l$, $e^{-,l}$ represents the amount of energy from storage $l$ used to meet the commitment, $e^w$ represents the amount of wind energy that is wasted, and $e^u$ represents the unmet energy commitment.
Our computational experiment is based on day-ahead prices for the PJM market and wind speed data
for North Carolina (33.9375N, 77.9375W) and Ohio (41.8125N, 81.5625W) from 2002 to 2011 (520
weeks). As $\xi_t$ is a 48-dimensional vector with high correlations between its components, we perform
principal component analysis to obtain a 6-dimensional subspace that explains more than 90% of
the variability of the historical observations. The conditional probabilities $q_t(\xi_t)$ are subsequently
estimated using the projected data points. The parameters for the storage devices are taken from
[30]. We solve the wind energy commitment problem using the DDP and RDDP algorithms (i.e.,
the algorithm of § 4 with $\rho = 0$ and $\rho = 1$, respectively) as well as a persistence heuristic that naively
pledges the wind generation of the previous day by setting $x_t = \omega_t$. Persistence was proposed as
a useful baseline in [28]. Note that problem (10) is beyond the scope of traditional reinforcement
learning algorithms due to the high dimensionality of the action spaces and the seasonalities in
the wind and price data. We train DDP and RDDP on the first 260 weeks and test the resulting
commitment strategies as well as the persistence heuristic on the last 260 weeks of the data set.
Table 2 reports the test statistics of the different algorithms. We find that the persistence heuristic
wins in terms of standard deviation, while RDDP wins in all other categories. However, the higher
standard deviation of RDDP can be explained by a heavier upper tail (which is indeed desirable).
Moreover, the profit distribution generated by RDDP stochastically dominates those generated by
DDP and the persistence heuristic; see Figure 4. Another major benefit of RDDP is that it cuts off
any losses (negative profits), whereas all other algorithms bear a significant risk of incurring a loss.
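The stochastic dominance claim can be checked empirically by comparing quantile functions of the profit samples. The sketch below uses synthetic profits, not the paper's results, and `dominates` is a hypothetical helper name.

```python
import numpy as np

def dominates(a, b):
    """True if the empirical distribution of `a` first-order stochastically
    dominates that of `b`: Q_a(p) >= Q_b(p) at every quantile level p,
    equivalently F_a(x) <= F_b(x) for all x."""
    qs = np.linspace(0.0, 1.0, 101)
    return bool(np.all(np.quantile(a, qs) >= np.quantile(b, qs)))

# Synthetic profits: the "robust" policy shifts the whole distribution upward.
rng = np.random.default_rng(1)
base = rng.normal(size=2000)
ddp_profit = base
rddp_profit = base + 0.5
```

A distribution shifted upward by a constant dominates the original at every quantile, which is the pattern Figure 4 reports for RDDP versus DDP and persistence.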
Concluding remarks The proposed RDDP algorithm combines ideas from robust optimization,
reinforcement learning and approximate dynamic programming. We remark that the N K convex
optimization problems arising in each backward induction step are independent of each other and
thus lend themselves to parallel implementation. We also emphasize that Assumption 4.1 could be
relaxed to allow ct and gt to display a general nonlinear dependence on st . This would invalidate
Corollary 4.1 but not Theorem 4.1. If one is willing to accept a potentially larger mismatch between
the true nonconvex cost-to-go function and its convex approximation architecture, then Algorithm 1
can even be applied to specific motor control, vehicle control or other nonlinear control problems.
Acknowledgments: This research was supported by EPSRC under grant EP/I014640/1.
References
[1] D.P. Bertsekas. Dynamic Programming and Optimal Control, Vol. II. Athena Scientific, 3rd edition, 2007.
[2] W. Powell. Approximate Dynamic Programming: Solving the Curses of Dimensionality. Wiley-Blackwell, 2007.
[3] L. Devroye. The uniform convergence of the Nadaraya–Watson regression function estimate. Canadian Journal of Statistics, 6(2):179–191, 1978.
[4] A. Ben-Tal, L. El Ghaoui, and A. Nemirovski. Robust Optimization. Princeton University Press, 2009.
[5] A. Keshavarz and S. Boyd. Quadratic approximate dynamic programming for input-affine systems. International Journal of Robust and Nonlinear Control, 2012. Forthcoming.
[6] A. Nilim and L. El Ghaoui. Robust control of Markov decision processes with uncertain transition matrices. Operations Research, 53(5):780–798, 2005.
[7] G. Iyengar. Robust dynamic programming. Mathematics of Operations Research, 30(2):257–280, 2005.
[8] S. Mannor, O. Mebel, and H. Xu. Lightning does not strike twice: Robust MDPs with coupled uncertainty. In Proceedings of the 29th International Conference on Machine Learning, pages 385–392, 2012.
[9] W. Wiesemann, D. Kuhn, and B. Rustem. Robust Markov decision processes. Mathematics of Operations Research, 38(1):153–183, 2013.
[10] M.G. Lagoudakis and R. Parr. Least-squares policy iteration. The Journal of Machine Learning Research, 4:1107–1149, 2003.
[11] D.P. Bertsekas. Approximate policy iteration: A survey and some new methods. Journal of Control Theory and Applications, 9(3):310–335, 2011.
[12] C.E. Rasmussen and M. Kuss. Gaussian processes in reinforcement learning. In Advances in Neural Information Processing Systems, pages 751–759, 2004.
[13] L. Buşoniu, A. Lazaric, M. Ghavamzadeh, R. Munos, R. Babuška, and B. De Schutter. Least-squares methods for policy iteration. In Reinforcement Learning, pages 75–109. Springer, 2012.
[14] X. Xu, T. Xie, D. Hu, and X. Lu. Kernel least-squares temporal difference learning. International Journal of Information Technology, 11(9):54–63, 2005.
[15] Y. Engel, S. Mannor, and R. Meir. Reinforcement learning with Gaussian processes. In Proceedings of the 22nd International Conference on Machine Learning, pages 201–208, 2005.
[16] G. Taylor and R. Parr. Kernelized value function approximation for reinforcement learning. In Proceedings of the 26th International Conference on Machine Learning, pages 1017–1024, 2009.
[17] E.A. Nadaraya. On estimating regression. Theory of Probability & its Applications, 9(1):141–142, 1964.
[18] G.S. Watson. Smooth regression analysis. Sankhyā: The Indian Journal of Statistics, Series A, 26(4):359–372, 1964.
[19] B. Silverman. Density Estimation for Statistics and Data Analysis. Chapman & Hall/CRC, 1986.
[20] L. Hannah, W. Powell, and D. Blei. Nonparametric density estimation for stochastic optimization with an observable state variable. In Advances in Neural Information Processing Systems, pages 820–828, 2010.
[21] A. Tsybakov. Introduction to Nonparametric Estimation. Springer, 2009.
[22] L. Pardo. Statistical Inference Based on Divergence Measures, volume 185 of Statistics: A Series of Textbooks and Monographs. Chapman and Hall/CRC, 2005.
[23] Z. Wang, P.W. Glynn, and Y. Ye. Likelihood robust optimization for data-driven newsvendor problems. Technical report, Stanford University, 2009.
[24] A. Ben-Tal, D. den Hertog, A. De Waegenaere, B. Melenberg, and G. Rennen. Robust solutions of optimization problems affected by uncertain probabilities. Management Science, 59(2):341–357, 2013.
[25] A. Shapiro, D. Dentcheva, and A. Ruszczyński. Lectures on Stochastic Programming: Modeling and Theory. SIAM, 2009.
[26] J.F. Sturm. Using SeDuMi 1.02, a MATLAB toolbox for optimization over symmetric cones. Optimization Methods and Software, 11–12:625–654, 1999.
[27] J. Löfberg. YALMIP: A toolbox for modeling and optimization in MATLAB. In Proceedings of the CACSD Conference, 2004.
[28] L. Hannah and D. Dunson. Approximate dynamic programming for storage problems. In Proceedings of the 28th International Conference on Machine Learning, pages 337–344, 2011.
[29] J.H. Kim and W.B. Powell. Optimal energy commitments with storage and intermittent supply. Operations Research, 59(6):1347–1360, 2011.
[30] M. Kraning, Y. Wang, E. Akuiyibo, and S. Boyd. Operation and configuration of a storage portfolio via convex optimization. In Proceedings of the IFAC World Congress, pages 10487–10492, 2011.
Online Variational Approximations to
non-Exponential Family Change Point Models:
With Application to Radar Tracking
Ryan Turner
Northrop Grumman Corp.
[email protected]
Steven Bottone
Northrop Grumman Corp.
[email protected]
Clay Stanek
Northrop Grumman Corp.
[email protected]
Abstract
The Bayesian online change point detection (BOCPD) algorithm provides an efficient way to do exact inference when the parameters of an underlying model
may suddenly change over time. BOCPD requires computation of the underlying model's posterior predictives, which can only be computed online in O(1)
time and memory for exponential family models. We develop variational approximations to the posterior on change point times (formulated as run lengths) for
efficient inference when the underlying model is not in the exponential family,
and does not have tractable posterior predictive distributions. In doing so, we develop improvements to online variational inference. We apply our methodology
to a tracking problem using radar data with a signal-to-noise feature that is Rice
distributed. We also develop a variational method for inferring the parameters of
the (non-exponential family) Rice distribution.
Change point detection has been applied to many applications [5; 7]. In recent years there have been
great improvements to the Bayesian approaches via the Bayesian online change point detection
algorithm (BOCPD) [1; 23; 27]. Likewise, the radar tracking community has been improving in its
use of feature-aided tracking [10]: methods that use auxiliary information from radar returns such
as signal-to-noise ratio (SNR), which depend on radar cross sections (RCS) [21]. Older systems
would often filter only noisy position (and perhaps Doppler) measurements while newer systems use
more information to improve performance. We use BOCPD for modeling the RCS feature. Whereas
BOCPD inference could be done exactly when finding change points in conjugate exponential family
models, the physics of RCS measurements often causes them to be distributed in non-exponential
family ways, often following a Rice distribution. To do inference efficiently we call upon variational
Bayes (VB) to find approximate posterior (predictive) distributions. Furthermore, the nature of both
BOCPD and tracking require the use of online updating. We improve upon the existing and limited
approaches to online VB [24; 13]. This paper produces contributions to, and builds upon background
from, three independent areas: change point detection, variational Bayes, and radar tracking.
Although the emphasis in machine learning is on filtering, a substantial part of tracking with radar
data involves data association, illustrated in Figure 1. Observations of radar returns contain measurements from multiple objects (targets) in the sky. If we knew which radar return corresponded
to which target we would be presented with NT ∈ N0 independent filtering problems; Kalman
filters [14] (or their nonlinear extensions) are applied to ?average out? the kinematic errors in the
measurements (typically positions) using the measurements associated with each target. The data
association problem is to determine which measurement goes to which track. In the classical setup,
once a particular measurement is associated with a certain target, that measurement is plugged into
the filter for that target as if we knew with certainty it was the correct assignment. The association
algorithms, in effect, find the maximum a posteriori (MAP) estimate on the measurement-to-track
association. However, approaches such as the joint probabilistic data association (JPDA) filter [2]
and the probability hypothesis density (PHD) filter [16] have deviated from this.
To find the MAP estimate a log likelihood of the data under each possible assignment vector a must
be computed. These are then used to construct cost matrices that reduce the assignment problem to a
particular kind of optimization problem (the details of which are beyond the scope of this paper). The
motivation behind feature-aided tracking is that additional features increase the probability that the
MAP measurement-to-track assignment is correct. Based on physical arguments the RCS feature
(SNR) is often Rice distributed [21, Ch. 3]; although, in certain situations RCS is exponential or
gamma distributed [26]. The parameters of the RCS distribution are determined by factors such as
the shape of the aircraft facing the radar sensor. Given that different aircraft have different RCS
characteristics, if one attempts to create a continuous track estimating the path of an aircraft, RCS
features may help distinguish one aircraft from another if they cross paths or come near one another,
for example. RCS also helps distinguish genuine aircraft returns from clutter: a flock of birds or
random electrical noise, for example. However, the parameters of the RCS distributions may also
change for the same aircraft due to a change in angle or ground conditions. These must be taken into
account for accurate association. Providing good predictions in light of a possible sudden change in
the parameters of a time series is "right up the alley" of BOCPD and change point methods.
The original BOCPD papers [1; 11] studied sudden changes in the parameters of exponential family
models for time series. In this paper, we expand the set of applications of BOCPD to radar SNR
data which often has the same change point structure found in other applications, and requires online
predictions. The BOCPD model is highly modular in that it looks for changes in the parameters of
any underlying process model (UPM). The UPM merely needs to provide posterior predictive probabilities; the UPM can otherwise be a "black box." The BOCPD queries the UPM for a prediction
of the next data point under each possible run length, the number of points since the last change
point. If (and only if by Hipp [12]) the UPM is exponential family (with a conjugate prior) the
posterior is computed by accumulating the sufficient statistics since the last potential change point.
This allows for O(1) UPM updates in both computation and memory as the run length increases.
We motivate the use of VB for implementing UPMs when the data within a regime is believed to
follow a distribution that is not exponential family. The methods presented in this paper can be used
to find variational run length posteriors for general non-exponential family UPMs in addition to the
Rice distribution. Additionally, the methods for improving online updating in VB (Section 2.2) are
applicable in areas outside of change point detection.
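For contrast with the Rice case, the O(1) conjugate updating available to exponential family UPMs can be sketched for a Gaussian UPM with a normal-gamma prior: only three scalars are stored regardless of run length. The class below is an illustrative stand-in using the standard normal-gamma posterior formulas, not the paper's implementation.

```python
import numpy as np

class GaussianNormalGammaUPM:
    """O(1)-per-observation conjugate updates: only n, sum(x), sum(x^2)
    are stored, regardless of run length."""
    def __init__(self, mu0=0.0, lam0=1.0, alpha0=1.0, beta0=1.0):
        self.mu0, self.lam0, self.alpha0, self.beta0 = mu0, lam0, alpha0, beta0
        self.n, self.sx, self.sxx = 0, 0.0, 0.0

    def update(self, x):
        # Accumulate sufficient statistics in constant time and memory.
        self.n += 1
        self.sx += x
        self.sxx += x * x

    def posterior(self):
        # Standard normal-gamma posterior hyper-parameters.
        n, sx, sxx = self.n, self.sx, self.sxx
        xbar = sx / n
        lam_n = self.lam0 + n
        mu_n = (self.lam0 * self.mu0 + sx) / lam_n
        alpha_n = self.alpha0 + n / 2.0
        ss = sxx - n * xbar**2                  # sum of squared deviations
        beta_n = (self.beta0 + 0.5 * ss
                  + self.lam0 * n * (xbar - self.mu0)**2 / (2.0 * lam_n))
        return mu_n, lam_n, alpha_n, beta_n
```

It is exactly this constant-memory accumulation that fails for the Rice UPM, since its likelihood has no finite-dimensional conjugate sufficient statistics.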
[Figure 1 plot: likelihood versus SNR (0–20) for the clutter (birds), track 1 (747), and track 2 (EMB 110) distributions.]
Figure 1: Illustrative example of a tracking scenario: The black lines show the true tracks while the red
stars show the state estimates over time for track 2 and the blue stars for track 1. The 95% credible regions
on the states are shown as blue ellipses. The current (+) and previous measurements are connected to their
associated tracks via red lines. The clutter measurements (birds in this case) are shown with black dots. The
distributions on the SNR (RCS) for each track (blue and red) and the clutter (black) are shown on the right.
To our knowledge this paper is the first to demonstrate how to compute Bayesian posterior distributions on the parameters of a Rice distribution; the closest work would be Lauwers et al. [15],
which computes a MAP estimate. Other novel factors of this paper include: demonstrating the usefulness (and advantages over existing techniques) of change point detection for RCS estimation and
tracking; and applying variational inference for UPMs where analytic posterior predictives are not
possible. This paper provides four main technical contributions: 1) VB inference for inferring the
parameters of a Rice distribution. 2) General improvements to online VB (which is then applied to
updating the UPM in BOCPD). 3) Derive a VB approximation to the run length posterior when the
UPM posterior predictive is intractable. 4) Handle censored measurements (particularly for a Rice
distribution) in VB. This is key for processing missed detections in data association.
1 Background
In this section we briefly review the three areas of background: BOCPD, VB, and tracking.
1.1 Bayesian Online Change Point Detection
We briefly summarize the model setup and notation for the BOCPD algorithm; see [27, Ch. 5] for a
detailed description. We assume we have a time series with n observations so far y1, . . . , yn ∈ Y. In
effect, BOCPD performs message passing to do online inference on the run length rn ∈ {0, . . . , n − 1}, the
number of observations since the last change point. Given an underlying predictive model (UPM)
and a hazard function h, we can compute an exact posterior over the run length rn. Conditional on a
run length, the UPM produces a sequential prediction on the next data point using all the data since
the last change point: p(yn | y(r), θm) where (r) := (n − r):(n − 1). The UPM is a simpler model
where the parameters θ change at every change point and are modeled as being sampled from a prior
with hyper-parameters θm. The canonical example of a UPM would be a Gaussian whose mean
and variance change at every change point. The online updates are summarized as:
msgn := p(rn, y1:n) = Σ_{rn−1} P(rn | rn−1) · p(yn | rn−1, y(r)) · p(rn−1, y1:n−1),  (1)
where the three factors are the hazard, the UPM predictive, and the previous message msgn−1.
Unless rn = 0, the sum in (1) only contains one term since the only possibility is that rn−1 = rn − 1.
The indexing convention is such that if rn = 0 then yn+1 is the first observation sampled from the
new parameters θ. The marginal posterior predictive on the next data point is easily calculated as:
p(yn+1 | y1:n) = Σ_{rn} p(yn+1 | y(r)) P(rn | y1:n).  (2)
Thus, the predictions from BOCPD fully integrate out any uncertainty in θ. The message updates
(1) perform exact inference under a model where the number of change points is not known a priori.
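The recursion (1), followed by normalization to obtain P(rn | y1:n), can be sketched for any UPM that returns per-run-length predictive densities. The constant hazard and the uniform stand-in predictives below are illustrative assumptions, not the paper's Rice UPM.

```python
import numpy as np

def bocpd_step(msg, pred, hazard):
    """One update of the run length posterior. `msg` is P(r_{n-1} | y_{1:n-1})
    over run lengths 0..n-2, `pred` is the UPM predictive density of y_n under
    each of those run lengths, and `hazard` is the constant P(change point)."""
    growth = msg * pred * (1.0 - hazard)          # r_n = r_{n-1} + 1
    cp = np.sum(msg * pred * hazard)              # r_n = 0 (change point)
    new_msg = np.concatenate(([cp], growth))
    return new_msg / np.sum(new_msg)              # normalize to P(r_n | y_{1:n})

msg = np.array([1.0])                             # r_0 = 0 with certainty
for t in range(50):
    pred = np.full(msg.shape, 0.1)                # stand-in UPM predictives
    msg = bocpd_step(msg, pred, hazard=1 / 20)
```

With uninformative (uniform) predictives the run length posterior reduces to the geometric prior implied by the constant hazard, which is a useful sanity check for an implementation.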
BOCPD RCS Model We show the Rice UPM as an example as it is required for our application.
The data within a regime are assumed to be iid Rice observations, with a normal-gamma prior:
yn ∼ Rice(ν, σ), ν ∼ N(μ0, σ²/λ0), σ^{−2} =: τ ∼ Gamma(α0, β0)  (3)
⇒ p(yn | ν, τ) = yn τ exp(−τ(yn² + ν²)/2) I0(yn ν τ) I{yn ≥ 0},  (4)
where I0(·) is a modified Bessel function of order zero, which is what excludes the Rice distribution
from the exponential family. Although the normal-gamma is not conjugate to a Rice it will enable
us to use the VB-EM algorithm. The UPM parameters are the Rice shape¹ ν ∈ R and scale σ ∈ R+,
θ := {ν, σ}, and the hyper-parameters are the normal-gamma parameters θm := {μ0, λ0, α0, β0}.
Every change point results in a new value for ν and σ being sampled. A posterior on θ is maintained
for each run length, i.e. every possible starting point for the current regime, and is updated at each
new data point. Therefore, BOCPD maintains n distinct posteriors on θ, and although this can be
reduced with pruning, it necessitates posterior updates on θ that are computationally efficient.
Note that the run length updates in (1) require the UPM to provide predictive log likelihoods at all
sample sizes rn (including zero). Therefore, UPM implementations using such approximations as
plug-in MLE predictions will not work very well. The MLE may not even be defined for run lengths
smaller than the number of UPM parameters |θ|. For a Rice UPM, the efficient O(1) updating in
exponential family models by using a conjugate prior and accumulating sufficient statistics is not
possible. This motivates the use of VB methods for approximating the UPM predictions.
1.2 Variational Bayes
We follow the framework of VB where, when computation of the exact posterior distribution
p(θ | y1:n) is intractable, it is often possible to create a variational approximation q(θ) that is locally optimal in terms of the Kullback–Leibler (KL) divergence KL(q‖p) while constraining q to be
in a certain family of distributions Q. In general this is done by optimizing a lower bound L(q) on
the evidence log p(y1:n), using either gradient based methods or standard fixed point equations.
¹The shape ν is usually assumed to be positive (∈ R+); however, there is nothing wrong with using a
negative ν as Rice(x | ν, σ) = Rice(x | −ν, σ). It also allows for use of a normal-gamma prior.
The VB-EM Algorithm In many cases, such as the Rice UPM, the derivation of the VB fixed point
equations can be simplified by applying the VB-EM algorithm [3]. VB-EM is applicable to models
that are conjugate-exponential (CE) after being augmented with latent variables x1:n. A model is
CE if: 1) the complete data likelihood p(x1:n, y1:n | θ) is an exponential family distribution; and 2)
the prior p(θ) is a conjugate prior for the complete data likelihood p(x1:n, y1:n | θ). We only have
to constrain the posterior q(θ, x1:n) = q(θ) q(x1:n) to factorize between the latent variables and the
parameters; we do not constrain the posterior to be of any particular parametric form. Requiring the
complete likelihood to be CE is a much weaker condition than requiring the marginal on the observed
data p(y1:n | θ) to be CE. Consider a mixture of Gaussians: the model becomes CE when augmented
with latent variables (class labels). This is also the case for the Rice distribution (Section 2.1).
Like the ordinary EM algorithm [9], the VB-EM algorithm alternates between two steps: 1) Find the
posterior of the latent variables treating the expected natural parameters η̄ := E_{q(θ)}[η] as correct:
q(xi) ∝ p(xi | yi, η = η̄). 2) Find the posterior of the parameters using the expected sufficient statistics S̄ := E_{q(x1:n)}[S(x1:n, y1:n)] as if they were the sufficient statistics for the complete data set:
q(θ) ∝ p(θ | S(x1:n, y1:n) = S̄). The posterior will be of the same exponential family as the prior.
1.3 Tracking
In this section we review data association, which along with filtering constitutes tracking. In data
association we estimate the association vectors a which map measurements to tracks. At each time
step, n ∈ N1, we observe NZ(n) ∈ N0 measurements, Zn = {z_{i,n}}_{i=1}^{NZ(n)}, which includes returns
from both real targets and clutter (spurious measurements). Here, z_{i,n} ∈ Z is a vector of kinematic
measurements (positions in R³, or R⁴ with a Doppler), augmented with an RCS component R ∈ R+
for the measured SNR, at time tn ∈ R. The assignment vector at time tn is such that an(i) = j
if measurement i is associated with track j > 0; an(i) = 0 if measurement i is clutter. The
inverse mapping a_n^{−1} maps tracks to measurements: meaning a_n^{−1}(an(i)) = i if an(i) ≠ 0; and
a_n^{−1}(i) = 0 ⇒ an(j) ≠ i for all j. For example, if NT = 4 and a = [2 0 0 1 4] then NZ = 5,
Nc = 2, and a^{−1} = [4 1 0 5]. Each track is associated with at most one measurement, and vice-versa.
In N-D data association we jointly find the MAP estimate of the association vectors over a sliding
window of the last N − 1 time steps. We assume we have NT(n) ∈ N0 total tracks as a known
parameter: NT(n) is adjusted over time using various algorithms (see [2, Ch. 3]). In the generative
process each track places a probability distribution on the next N − 1 measurements, with both
kinematic and RCS components. However, if the random RCS R for a measurement is below R0
then it will not be observed. There are Nc(n) ∈ N0 clutter measurements from a Poisson process
with λ := E[Nc(n)] (often with uniform intensity). The ordering of measurements in Zn is assumed
to be uniformly random. For 3D data association the model joint p(Zn−1:n, an−1, an | Z1:n−2) is:
Π_{i=1}^{NT} pi(z_{a_n^{−1}(i),n}, z_{a_{n−1}^{−1}(i),n−1}) × Π_{i=n−1}^{n} [ λ^{Nc(i)} exp(−λ)/|Zi|! · Π_{j=1}^{|Zi|} p0(z_{j,i})^{I{ai(j)=0}} ],  (5)
where pi is the probability of the measurement sequence under track i; p0 is the clutter distribution.
The probability pi is the product of the RCS component predictions (BOCPD) and the kinematic
components (filter); informally, pi(z) = pi(positions) × pi(RCS). If there is a missed detection, i.e.
a_n^{−1}(i) = 0, we then use pi(z_{a_n^{−1}(i),n}) = P(R < R0) under the RCS model for track i with no contribution from the positional (kinematic) component. Just as BOCPD allows any black box probabilistic
predictor to be used as a UPM, any black box model of measurement sequences can be used in (5).
The estimation of association vectors for the 3D case becomes an optimization problem of the form:
(ân−1, ân) = argmax_{(an−1, an)} log P(an−1, an | Z1:n) = argmax_{(an−1, an)} log p(Zn−1:n, an−1, an | Z1:n−2),  (6)
which is effectively optimizing (5) with respect to the assignment vectors. The optimization given
in (6) can be cast as a multidimensional assignment (MDA) problem [2], which can be solved efficiently in the 2D case. Higher dimensional assignment problems, however, are NP-hard; approximate, yet typically very accurate, solvers must be used for real-time operation, which is usually
required for tracking systems [20].
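In the 2D case, optimizing the assignment cost matrix reduces to a linear assignment problem that can be solved exactly in polynomial time; a minimal sketch using SciPy's Hungarian-algorithm solver, with a made-up negative log-likelihood cost matrix.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# cost[i, j] = -log p_i(z_j): negative log-likelihood of assigning
# measurement j to track i (the values here are illustrative only).
cost = np.array([[0.2, 2.5, 3.0],
                 [2.8, 0.4, 2.9],
                 [3.1, 2.7, 0.3]])

# Minimizing total cost = maximizing the joint log-likelihood, i.e. the
# MAP one-to-one measurement-to-track matching.
track_idx, meas_idx = linear_sum_assignment(cost)
```

Higher-dimensional (N-D) versions of this matching are NP-hard, which is why the sliding-window formulation (6) relies on approximate solvers in practice.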
If a radar scan occurs at each time step and a target is not detected, we assume the SNR has not
exceeded the threshold, implying 0 ≤ R < R0. This is a (left) censored measurement and is treated
differently than a missing data point. Censoring is accounted for in Section 2.3.
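The missed-detection term P(R < R0) is a Rice CDF evaluation. A sketch via scipy.stats.rice, which parameterizes Rice(ν, σ) with shape b = ν/σ and scale σ; the numerical values below are illustrative.

```python
from scipy.stats import rice

nu, sigma, R0 = 2.0, 0.5, 1.0

# scipy's rice(b, scale) is Rice(nu, sigma) with b = nu / sigma, scale = sigma.
p_miss = rice.cdf(R0, b=nu / sigma, scale=sigma)   # P(R < R0): missed detection
```

For a high-SNR track (ν well above R0) this probability is small, so a missed detection carries substantial evidence against associating that track with no measurement.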
2 Online Variational UPMs
We cover the four technical challenges for implementing non-exponential family UPMs in an efficient and online manner. We drop the index of the data point i when it is clear from context.
2.1 Variational Posterior for a Rice Distribution
The Rice distribution has the property that
x ? N (?, ? 2 ) ,
y 0 ? N (0, ? 2 ) =? R =
p
x2 + y 02 ? Rice(?, ?) .
(7)
For simplicity we perform inference using $R^2$, as opposed to $R$, and transform accordingly:

$x \sim \mathcal{N}(\nu, \sigma^2)\,, \quad R^2 - x^2 \sim \mathrm{Gamma}(\tfrac{1}{2}, \tfrac{\lambda}{2})\,, \quad \lambda := 1/\sigma^2 \in \mathbb{R}^+$

$\Longrightarrow\; p(R^2, x) = p(R^2 \mid x)\,p(x) = \mathrm{Gamma}(R^2 - x^2 \mid \tfrac{1}{2}, \tfrac{\lambda}{2})\,\mathcal{N}(x \mid \nu, \sigma^2)\,. \quad (8)$
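The sampling property (7) is easy to check by simulation. In the sketch below the values of $\nu$ and $\sigma$ are arbitrary choices for the demo; scipy parameterizes the Rice distribution by the shape $b = \nu/\sigma$ and a scale $\sigma$:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
nu, sigma, n = 3.0, 1.5, 200_000   # assumed parameter values for the demo

# Two independent Gaussians as in (7): the radius is Rice distributed.
x = rng.normal(nu, sigma, n)
y = rng.normal(0.0, sigma, n)
r = np.hypot(x, y)

rice = stats.rice(nu / sigma, scale=sigma)
print(r.mean(), rice.mean())                 # empirical vs. exact Rice mean
print((r**2).mean(), nu**2 + 2 * sigma**2)   # E[R^2] = nu^2 + 2*sigma^2
```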
The complete likelihood (8) is the product of two exponential family models and is exponential family itself, parameterized with base measure $h$ and partition factor $g$:

$\eta = [\nu\lambda,\, -\lambda/2]^\top\,, \quad S = [x,\, R^2]^\top\,, \quad h(R^2, x) = \big(2\pi\sqrt{R^2 - x^2}\big)^{-1}\,, \quad g(\nu, \lambda) = \lambda \exp(-\nu^2\lambda/2)\,.$

By inspection we see that the natural parameters $\eta$ and sufficient statistics $S$ are the same as a Gaussian with unknown mean and variance. Therefore, we apply the normal-gamma prior on $(\nu, \lambda)$ as it is the conjugate prior for the complete data likelihood. This allows us to apply the VB-EM algorithm. We use $y_i := R_i^2$ as the VB observation, not $R_i$ as in (3). In (5), $z_{\cdot,\cdot}(\mathrm{end})$ is the RCS $R$.
VB M-Step We derive the posterior updates to the parameters given expected sufficient statistics:

$\bar{x} := \sum_{i=1}^n \mathbb{E}[x_i]/n\,, \quad \mu_n = \frac{\beta_0 \mu_0 + \sum_i \mathbb{E}[x_i]}{\beta_0 + n}\,, \quad \beta_n = \beta_0 + n\,, \quad a_n = a_0 + n\,, \quad (9)$

$b_n = b_0 + \frac{1}{2}\sum_{i=1}^n \big(\mathbb{E}[x_i] - \bar{x}\big)^2 + \frac{1}{2}\frac{n\beta_0}{\beta_0 + n}\big(\bar{x} - \mu_0\big)^2 + \frac{1}{2}\sum_{i=1}^n \big(R_i^2 - \mathbb{E}[x_i]^2\big)\,. \quad (10)$

This is the same as an observation from a Gaussian and a gamma that share an (inverse) scale $\lambda$.
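The update (9)-(10) is a direct bookkeeping step once the expected sufficient statistics are available. A sketch, where the hyperparameter names (mu0, beta0, a0, b0) are my labels for the normal-gamma prior and the example inputs are arbitrary:

```python
import numpy as np

def rice_vb_m_step(E_x, R2, mu0, beta0, a0, b0):
    """Normal-gamma hyperparameter update given the expected sufficient
    statistics E[x_i] and observations y_i = R_i^2, following (9)-(10)."""
    E_x = np.asarray(E_x, dtype=float)
    R2 = np.asarray(R2, dtype=float)
    n = E_x.size
    xbar = E_x.mean()
    mu_n = (beta0 * mu0 + E_x.sum()) / (beta0 + n)
    beta_n = beta0 + n
    a_n = a0 + n
    b_n = (b0
           + 0.5 * np.sum((E_x - xbar) ** 2)
           + 0.5 * (n * beta0 / (beta0 + n)) * (xbar - mu0) ** 2
           + 0.5 * np.sum(R2 - E_x ** 2))
    return mu_n, beta_n, a_n, b_n

# Two pseudo-observations with E[x_i] = 2 and R_i^2 = 5 each.
mu_n, beta_n, a_n, b_n = rice_vb_m_step([2.0, 2.0], [5.0, 5.0],
                                        mu0=0.0, beta0=1.0, a0=1.0, b0=1.0)
print(mu_n, beta_n, a_n, b_n)
```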
VB E-Step We then must find both expected sufficient statistics $\bar{S}$. The expectation $\mathbb{E}[R_i^2 \mid R_i^2] = R_i^2$ trivially, leaving $\mathbb{E}[x_i \mid R_i^2]$. Recall that the joint on $(x, y')$ is a bivariate normal; if we constrain the radius to $R$, the angle $\theta$ will be distributed by a von Mises (VM) distribution. Therefore,

$\theta := \arccos(x/R) \sim \mathrm{VM}(0, \kappa)\,, \quad \kappa = R\,\mathbb{E}[\nu\lambda] \;\Longrightarrow\; \mathbb{E}[x] = R\,\mathbb{E}[\cos\theta] = R\,I_1(\kappa)/I_0(\kappa)\,, \quad (11)$

where computing $\kappa$ constitutes the VB E-step and we have used the trigonometric moment on $\theta$ [18]. This completes the computations required to do the VB updates on the Rice posterior.
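The E-step (11) only needs the Bessel-function ratio $I_1/I_0$; computing it with scipy's exponentially scaled Bessel functions avoids overflow for large $\kappa$. A sketch:

```python
from scipy.special import ive

def rice_vb_e_step(R, E_nu_lambda):
    """E[x | R] from (11): kappa = R * E[nu*lambda] and
    E[x] = R * I1(kappa) / I0(kappa). Since ive(v, z) = iv(v, z) * exp(-|z|),
    the exponential scaling cancels in the ratio."""
    kappa = R * E_nu_lambda
    return R * ive(1, kappa) / ive(0, kappa)

# Sanity checks: no information (kappa = 0) gives E[x] = 0, while a very
# concentrated von Mises posterior pushes E[x] toward R itself.
print(rice_vb_e_step(2.0, 0.0), rice_vb_e_step(2.0, 100.0))
```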
Variational Lower Bound For completeness, and to assess convergence, we derive the VB lower bound $\mathcal{L}(q)$. Using the standard formula [4] for $\mathcal{L}(q) = \mathbb{E}_q[\log p(y_{1:n}, x_{1:n}, \theta)] + H[q]$ we get:

$\sum_{i=1}^n \Big( \mathbb{E}[\log \lambda/2] - \tfrac{1}{2}\mathbb{E}[\lambda]R_i^2 + \big(\mathbb{E}[\nu\lambda] - \kappa_i/R_i\big)\mathbb{E}[x_i] - \tfrac{1}{2}\mathbb{E}[\nu^2\lambda] + \log I_0(\kappa_i) \Big) - \mathrm{KL}(q\|p)\,, \quad (12)$

where $p$ in the KL is the prior on $(\nu, \lambda)$, which is easy to compute as $q$ and $p$ are both normal-gamma. Equivalently, (12) can be optimized directly instead of using the VB-EM updates.
2.2 Online Variational Inference
In Section 2.1 we derived an efficient way to compute the variational posterior for a Rice distribution
for a fixed data set. However, as is apparent from (1) we need online predictions from the UPM;
we must be able to update the posterior one data point at a time. When the UPM is exponential
family and we can compute the posterior exactly, we merely use the posterior from the previous step
as the prior. However, since we are only computing a variational approximation to the posterior,
using the previous posterior as the prior does not give the exact same answer as re-computing the
posterior from batch. This gives two obvious options: 1) recompute the posterior from batch every
update at O(n) cost or 2) use the previous posterior as the prior at O(1) cost and reduced accuracy.
The difference between the options is encapsulated by looking at the expected sufficient statistics: $\bar{S} = \sum_{i=1}^n \mathbb{E}_{q(x_i|y_{1:n})}[S(x_i, y_i)]$. Naive online updating uses old expected sufficient statistics, whose posterior effectively uses $\bar{S} = \sum_{i=1}^n \mathbb{E}_{q(x_i|y_{1:i})}[S(x_i, y_i)]$. We get the best of both worlds if we adjust those estimates over time. We in fact can do this if we project the expected sufficient statistics into a "feature space" in terms of the expected natural parameters. For some function $f$,

$q(x_i) = p(x_i \mid y_i, \eta = \bar{\eta}) \;\Longrightarrow\; \mathbb{E}_{q(x_i|y_{1:n})}[S(x_i, y_i)] = f(y_i, \bar{\eta})\,. \quad (13)$

If $f$ is piecewise continuous then we can represent it with an inner product [8, Sec. 2.1.6]:

$f(y_i, \bar{\eta}) = \phi(\bar{\eta})^\top \psi(y_i) \;\Longrightarrow\; \bar{S} = \sum_{i=1}^n \phi(\bar{\eta})^\top \psi(y_i) = \phi(\bar{\eta})^\top \sum_{i=1}^n \psi(y_i)\,, \quad (14)$
where an infinite dimensional $\phi$ and $\psi$ may be required for exact representation, but can be approximated by a finite inner product. In the Rice distribution case we use (11):

$f(y_i, \bar{\eta}) = \mathbb{E}[x_i] = R_i\,I'\big(R_i\,\mathbb{E}[\nu\lambda]\big) = R_i\,I'\big((R_i/\sigma_0)\,\sigma_0\,\mathbb{E}[\nu\lambda]\big)\,, \quad I'(\cdot) := I_1(\cdot)/I_0(\cdot)\,, \quad (15)$

where recall that $y_i = R_i^2$ and $\bar{\eta}_1 = \mathbb{E}[\nu\lambda]$. We can easily represent $f$ with an inner product if we can represent $I'$ as an inner product: $I'(uv) = \phi(u)^\top \psi(v)$. We use unitless $\phi_i(u) = I'(c_i u)$ with $c_{1:G}$ as a log-linear grid from $10^{-2}$ to $10^3$ and $G = 50$. We use a lookup table for $\psi(v)$ that was trained to match $I'$ using non-negative least squares, which left us with a sparse lookup table. Online updating
for VB posteriors was also developed in [24; 13]. These methods involved introducing forgetting
factors to forget the contributions from old data points that might be detrimental to accuracy. Since
the VB predictions are ?embedded? in a change point method, they are automatically phased out if
the posterior predictions become inaccurate making the forgetting factors unnecessary.
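The inner-product approximation can be reproduced in a few lines: fit non-negative lookup weights $\psi(v)$ so that the basis $\phi_g(u) = I'(c_g u)$ reproduces $I'(uv)$ on a training grid, then evaluate at held-out points. The grid sizes and the value of $v$ below are scaled-down stand-ins for the $G = 50$ table described above:

```python
import numpy as np
from scipy.special import ive
from scipy.optimize import nnls

def bessel_ratio(z):
    # I'(z) = I1(z)/I0(z), computed stably via exponentially scaled Bessels.
    return ive(1, z) / ive(0, z)

c = np.logspace(-1, 2, 20)                 # log-linear grid of basis scales
u_train = np.logspace(-1, 1, 200)
A = bessel_ratio(np.outer(u_train, c))     # design matrix phi_g(u) = I'(c_g * u)

v = 3.0                                    # one "expected natural parameter" value
psi_v, _ = nnls(A, bessel_ratio(u_train * v))   # non-negative lookup weights

u_test = np.array([0.3, 1.0, 5.0])
approx = bessel_ratio(np.outer(u_test, c)) @ psi_v
exact = bessel_ratio(u_test * v)
print(np.abs(approx - exact).max())        # small approximation error
```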
2.3 Censored Data
As mentioned in Section 1.3, we must handle censored RCS observations during a missed detection.
In the VB-EM framework we merely have to compute the expected sufficient statistics given the censored measurement: $\mathbb{E}[S \mid R < R_0]$. The expected sufficient statistic from (11) is now:

$\mathbb{E}[x \mid R < R_0] = \int_0^{R_0} \mathbb{E}[x \mid R]\,p(R)\,dR \,\Big/\, \mathrm{RiceCDF}(R_0 \mid \nu, \sigma) = \nu\,\frac{1 - Q_2\big(\tfrac{\nu}{\sigma}, \tfrac{R_0}{\sigma}\big)}{1 - Q_1\big(\tfrac{\nu}{\sigma}, \tfrac{R_0}{\sigma}\big)}\,,$

where $Q_M$ is the Marcum Q function [17] of order $M$. Similar updates for $\mathbb{E}[S \mid R < R_0]$ are possible for exponential or gamma UPMs, but are not shown as they are relatively easy to derive.
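The Marcum Q function is not exposed directly by scipy, but $Q_M(a, b)$ equals the survival function of a noncentral chi-square with $2M$ degrees of freedom and noncentrality $a^2$, which gives the censored expectation in a few lines. The parameter values are arbitrary, and the Monte Carlo check uses property (7):

```python
import numpy as np
from scipy import stats

def marcum_q(M, a, b):
    # Q_M(a, b) = P(X > b^2) for X ~ noncentral chi^2 with df = 2M, nc = a^2.
    return stats.ncx2.sf(b**2, 2 * M, a**2)

def censored_mean_x(nu, sigma, R0):
    """E[x | R < R0] for the Rice model via the closed form in Section 2.3."""
    a, b = nu / sigma, R0 / sigma
    return nu * (1 - marcum_q(2, a, b)) / (1 - marcum_q(1, a, b))

nu, sigma, R0 = 2.0, 1.0, 2.5
rng = np.random.default_rng(1)
x = rng.normal(nu, sigma, 500_000)
y = rng.normal(0.0, sigma, 500_000)
R = np.hypot(x, y)
mc = x[R < R0].mean()                       # Monte Carlo E[x | R < R0]
print(censored_mean_x(nu, sigma, R0), mc)
```

Note that $1 - Q_1(\nu/\sigma, R_0/\sigma)$ is exactly the Rice CDF at $R_0$, which provides an independent consistency check.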
2.4 Variational Run Length Posteriors: Predictive Log Likelihoods
Both updating the BOCPD run length posterior (1) and finding the marginal predictive log likelihood of the next point (2) require calculating the UPM's posterior predictive log likelihood $\log p(y_{n+1} \mid r_n, y^{(r)})$. The marginal posterior predictive from (2) is used in data association (6) and
benchmarking BOCPD against other methods. However, the exact posterior predictive distribution
obtained by integrating the Rice likelihood against the VB posterior is difficult to compute.
We can break the BOCPD update (1) into a time and measurement update. The measurement update corresponds to a Bayesian model comparison (BMC) calculation with prior $p(r_n \mid y_{1:n})$:

$p(r_n \mid y_{1:n+1}) \propto p(y_{n+1} \mid r_n, y^{(r)})\,p(r_n \mid y_{1:n})\,. \quad (16)$

Using the BMC results in Bishop [4, Sec. 10.1.4] we find a variational posterior on the run length by using the variational lower bound for each run length $\mathcal{L}_i(q) \le \log p(y_{n+1} \mid r_n = i, y^{(r)})$, calculated using (12), as a proxy for the exact UPM posterior predictive in (16). This gives the exact VB posterior if the approximating family $\mathcal{Q}$ is of the form:

$q(r_n, \theta, x) = q_{\mathrm{UPM}}(\theta, x \mid r_n)\,q(r_n) \;\Longrightarrow\; q(r_n = i) = \exp(\mathcal{L}_i(q))\,p(r_n = i \mid y_{1:n})/\exp(\mathcal{L}(q))\,, \quad (17)$

where $q_{\mathrm{UPM}}$ contains whatever constraints we used to compute $\mathcal{L}_i(q)$. The normalizer on $q(r_n)$ serves as a joint VB lower bound: $\mathcal{L}(q) = \log \sum_i \exp(\mathcal{L}_i(q))\,p(r_n = i \mid y_{1:n}) \le \log p(y_{n+1} \mid y_{1:n})$. Note that the conditional factorization is different than the typical independence constraint on $q$.
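The measurement update (16)-(17) is a softmax of the per-run-length lower bounds against the prior, and its normalizer is the joint bound $\mathcal{L}(q)$. A sketch with made-up numbers:

```python
import numpy as np
from scipy.special import logsumexp

def vb_run_length_update(L_i, log_prior):
    """Combine per-run-length variational bounds L_i(q), used as proxies for
    log p(y_{n+1} | r_n = i, y^(r)), with the prior p(r_n | y_{1:n}).
    Returns the variational posterior q(r_n) and the joint bound L(q)."""
    log_joint = np.asarray(L_i) + np.asarray(log_prior)
    L_q = logsumexp(log_joint)        # the normalizer doubles as the bound
    return np.exp(log_joint - L_q), L_q

L_i = np.array([-1.0, -2.0, -5.0])            # toy bounds for 3 run lengths
log_prior = np.log(np.array([0.5, 0.3, 0.2]))
q_r, L_q = vb_run_length_update(L_i, log_prior)
print(q_r, L_q)
```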
Furthermore, we derive the estimation of the assignment vectors $a$ in (6) as a VB routine. We use a similar conditional constraint on the latent BOCPD variables given the assignment and constrain the assignment posterior to be a point mass. In the 2D assignment case, for example,

$q(a_n, X_{1:N_T}) = q(X_{1:N_T} \mid a_n)\,q(a_n) = q(X_{1:N_T} \mid a_n)\,\mathbb{I}\{a_n = \hat{a}_n\}\,, \quad (18)$
[Figure 2 appears here: panel (a) Online Updating plots KL (nats) vs. sample size; panels (b) Exponential RCS and (c) Rice RCS plot RCS RMSE (dBsm) vs. time.]
Figure 2: Left: KL from naive updating, Sato's method [24], and improved online VB to the batch VB posterior vs. sample size n, using a standard normal-gamma prior. Each curve represents a true ν in the generating Rice distribution: ν = 3.16 (red), ν = 10.0 (green), ν = 31.6 (blue), with σ = 1. Middle: The RMSE (dB scale) of the estimate on the mean RCS distribution E[Rn] is plotted for an exponential RCS model. The curves are BOCPD (blue), IMM (black), identity (magenta), α-filter (green), and median filter (red). Right: Same as the middle but for the Rice RCS case. The dashed lines are 95% confidence intervals.
where each track's $X_i$ represents all the latent variables used to compute the variational lower bound on $\log p(z_{j,n} \mid a_n(j) = i)$. In the BOCPD case, $X_i := \{r_n, x, \theta\}$. The resulting VB fixed point equations find the posterior on the latent variables $X_i$ by taking $\hat{a}_n$ as the true assignment and solving the VB problem of (17); the assignment $\hat{a}_n$ is found by using (6) and taking the joint BOCPD lower bound $\mathcal{L}(q)$ as a proxy for the BOCPD predictive log likelihood component of $\log p_i$ in (5).
3 Results

3.1 Improved Online Solution
We first demonstrate the accuracy of the online VB approximation (Section 2.2) on a Rice estimation example; here, we only test the VB posterior as no change point detection is applied. Figure 2(a) compares naive online updating, Sato's method [24], and our improved online updating in KL(online‖batch) of the posteriors for three different true parameters ν as sample size n increases. The performance curves are the KL divergence between these online approximations to the posterior and the batch VB solution (i.e. restarting VB from "scratch" every new data point) vs. sample size. The error for our method stays around a modest $10^{-2}$ nats while naive updating incurs large errors of 1 to 50 nats [19, Ch. 4]. Sato's method tends to settle in around a 1 nat approximation error. The recommended annealing schedule, i.e. forgetting factors, in [24] performed worse than naive updating. We did a grid search over annealing exponents and show the results for the best performing schedule of $n^{-0.52}$. By contrast, our method does not require the tuning of an annealing schedule.
3.2 RCS Estimation Benchmarking
We now compare BOCPD with other methods for RCS estimation. We use the same experimental example as Slocumb and Klusman III [25], which uses an augmented interacting multiple model (IMM) based method for estimating the RCS; we also compare against the same α-filter and median filter used in [25]. As a reference point, we also consider the "identity filter," which is merely an unbiased filter that uses only $y_n$ to estimate the mean RCS $\mathbb{E}[R_n]$ at time step n. We extend this example to look at Rice RCS in addition to the exponential RCS case. The bias correction constants in the IMM were adjusted for the Rice distribution case as per [25, Sec. 3.4].

The results on exponential distributions used in [25] and the Rice distribution case are shown in Figures 2(b) and 2(c). The IMM used in [25] was hard-coded to expect jumps in the SNR of multiples of ±10 dB, which is exactly what is presented in the example (a sequence of 20, 10, 30, and 10 dB). In [25] the authors mention that the IMM reaches an RMSE "floor" at 2 dB, yet BOCPD continues to drop as low as 0.56 dB. The RMSE from BOCPD does not spike nearly as high as the other methods upon a change in $\mathbb{E}[R_n]$. The α-filter and median filter appear worse than both the IMM and BOCPD. The RMSE and confidence intervals are calculated from 5000 runs of the experiment.
[Figure 3 appears here: panel (a) SIAP Metrics plots relative improvement (%) vs. difficulty; panel (b) Heathrow (LHR) plots Northing (km) vs. Easting (km).]
Figure 3: Left: Average relative improvements (%) for SIAP metrics: position accuracy (red), velocity accuracy (green), and spurious tracks (blue) across difficulty levels. Right: LHR: true trajectories shown as black lines, estimates using a BOCPD RCS model for association shown as blue stars, and the standard tracker as red circles. The standard tracker has spurious tracks over east London and near Ipswich. Background map data: Google Earth (TerraMetrics, Data SIO, NOAA, U.S. Navy, NGA, GEBCO, Europa Technologies).
3.3 Flightradar24 Tracking Problem
Finally, we used real flight trajectories from flightradar24 and plugged them into our 3D tracking
algorithm. We compare tracking performance between using our BOCPD model and the relatively
standard constant probability of detection (no RCS) [2, Sec. 3.5] setup. We use the single integrated
air picture (SIAP) metrics [6] to demonstrate the improved performance of the tracking. The SIAP
metrics are a standard set of metrics used to compare tracking systems. We broke the data into 30
regions during a one hour period (in Sept. 2012) sampled every 5 s, each within a 200 km by 200 km
area centered around the world's 30 busiest airports [22]. Commercial airport traffic is typically very
orderly and does not allow aircraft to fly close to one another or cross paths. Feature-aided tracking
is most necessary in scenarios with a more chaotic air situation. Therefore, we took random subsets
of 10 flight paths and randomly shifted their start time to allow for scenarios of greater interest.
The resulting SIAP metric improvements are shown in Figure 3(a) where we look at performance by
a difficulty metric: the number of times in a scenario any two aircraft come within ∼400 m of each
other. The biggest improvements are seen for difficulties above three where positional accuracy
increases by 30%. Significant improvements are also seen for velocity accuracy (11%) and the
frequency of spurious tracks (6%). Significant performance gains are seen at all difficulty levels
considered. The larger improvements at level three over level five are possibly due to some level five
scenarios that are not resolvable simply through more sophisticated models. We demonstrate how
our RCS methods prevent the creation of spurious tracks around London Heathrow in Figure 3(b).
4 Conclusions
We have demonstrated that it is possible to use sophisticated and recent developments in machine
learning such as BOCPD, and use the modern inference method of VB, to produce demonstrable
improvements in the much more mature field of radar tracking. We first closed a "hole" in the
literature in Section 2.1 by deriving variational inference on the parameters of a Rice distribution,
with its inherent applicability to radar tracking. In Sections 2.2 and 2.4 we showed that it is possible
to use these variational UPMs for non-exponential family models in BOCPD without sacrificing its
modular or online nature. The improvements in online VB are extendable to UPMs besides a Rice
distribution and more generally beyond change point detection. We can use the variational lower
bound from the UPM and obtain a principled variational approximation to the run length posterior.
Furthermore, we cast the estimation of the assignment vectors themselves as a VB problem, which is
in large contrast to the tracking literature. More algorithms from the tracking literature can possibly
be cast in various machine learning frameworks, such as VB, and improved upon from there.
References
[1] Adams, R. P. and MacKay, D. J. (2007). Bayesian online changepoint detection. Technical report, University of Cambridge, Cambridge, UK.
[2] Bar-Shalom, Y., Willett, P., and Tian, X. (2011). Tracking and Data Fusion: A Handbook of Algorithms. YBS Publishing.
[3] Beal, M. and Ghahramani, Z. (2003). The variational Bayesian EM algorithm for incomplete data: with application to scoring graphical model structures. In Bayesian Statistics, volume 7, pages 453–464.
[4] Bishop, C. M. (2007). Pattern Recognition and Machine Learning. Springer.
[5] Braun, J. V., Braun, R., and Müller, H.-G. (2000). Multiple changepoint fitting via quasilikelihood, with application to DNA sequence segmentation. Biometrika, 87(2):301–314.
[6] Byrd, E. (2003). Single integrated air picture (SIAP) attributes version 2.0. Technical Report 2003-029, DTIC.
[7] Chen, J. and Gupta, A. (1997). Testing and locating variance changepoints with application to stock prices. Journal of the American Statistical Association, 92(438):739–747.
[8] Courant, R. and Hilbert, D. (1953). Methods of Mathematical Physics. Interscience.
[9] Dempster, A. P., Laird, N. M., and Rubin, D. B. (1977). Maximum likelihood from incomplete data via the EM algorithm. Journal of the Royal Statistical Society, Series B, 39(1):1–38.
[10] Ehrman, L. M. and Blair, W. D. (2006). Comparison of methods for using target amplitude to improve measurement-to-track association in multi-target tracking. In Information Fusion, 2006 9th International Conference on, pages 1–8. IEEE.
[11] Fearnhead, P. and Liu, Z. (2007). Online inference for multiple changepoint problems. Journal of the Royal Statistical Society, Series B, 69(4):589–605.
[12] Hipp, C. (1974). Sufficient statistics and exponential families. The Annals of Statistics, 2(6):1283–1292.
[13] Honkela, A. and Valpola, H. (2003). On-line variational Bayesian learning. In 4th International Symposium on Independent Component Analysis and Blind Signal Separation, pages 803–808.
[14] Kalman, R. E. (1960). A new approach to linear filtering and prediction problems. Transactions of the ASME, Journal of Basic Engineering, 82(Series D):35–45.
[15] Lauwers, L., Barbé, K., Van Moer, W., and Pintelon, R. (2009). Estimating the parameters of a Rice distribution: A Bayesian approach. In Instrumentation and Measurement Technology Conference, 2009. I2MTC '09. IEEE, pages 114–117. IEEE.
[16] Mahler, R. (2003). Multi-target Bayes filtering via first-order multi-target moments. IEEE Trans. AES, 39(4):1152–1178.
[17] Marcum, J. (1950). Table of Q functions. U.S. Air Force RAND Research Memorandum M-339, Rand Corporation, Santa Monica, CA.
[18] Mardia, K. V. and Jupp, P. E. (2000). Directional Statistics. John Wiley & Sons, New York.
[19] Murray, I. (2007). Advances in Markov chain Monte Carlo methods. PhD thesis, Gatsby Computational Neuroscience Unit, University College London, London, UK.
[20] Poore, A. P., Rijavec, N., Barker, T. N., and Munger, M. L. (1993). Data association problems posed as multidimensional assignment problems: algorithm development. In Optical Engineering and Photonics in Aerospace Sensing, pages 172–182. International Society for Optics and Photonics.
[21] Richards, M. A., Scheer, J., and Holm, W. A., editors (2010). Principles of Modern Radar: Basic Principles. SciTech Pub.
[22] Rogers, S. (2012). The world's top 100 airports: listed, ranked and mapped. The Guardian.
[23] Saatçi, Y., Turner, R., and Rasmussen, C. E. (2010). Gaussian process change point models. In 27th International Conference on Machine Learning, pages 927–934, Haifa, Israel. Omnipress.
[24] Sato, M.-A. (2001). Online model selection based on the variational Bayes. Neural Computation, 13(7):1649–1681.
[25] Slocumb, B. J. and Klusman III, M. E. (2005). A multiple model SNR/RCS likelihood ratio score for radar-based feature-aided tracking. In Optics & Photonics 2005, pages 59131N–59131N. International Society for Optics and Photonics.
[26] Swerling, P. (1954). Probability of detection for fluctuating targets. Technical Report RM-1217, Rand Corporation.
[27] Turner, R. (2011). Gaussian Processes for State Space Models and Change Point Detection. PhD thesis, University of Cambridge, Cambridge, UK.
q-OCSVM: A q-Quantile Estimator for High-Dimensional Distributions
Assaf Glazer
Michael Lindenbaum
Shaul Markovitch
Department of Computer Science, Technion - Israel Institute of Technology
{assafgr,mic,shaulm}@cs.technion.ac.il
Abstract
In this paper we introduce a novel method that can efficiently estimate a family of
hierarchical dense sets in high-dimensional distributions. Our method can be regarded as a natural extension of the one-class SVM (OCSVM) algorithm that finds
multiple parallel separating hyperplanes in a reproducing kernel Hilbert space.
We call our method q-OCSVM, as it can be used to estimate q quantiles of a highdimensional distribution. For this purpose, we introduce a new global convex
optimization program that finds all estimated sets at once and show that it can be
solved efficiently. We prove the correctness of our method and present empirical
results that demonstrate its superiority over existing methods.
1 Introduction
One-class SVM (OCSVM) [14] is a kernel-based learning algorithm that is often considered to be
the method of choice for set estimation in high-dimensional data due to its generalization power,
efficiency, and nonparametric nature. Let X be a training set of examples sampled i.i.d. from a continuous distribution F with Lebesgue density f in $\mathbb{R}^d$. The OCSVM algorithm takes X and a parameter $0 < \nu < 1$, and returns a subset of the input space with a small volume while bounding a ν portion of examples in X outside the subset. Asymptotically, the probability mass of the returned subset converges to $\alpha = 1 - \nu$. Furthermore, when a Gaussian kernel with a zero-tending bandwidth is used, the solution also converges to the minimum-volume set (MV-set) at level α [19], which is a subset of the input space with the smallest volume and probability mass of at least α.
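The ν-property is easy to observe with an off-the-shelf implementation. The sketch below assumes scikit-learn is available; the data, kernel width, and ν value are arbitrary choices for the demo:

```python
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.RandomState(0)
X = rng.randn(2000, 2)               # stand-in for an i.i.d. training sample

nu = 0.25
ocsvm = OneClassSVM(kernel="rbf", gamma=0.5, nu=nu).fit(X)

# nu upper-bounds the fraction of training points left outside the estimated
# set and lower-bounds the fraction of support vectors, so the returned region
# behaves like an empirical (1 - nu)-quantile region.
outlier_frac = np.mean(ocsvm.decision_function(X) < 0)
sv_frac = len(ocsvm.support_) / len(X)
print(outlier_frac, sv_frac)
```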
In light of the above properties, the popularity of the OCSVM algorithm is not surprising. It appears,
however, that in some applications we are not actually interested in estimating a single MV-set but
in estimating multiple hierarchical MV-sets, which reveal more information about the distribution.
For instance, in cluster analysis [5], we are interested in learning hierarchical MV-sets to construct a
cluster tree of the distribution. In outlier detection [6], hierarchical MV-sets can be used to classify
examples as outliers at different levels of significance. In statistical tests, hierarchical MV-sets are
used for generalizing univariate tests to high-dimensional data [12, 4]. We are thus interested in a
method that generalizes the OCSVM algorithm for approximating hierarchical MV-sets. By doing
so we would leverage the advantages of the OCSVM algorithm in high-dimensional data and take it
a step forward by extending its solution for a broader range of applications.
Unfortunately, a straightforward approach of training a set of OCSVMs, one for each MV-set, would
not necessarily satisfy the hierarchy requirement. Let q be the number of hierarchical MV-sets
we would like to approximate. A naive approach would be to train q OCSVMs independently and
enforce hierarchy by intersection operations on the resulting sets. However, we find two major drawbacks in this approach: (1) the ν-property of the OCSVM algorithm, which provides us with bounds on the number of examples in X lying outside or on the boundary of each set, is no longer guaranteed due to the intersection operator; (2) MV-sets of a distribution, which are also level sets of the distribution's density f (under sufficient regularity conditions), are hierarchical by definition. Hence, by learning q OCSVMs independently, we ignore an important property of the correct solution, and thus are less likely to reach a generalized global solution.
In this paper we introduce a generalized version of the OCSVM algorithm for approximating hierarchical MV-sets in high-dimensional distributions. As in the naive approach, approximated MV-sets
in our method are represented as dense sets captured by separating hyperplanes in a reproducing kernel Hilbert space. However, our method does not suffer from the two drawbacks mentioned above.
To preserve the ν-property of the solution while fulfilling the hierarchy constraint, we require the resulting hyperplanes to be parallel to one another. To provide a generalized global solution, we introduce a new convex optimization program that finds all approximated MV-sets at once. Furthermore, we expect our method to have better generalization ability due to the parallelism constraint imposed on the hyperplanes, which also acts as a regularization term on the solution.
Figure 1: An approximation of 4 hierarchical MV-sets
We call our method q-OCSVM, as it can be used by statisticians to generalize q-quantiles to high-dimensional distributions. Figure 1 shows an example of 4-quantiles estimated for two-dimensional data. We show that our method can be solved efficiently, and provide theoretical results showing that it preserves the density assumption for each approximated set in the same sense suggested
by [14]. In addition, we empirically compare our method to existing methods on a variety of real
high-dimensional data and show its advantages in the examined domains.
2 Background
In one-dimensional settings, q-quantiles, which are points dividing a cumulative distribution function (CDF) into equal-sized subsets, are widely used to understand the distribution of values. These
points are well defined as the inverse of the CDF, that is, the quantile function. It would be useful to
have the same representation of q-quantiles in high-dimensional settings. However, it appears that
generalizing quantile functions beyond one dimension is hard, since the number of ways to define
them grows exponentially with the dimension [3]. Furthermore, while various quantile regression
methods [7, 16, 9] can be used to estimate a single quantile of a high-dimensional distribution,
extensions of those methods to estimate q-quantiles are mostly non-trivial.
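As a concrete point of reference, the one-dimensional case is simple: q-quantile thresholds can be read directly off the empirical CDF. The sketch below is our own illustration (not code from the paper), computing points x_j with empirical F(X ≤ x_j) ≥ α_j for α_j = j/q:

```python
import math

def empirical_q_quantiles(sample, q):
    """Thresholds x_j with empirical CDF F(X <= x_j) >= alpha_j = j / q."""
    xs = sorted(sample)
    n = len(xs)
    out = []
    for j in range(1, q):
        alpha = j / q
        k = math.ceil(alpha * n) - 1  # index of the alpha_j order statistic
        out.append(xs[k])
    return out

# 4-quantiles of the integers 1..100: the familiar quartile cut points.
print(empirical_q_quantiles(range(1, 101), 4))  # -> [25, 50, 75]
```

It is exactly this construction, unique and nested for free in one dimension, that has no canonical analogue when d > 1.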
Let us first understand the exponential complexity involved in estimating a generalized quantile
function in high-dimensional data. Let 0 < α_1 < α_2 < … < α_q < 1 be a sequence of equally-spaced q quantiles. When d = 1, the quantile transforms F^{-1}(α_j) are uniquely defined as the points
x_j ∈ R satisfying F(X ≤ x_j) ≥ α_j, where X is a random variable drawn from F. Equivalently,
F^{-1}(α_j) can be identified with the unique hierarchical intervals (−∞, x_j]. However, when d > 1,
intervals are replaced by sets C_1 ⊆ C_2 ⊆ … ⊆ C_q that satisfy F(C_j) = α_j but are not uniquely
defined. Assume for a moment that these sets are defined only by imposing directions on d − 1
dimensions (the direction of the first dimension can be chosen arbitrarily). Hence, we are left with
2^{d−1} possible ways of defining a generalized quantile function for the data.
Hypothetically, any arbitrary hierarchical sets satisfying F(C_j) = α_j can be used to define a valid
generalized quantile function. Nevertheless, we would like the distribution to be dense in these
sets so that the estimation will be informative enough. Motivated in this direction, Polonik [12]
suggested using hierarchical MV-sets to generalize quantile functions. Let C(α) be the MV-set at
level α with respect to F and the Lebesgue density f. Let L_f(c) = {x : f(x) ≥ c} be the level set
of f at level c. Polonik observed that, under sufficient regularity conditions on f, L_f(c) is an MV-set
of F at level α = F(L_f(c)). He thus suggested that level sets can be used as approximations of
the MV-sets of a distribution. Since level sets are hierarchical by nature, a density estimator over X
would be sufficient to construct a generalized quantile function.
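Polonik's observation can be illustrated on a discretized one-dimensional density. The sketch below is our own toy construction, not the authors' code: it builds approximate MV-sets by accumulating grid cells in order of decreasing density until mass α is covered, so each set is a prefix of the same density-sorted order and the hierarchy holds by construction, mirroring the nesting of level sets:

```python
def grid_mv_sets(density, dx, alphas):
    """Approximate MV-sets of a 1-D density sampled on an equally spaced grid.

    Cells are added in order of decreasing density until each coverage level
    alpha is reached, so the returned index sets are nested by construction.
    """
    order = sorted(range(len(density)), key=lambda i: -density[i])
    total = sum(density) * dx  # normalize in case f is unnormalized
    sets, chosen, mass = [], set(), 0.0
    it = iter(order)
    for alpha in sorted(alphas):
        while mass < alpha * total:
            i = next(it)
            chosen.add(i)
            mass += density[i] * dx
        sets.append(set(chosen))
    return sets

# A bimodal density on [0, 1]: the alpha = 0.5 set nests inside the 0.9 set.
grid = [i / 200.0 for i in range(201)]
f = [2.0 * (abs(x - 0.3) < 0.1) + 1.5 * (abs(x - 0.7) < 0.1) for x in grid]
C = grid_mv_sets(f, 1.0 / 200.0, [0.5, 0.9])
print(C[0] <= C[1])  # nested by construction: True
```

As the paper notes next, this density-first route becomes impractical in high dimensions, which is what motivates estimating the sets directly.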
Polonik's work was largely theoretical. In high-dimensional data, not only is density estimation
hard, but extracting level sets of the estimated density is also not always feasible. Furthermore,
in high-dimensional settings, even attempting to estimate q hierarchical MV-sets of a distribution
might be too optimistic an objective due to the exponential growth in the search space, which may
lead to overfitted estimates, especially when the sample is relatively small. Consequently, various
methods were proposed for estimating q-quantiles in multivariate settings without an intermediate
density estimation step [3, 21, 2, 20]. However, these methods were usually efficient only up to a
few dimensions. For a detailed discussion about generalized quantile functions, see Serfling [15].
One prominent method that uses a variant of the OCSVM algorithm for approximating level sets of a
distribution was proposed by Lee and Scott [8]. Their method is called nested OCSVM (NOC-SVM)
and it is based on a new quadratic program that simultaneously finds a global solution of multiple
nested half-space decision functions. An efficient decomposition method is introduced to solve
this program for large-scale problems. This program uses the C-SVM formulation of the OCSVM
algorithm [18], where ν is replaced by a different parameter, C ≥ 0, and incorporates nesting
constraints into the dual quadratic program of each approximated function. However, due to these
differences in formulation, our method converges to predefined q-quantiles of a distribution while theirs
converges to approximated sets with unpredicted probability masses. The probability masses in their
solution are even less trackable because the constraints imposed by the NOC-SVM program on the
dual variables change the geometric interpretation of the primal variables in a non-intuitive way.
An improved quantile regression variant of the OCSVM algorithm that also uses "non-crossing"
constraints to estimate non-crossing quantiles of a distribution was proposed by Takeuchi et al.
[17]. However, similar to the NOC-SVM method, after enforcing these constraints, the ν-property
of the solution is no longer guaranteed.
Recently, a greedy hierarchical MV-set estimator (HMVE) that uses OCSVMs as a basic component
was introduced by Glazer et al. [4]. This method approximates the MV-sets iteratively by training a sequence of OCSVMs, from the largest to the smallest sets. The superiority of HMVE was
shown over a density-based estimation method and over a different hierarchical MV-set estimator
that was also introduced in that paper and is based on the one-class neighbor machine (OCNM) algorithm [11]. However, as we shall see in experiments, it appears that approximations in this greedy
approach tend to become less accurate as the required number of MV-sets increases, especially for
approximated MV-sets with small ? in the last iterations.
In contrast to the naive approach of training q OCSVMs independently¹, our q-OCSVM estimator
preserves the ν-property of the solution and converges to a generalized global solution. In contrast
to the NOC-SVM algorithm, q-OCSVM converges to predefined q-quantiles of a distribution. In
contrast to the HMVE estimator, q-OCSVM provides global and stable solutions. We support these
advantages of our method with theoretical and empirical analysis.
3 The q-OCSVM Estimator
In the following we introduce our q-OCSVM method, which generalizes the OCSVM algorithm so
that its advantages can be applied to a broader range of applications. Here q stands for the number of
MV-sets we would like our method to approximate.

Let X = {x_1, …, x_n} be a set of feature vectors sampled i.i.d. with respect to F. Consider a
function Φ : R^d → F mapping the feature vectors in X to a hypersphere in an infinite Hilbert space
F. Let H be a hypothesis space of half-space decision functions f_C(x) = sgn((w · Φ(x)) − ρ)
such that f_C(x) = +1 if x ∈ C, and −1 otherwise. The OCSVM algorithm returns a function
f_C ∈ H that maximizes the margin between the half-space decision boundary and the origin in F,
while bounding the portion of examples in X satisfying f_C(x) = −1. This bound is predefined by a
parameter 0 < ν < 1, and it is also called the ν-property of the OCSVM algorithm. This function is
specified by the solution of the quadratic program

    min_{w∈F, ξ∈R^n, ρ∈R}   (1/2)‖w‖² − ρ + (1/(νn)) Σ_i ξ_i,
    s.t.   (w · Φ(x_i)) ≥ ρ − ξ_i,    ξ_i ≥ 0,        (1)

where ξ is a vector of the slack variables. All training examples x_i for which (w · Φ(x_i)) − ρ ≤ 0 are
called support vectors (SVs). Outliers are referred to as examples that strictly satisfy (w · Φ(x_i)) −
ρ < 0. By solving the program with ν = 1 − α, we can use the OCSVM to approximate C(α).

¹ In the following we call this method I-OCSVM (independent one-class SVMs).
Let 0 < α_1 < α_2 < … < α_q < 1 be a sequence of q quantiles. Our goal is to generalize the OCSVM
algorithm for approximating a set of MV-sets {C_1, …, C_q} such that a hierarchy constraint C_i ⊆ C_j
is satisfied for i < j. Given X, our q-OCSVM algorithm solves this primal program:
    min_{w, ρ_j, ξ_j}   (q/2)‖w‖² − Σ_{j=1}^q ρ_j + Σ_{j=1}^q (1/(ν_j n)) Σ_i ξ_{j,i}        (2)
    s.t.   (w · Φ(x_i)) ≥ ρ_j − ξ_{j,i},    ξ_{j,i} ≥ 0,    j ∈ [q], i ∈ [n],
where ν_j = 1 − α_j. This program generalizes Equation (1) to the case of finding multiple, parallel
half-space decision functions by searching for a global minimum over their sum of objective functions: the coupling between the q half-spaces is done by summing q OCSVM programs, while enforcing
these programs to share the same w. As a result, the q half-spaces in the solution of Equation (2)
differ only in their bias terms, and thus are parallel to each other. This program is convex, and thus
a global minimum can be found in polynomial time.

It is important to note that even with an ideal, unbounded number of examples, this program does
not necessarily converge to the exact MV-sets but to approximated MV-sets of a distribution. As
we shall see in Section 4, all decision functions returned by this program preserve the ν-property.
We argue that the stability of these approximated MV-sets benefits from the parallelism constraint
imposed on the half-spaces in H, which acts as a regularizer.
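To make the structure of Equation (2) concrete, the sketch below (our illustration, restricted to a linear kernel so that w is an explicit vector in R^d) evaluates the primal objective with each slack at its tight value ξ_{j,i} = max(0, ρ_j − w·x_i); the q hyperplanes share w and differ only in ρ_j:

```python
def primal_objective(w, rhos, nus, X):
    """q-OCSVM primal value of Eq. (2) for a linear kernel.

    w: shared weight vector, rhos: offsets rho_j, nus: nu_j = 1 - alpha_j,
    X: training points; slacks take their tight values max(0, rho_j - w.x_i).
    """
    q, n = len(rhos), len(X)
    dot = lambda a, b: sum(ai * bi for ai, bi in zip(a, b))
    obj = 0.5 * q * dot(w, w) - sum(rhos)  # (q/2)||w||^2 - sum_j rho_j
    for rho, nu in zip(rhos, nus):
        obj += sum(max(0.0, rho - dot(w, x)) for x in X) / (nu * n)
    return obj

# Two parallel hyperplanes sharing w = (1, 1), on three 2-D points.
X = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
print(primal_objective([1.0, 1.0], [0.5, 1.0], [0.75, 0.25], X))  # -> 0.5
```

Because w is shared, changing any single ρ_j moves one hyperplane parallel to the others, which is the geometric fact the proofs in Section 4 exploit.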
In the following we show that our program can be solved efficiently in its dual form. Using multipliers α_{j,i} ≥ 0, β_{j,i} ≥ 0, the Lagrangian of this program is

    L(w, ξ, ρ_1, …, ρ_q, α, β) = (q/2)‖w‖² − Σ_{j=1}^q ρ_j + Σ_{j=1}^q (1/(ν_j n)) Σ_i ξ_{j,i}
        − Σ_{j=1}^q Σ_i α_{j,i} ((Φ(x_i) · w) − ρ_j + ξ_{j,i}) − Σ_{j=1}^q Σ_i β_{j,i} ξ_{j,i}.        (3)
Setting the derivatives with respect to the primal variables w, ρ_j, ξ_j equal to zero yields

    w = (1/q) Σ_{j,i} α_{j,i} Φ(x_i),    Σ_i α_{j,i} = 1,    0 ≤ α_{j,i} ≤ 1/(nν_j),    i ∈ [n], j ∈ [q].        (4)
Substituting Equation (4) into Equation (3), and replacing the dot product (Φ(x_i) · Φ(x_s))_F with a
kernel function k(x_i, x_s)², we obtain the dual program

    min_α   (1/(2q)) Σ_{j,p∈[q]} Σ_{i,s∈[n]} α_{j,i} α_{p,s} k(x_i, x_s),
    s.t.   Σ_i α_{j,i} = 1,    0 ≤ α_{j,i} ≤ 1/(nν_j),    i ∈ [n], j ∈ [q].        (5)
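The dual (5) is a convex QP whose feasible set factors, per level j, into a box-capped simplex {α_j : Σ_i α_{j,i} = 1, 0 ≤ α_{j,i} ≤ 1/(nν_j)}. The paper does not spell out a particular solver here, so purely as an illustration of this constraint structure we sketch projected gradient descent, projecting each row by bisection on the shift τ in α_i = clip(v_i − τ, 0, cap); a production solver would use a QP or decomposition method instead:

```python
def project_capped_simplex(v, cap):
    """Euclidean projection of v onto {a : sum(a) = 1, 0 <= a_i <= cap},
    by bisection on the shift tau in a_i = clip(v_i - tau, 0, cap)."""
    lo, hi = min(v) - cap - 1.0, max(v) + 1.0
    for _ in range(100):
        tau = 0.5 * (lo + hi)
        if sum(min(cap, max(0.0, vi - tau)) for vi in v) > 1.0:
            lo = tau  # total mass too large: shift further down
        else:
            hi = tau
    return [min(cap, max(0.0, vi - tau)) for vi in v]

def dual_objective(alpha, K, q):
    """Objective of Eq. (5), written via a*_i = sum_j alpha_{j,i}."""
    n = len(K)
    astar = [sum(alpha[j][i] for j in range(q)) for i in range(n)]
    return sum(astar[i] * astar[s] * K[i][s]
               for i in range(n) for s in range(n)) / (2.0 * q)

def solve_dual(K, nus, steps=200, eta=0.1):
    """Projected gradient descent on the q-OCSVM dual (illustrative only)."""
    n, q = len(K), len(nus)
    alpha = [[1.0 / n] * n for _ in range(q)]  # feasible start
    for _ in range(steps):
        astar = [sum(alpha[j][i] for j in range(q)) for i in range(n)]
        grad = [sum(K[i][s] * astar[s] for s in range(n)) / q for i in range(n)]
        for j in range(q):  # the gradient is identical for every level j
            v = [alpha[j][i] - eta * grad[i] for i in range(n)]
            alpha[j] = project_capped_simplex(v, 1.0 / (n * nus[j]))
    return alpha

# Toy 3-point Gram matrix, two levels with nu = 0.9 and nu = 0.5.
K = [[1.0, 0.2, 0.1], [0.2, 1.0, 0.3], [0.1, 0.3, 1.0]]
A = solve_dual(K, nus=[0.9, 0.5])
print([round(sum(row), 6) for row in A])  # each row sums to 1
```

Note that the gradient depends on α only through the coupled sums α*_i, which is a direct consequence of the shared w.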
Similar to the formulation of the dual objective function in the original OCSVM algorithm, our dual
program depends only on the α multipliers, and hence can be solved more efficiently than the primal
one. The resulting decision function for the j-th estimate is

    f_{C_j}(x) = sgn( (1/q) Σ_i α*_i k(x_i, x) − ρ_j ),        (6)

where α*_i = Σ_{j=1}^q α_{j,i}. This efficient formulation of the decision function, which derives from the
fact that parallel half-spaces share the same w, allows us to compute the outputs of all q decision
functions simultaneously.

² A Gaussian kernel function k(x_i, x_s) = e^{−γ‖x_i − x_s‖²} was used in the following.
As in the OCSVM algorithm, the ρ_j are recovered by identifying points Φ(x_{j,i}) lying strictly on the
j-th decision boundary. These points are identified using the condition 0 < α_{j,i} < 1/(nν_j). Therefore,
ρ_j can be recovered from a point sv satisfying this condition by

    ρ_j = (w · Φ(sv)) = (1/q) Σ_i α*_i k(x_i, sv).        (7)
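Given any dual solution, Equations (6) and (7) are cheap to evaluate because all q decision functions share a single kernel expansion g(x) = (1/q) Σ_i α*_i k(x_i, x). The sketch below is our illustration with a hand-made, purely hypothetical α matrix standing in for an actual dual solution; it recovers each ρ_j from a point with 0 < α_{j,i} < 1/(nν_j) and classifies new points at all levels at once:

```python
import math

def gaussian_kernel(x, y, gamma=1.0):
    return math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(x, y)))

def fit_offsets_and_classify(alpha, nus, X, points, kernel):
    """Eqs. (6)-(7): one shared expansion g(x) = (1/q) sum_i a*_i k(x_i, x)."""
    q, n = len(alpha), len(X)
    astar = [sum(alpha[j][i] for j in range(q)) for i in range(n)]
    g = lambda x: sum(astar[i] * kernel(X[i], x) for i in range(n)) / q
    rhos = []
    for j in range(q):
        cap = 1.0 / (n * nus[j])
        # Eq. (7): an SV strictly inside the box lies on the j-th boundary
        sv = next(X[i] for i in range(n) if 1e-9 < alpha[j][i] < cap - 1e-9)
        rhos.append(g(sv))
    # Eq. (6): f_{C_j}(x) = sgn(g(x) - rho_j), for all q levels at once
    return [[1 if g(x) >= r else -1 for r in rhos] for x in points]

# Hypothetical dual variables (NOT from an actual solve), 4 points, q = 2.
X = [[0.0, 0.0], [0.1, 0.0], [1.0, 1.0], [0.0, 0.1]]
alpha = [[0.5, 0.3, 0.2, 0.0],
         [0.4, 0.4, 0.2, 0.0]]
labels = fit_offsets_and_classify(alpha, [0.5, 0.5], X,
                                  [[0.05, 0.05], [3.0, 3.0]], gaussian_kernel)
print(labels)  # the far-away point [3, 3] falls outside every level
```

The factor-of-q test-time saving discussed in Section 5.1 is visible here: g(x) is computed once per test point, and only the comparison against ρ_j differs per level.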
Figure 1 shows the resulting estimates of our q-OCSVM method for 4 hierarchical MV-sets with
α = 0.2, 0.4, 0.6, 0.8³. 100 training examples drawn i.i.d. from a bimodal distribution are marked
with black dots. It can be seen that the number of bounded SVs (outliers) at each level is no higher
than 100(1 − α_j), as expected according to the properties of our q-OCSVM estimator, which will be
proven in the following section.
4 Properties of the q-OCSVM Estimator
In this section we provide theoretical results for the q-OCSVM estimator. The program we solve is
different from the one in Equation (1). Hence, we cannot rely on the properties of OCSVM to prove
the properties of our method. We provide instead similar proofs, in the spirit of Schölkopf et al. [14]
and Glazer et al. [4], with some additional required extensions.

Definition 1. A set X = {x_1, …, x_n} is separable if there exists some w such that (Φ(x_i) · w) > 0
for all i ∈ {1, …, n}.

Note that if a Gaussian kernel is used (which implies k(x_i, x_s) > 0), as in our case, then X is separable.

Theorem 1. If X is separable, then a feasible solution exists for Equation (2) with ρ_j > 0 for all
j ∈ {1, …, q}.
Proof. Define M as the convex hull of Φ(x_1), …, Φ(x_n). Note that since X is separable, M does
not contain the origin. Then, by the supporting hyperplane theorem [10], there exists a hyperplane
(Φ(x_i) · w) ≥ ρ' that contains M on one side of it and does not contain the origin. Hence, (0 · w) −
ρ' < 0, which leads to ρ' > 0. Note that ρ_j = ρ' for all j ∈ [q] is a feasible solution for
Equation (2).
The following theorem shows that the regions specified by the decision functions f_{C_1}, …, f_{C_q} are
(a) approximations of the MV-sets in the same sense suggested by Schölkopf et al., and (b) hierarchically nested.

Theorem 2. Let f_{C_1}, …, f_{C_q} be the decision functions returned by the q-OCSVM estimator with
parameters {α_1, …, α_q}, X, k(·,·). Assume X is separable. Let SVo_j be the set of SVs lying
strictly outside C_j, and SVb_j be the set of SVs lying exactly on the boundary of C_j. Then, the
following statements hold: (1) C_j ⊆ C_k for α_j < α_k. (2) |SVo_j| / |X| ≤ 1 − α_j ≤ (|SVb_j| + |SVo_j|) / |X|. (3)
Suppose X is i.i.d. drawn from a distribution F which does not contain discrete components, and
k(·,·) is analytic and non-constant. Then |SVo_j| / |X| is asymptotically equal to 1 − α_j.
Proof. C_j and C_k are associated with two parallel half-spaces in H with the same w. Therefore,
statement (1) can be proven by showing that ρ_j ≥ ρ_k. α_j < α_k leads to ρ_j ≥ ρ_k since otherwise
the optimality of Equation (2) would be contradicted. Assume by negation that ν_j = 1 − α_j >
(|SVb_j| + |SVo_j|) / |X| for some j ∈ [q] in the optimal solution of Equation (2). Note that when parallel-shifting the optimal hyperplane by slightly increasing ρ_j, the term Σ_i ξ_{j,i} in the equation will change
proportionally to |SVb_j| + |SVo_j|. However, since (|SVb_j| + |SVo_j|) / (|X| ν_j) < 1, a slight increase in ρ_j will
result in a decrease in the objective function, which contradicts the optimality of the hyperplane.
The same goes for the other direction: assume by negation that |SVo_j| / |X| > 1 − α_j for some j ∈ [q]
in the optimal solution of Equation (2). Then, a slight decrease in ρ_j will result in a decrease in
the objective function, which contradicts the optimality of the hyperplane. We are now left to prove
statement (3): the covering number of the class of f_{C_j} functions (which are induced by k) is well-behaved. Hence, asymptotically, the probability of points lying exactly on the hyperplanes converges
to zero (cf. [13]).

³ Detailed setup parameters are discussed in Section 5.
5 Empirical Results

We extensively evaluated the effectiveness of our q-OCSVM method on a variety of real high-dimensional data from the UCI repository and the 20-Newsgroup document corpus, and compared
its performance to competing methods.

5.1 Experiments on the UCI Repository

We first evaluated our method on datasets taken from the UCI repository⁴. From each examined
dataset, a random set of 100 examples from the most frequent label was used as the training set X.
The remaining examples from the same label were used as the test set. We used all UCI datasets
with more than 50 test examples, a total of 61 data sets. The average number of features in a
dataset is 113⁵.
We compared the performance of our q-OCSVM method to three alternative methods that generalize
the OCSVM algorithm: HMVE (hierarchical minimum-volume estimator) [4], I-OCSVM (independent one-class SVMs), and NOC-SVM (nested one-class SVM) [8]. For the NOC-SVM method, we
used the implementation provided by the authors⁶. The LibSVM package [1] was used to implement
the HMVE and I-OCSVM methods. An implementation of our q-OCSVM estimator is available from:
http://www.cs.technion.ac.il/~assafgr/articles/q-ocsvm.html. All experiments were carried out with a Gaussian kernel (γ = 1/(2σ²) = 2.5/#features).
For each data set, we trained the reference methods to approximate hierarchical MV-sets at levels
α_1 = 0.05, α_2 = 0.1, …, α_19 = 0.95 (19-quantiles)⁷. Then, we evaluated the estimated q-quantiles
with the test set. Since the correct MV-sets are not known for the data, the quality of the approximated MV-sets was evaluated by the coverage ratio (CR): let α′ be the empirical proportion of an
approximated MV-set as measured with the test data. The expected proportion of examples
that lie within the MV-set C(α) is α. The coverage ratio is defined as α′/α. A perfect MV-set approximation method would yield a coverage ratio of 1.0 for all approximated MV-sets⁸. An advantage
of choosing this measure for evaluation is that it gives more weight to differences between α and
α′ in small quantiles associated with regions of high probability mass.
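The coverage-ratio evaluation itself is simple to reproduce. In the sketch below (ours; `decide` stands in for any fitted decision function f_C), α′ is the fraction of test points classified as inside the set and CR = α′/α:

```python
def coverage_ratio(decide, test_points, alpha):
    """CR = alpha' / alpha, with alpha' the fraction of test points that the
    fitted decision function places inside the approximated MV-set."""
    inside = sum(1 for x in test_points if decide(x) == 1)
    return (inside / len(test_points)) / alpha

# Toy check: a set capturing 45 of 100 points at nominal level alpha = 0.5.
pts = list(range(100))
decide = lambda x: 1 if x < 45 else -1
print(coverage_ratio(decide, pts, 0.5))  # -> 0.9
```

Because the denominator is α, the same absolute miss of 0.05 costs far more at α = 0.05 than at α = 0.95, which is the weighting the paper argues for.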
Results on test data for each approximated MV-set are shown in Figure 2. The left graph displays in
bars the empirical proportion of test examples in the approximated MV-sets (α′) as a function of the
expected proportion (α), averaged over all 61 data sets. The right graph displays the coverage ratio
of test examples as a function of α, averaged over all 61 data sets. It can be seen that our q-OCSVM
method dominates the others with the best average α′ and average coverage-ratio behaviors. In each
quantile separately, we tested the significance of the advantage of q-OCSVM over the competitors
using the Wilcoxon statistical test over the absolute difference between the expected and empirical
coverage ratios (|1.0 − CR|). The superiority of our method against the three competitors was found
significant, with P < 0.01, for each of the 19 quantiles separately.
The I-OCSVM method shows performance inferior to that of q-OCSVM. We ascribe this behavior to
the fact that it trains q OCSVMs independently, and thus reaches a local solution. Furthermore, we
believe that by ignoring the fundamental hierarchical structure of MV-sets, the I-OCSVM method is
more likely than ours to reach an overfitted solution.

⁴ archive.ics.uci.edu/ml/datasets.html
⁵ Nominal features were transformed into numeric ones using binary encoding; missing values were replaced by their features' average values.
⁶ http://web.eecs.umich.edu/~cscott
⁷ The equivalent C(ν) parameters of the NOC-SVM were initialized as suggested by the authors.
⁸ In outlier detection, this measure reflects the ratio between expected and empirical false alarm rates.
The HMVE method shows a decrease in performance from the largest to the smallest α. We assume
this is due to the greedy nature of this method. HMVE approximates the MV-sets iteratively by
training a sequence of OCSVMs, from the largest to the smallest α. OCSVMs trained later in the
sequence are thus more constrained in their approximations by solutions from previous iterations,
so that the error in approximations accumulates over time. This is in contrast to q-OCSVM, which
converges to a global minimum, and hence is more scalable than HMVE with respect to the number
of approximated MV-sets (q). The NOC-SVM method performs poorly in comparison to the other
methods. This is not surprising, since, unlike the other methods, we cannot set the parameters of
NOC-SVM to converge to predefined q-quantiles.
[Figure 2 plots: left panel "UCI-test: Q=19" showing α′ versus α; right panel showing coverage ratio (CR) versus α; curves for q-OCSVM, HMVE, I-OCSVM, and NOC-SVM.]
Figure 2: The q-OCSVM, HMVE, I-OCSVM, and NOC-SVM methods were trained to estimate 19-quantiles
for the distribution of the most frequent label on the 61 UCI datasets. Left: α′ as a function of α, averaged over
all datasets. Right: the coverage ratio as a function of α, averaged over all datasets.
Interestingly, the solutions produced by the HMVE and I-OCSVM methods for the largest approximated MV-set (associated with α_19 = 0.95) are equal to the solution of a single OCSVM algorithm
trained with ν = 1 − α_19 = 0.05. This equality derives from the definition of the HMVE and
I-OCSVM methods. Therefore, in this setup, we claim that q-OCSVM also outperforms the OCSVM
algorithm in the approximation of a single MV-set, and it does so with an average coverage ratio
of 0.871 versus 0.821. We believe this improved performance is due to the parallelism constraint
imposed by the q-OCSVM method on the hyperplanes, which acts as a regularization term on the
solution. This observation is an interesting research direction to address in our future studies.
In terms of runtime complexity, our q-OCSVM method has higher computational complexity than
HMVE and I-OCSVM, because we solve a global optimization problem rather than a series of smaller
localized subproblems. However, with regard to the runtime complexity on test samples, our method
is more efficient than HMVE and I-OCSVM by a factor of q, since the distances from the q half-spaces
differ only in their bias terms (ρ_j).
With regard to the choice of the Gaussian kernel width, parameter tuning for one-class classifiers,
in particular for OCSVMs, is an ongoing research area. Unlike binary classification tasks, negative
examples are not available to estimate the optimality of the solution. Consequently, we employed
a common practice [1] of using a fixed width divided by the number of features. However, in
future studies it would be interesting to consider alternative optimization criteria that allow tuning
parameters with cross-validation, for instance using the average coverage ratio over all quantiles
as an optimality criterion.
5.2 Experiments on Text Data

We evaluated our method on an additional setup of high-dimensional text data. We used the 20-Newsgroup document corpus⁹. The 500 words with the highest frequency counts were picked to generate
500 bag-of-words features. We use the sorted-by-date version of the corpus with 18846 documents
associated with 20 news categories. From this series of documents, the first 100 documents from
each category were used as the training set X. The subsequent documents from the same category
were used as the test set. We trained the reference methods with X to estimate 19-quantiles of a
distribution, and evaluated the estimated q-quantiles with the test set.

⁹ The 20-Newsgroup corpus is at http://people.csail.mit.edu/jrennie/20Newsgroups.

[Figure 3 plots: left panel "20Newsgroups-test: Q=19" showing α′ versus α; right panel showing coverage ratio (CR) versus α; curves for q-OCSVM, HMVE, and I-OCSVM.]
Figure 3: The q-OCSVM, HMVE, and I-OCSVM methods were trained to estimate 19-quantiles for the distribution of the 20 categories in the 20-Newsgroup document corpus. Left: α′ as a function of α, averaged over
all 20 categories. Right: the coverage ratio as a function of α, averaged over all 20 categories.
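The feature construction for this experiment is easy to reproduce: take the V most frequent words across the training documents and represent each document by its counts over that vocabulary. A scaled-down sketch (our code, with a tiny vocabulary in place of the paper's 500 words):

```python
from collections import Counter

def build_vocab(docs, size):
    """The `size` most frequent words across all training documents."""
    counts = Counter(w for d in docs for w in d.lower().split())
    return [w for w, _ in counts.most_common(size)]

def bag_of_words(doc, vocab):
    """Count vector of `doc` over the fixed vocabulary."""
    c = Counter(doc.lower().split())
    return [c[w] for w in vocab]

docs = ["the cat sat on the mat", "the dog sat", "a cat and a dog"]
vocab = build_vocab(docs, 3)
print(vocab, bag_of_words("the cat saw the dog", vocab))
```

With the paper's setting (V = 500, 100 training documents per category), each category's training set X is a 100 x 500 count matrix fed to the MV-set estimators.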
Results on test data for each approximated MV-set are shown in Figure 3, in the same manner as in
Figure 2¹⁰. Unlike the experiments on the UCI repository, results in these experiments are not as
close to the optimum, but they can still provide useful information about the distributions. Again, our
q-OCSVM method dominates the others with the best average α′ and average coverage-ratio behaviors.
According to the Wilcoxon statistical test with P < 0.01, our method performs significantly better
than the other competitors for each of the 19 quantiles separately.

It can be seen that the differences in coverage ratios between q-OCSVM and I-OCSVM in the largest
quantile (associated with α_19 = 0.95) are relatively high: the average coverage ratio for
q-OCSVM is 0.555, versus 0.452 for I-OCSVM. Recall that the solution of I-OCSVM in the largest
quantile is equal to the solution of a single OCSVM algorithm trained with ν = 0.05. These results
are aligned with our conclusions from the UCI repository experiments: the parallelism constraint, which acts as a regularizer, may lead to improved performance even for the approximation
of a single MV-set.
6 Summary
The q-OCSVM method introduced in this paper can be regarded as a generalized OCSVM, as it
finds multiple parallel separating hyperplanes in a reproducing kernel Hilbert space. Theoretical
properties of our method are analyzed, showing that it can be used to approximate a family of
hierarchical MV-sets while preserving the guaranteed separation properties (the ν-property), in the same
sense suggested by Schölkopf et al.

Our q-OCSVM method is empirically evaluated on a variety of high-dimensional data from the UCI
repository and the 20-Newsgroup document corpus, and its advantage is verified in this setup. We
believe that our method will benefit practitioners whose goal is to model distributions by q-quantiles
in complex settings where density estimation is hard to apply. An interesting direction for future
research would be to evaluate our method on problems in specific domains that utilize q-quantiles for
distribution representation. These domains include cluster analysis, outlier detection, and statistical
tests.
¹⁰ Results for NOC-SVM were omitted from the graphs due to the limitation of the method in q-quantile
estimation, which results in inferior performance also in this setup.
References
[1] Chih-Chung Chang and Chih-Jen Lin. LIBSVM: a library for support vector machines, 2001.
[2] Yixin Chen, Xin Dang, Hanxiang Peng, and Henry L. Bart. Outlier detection with the kernelized spatial depth function. IEEE Transactions on Pattern Analysis and Machine Intelligence, 31(2):288-305, 2009.
[3] G. Fasano and A. Franceschini. A multidimensional version of the Kolmogorov-Smirnov test. Monthly Notices of the Royal Astronomical Society, 225:155-170, 1987.
[4] A. Glazer, M. Lindenbaum, and S. Markovitch. Learning high-density regions for a generalized Kolmogorov-Smirnov test in high-dimensional data. In NIPS, pages 737-745, 2012.
[5] John A. Hartigan. Clustering Algorithms. John Wiley & Sons, Inc., 1975.
[6] V. Hodge and J. Austin. A survey of outlier detection methodologies. Artificial Intelligence Review, 22(2):85-126, 2004.
[7] Roger Koenker. Quantile Regression. Cambridge University Press, 2005.
[8] Gyemin Lee and Clayton Scott. Nested support vector machines. IEEE Transactions on Signal Processing, 58(3):1648-1660, 2010.
[9] Youjuan Li, Yufeng Liu, and Ji Zhu. Quantile regression in reproducing kernel Hilbert spaces. Journal of the American Statistical Association, 102(477):255-268, 2007.
[10] D.G. Luenberger and Y. Ye. Linear and Nonlinear Programming. Springer, 3rd edition, 2008.
[11] A. Munoz and J.M. Moguerza. Estimation of high-density regions using one-class neighbor machines. In PAMI, pages 476-480, 2006.
[12] W. Polonik. Concentration and goodness-of-fit in higher dimensions: (asymptotically) distribution-free methods. The Annals of Statistics, 27(4):1210-1229, 1999.
[13] B. Schölkopf, A.J. Smola, R.C. Williamson, and P.L. Bartlett. New support vector algorithms. Neural Computation, 12(5):1207-1245, 2000.
[14] Bernhard Schölkopf, John C. Platt, John Shawe-Taylor, Alex J. Smola, and Robert C. Williamson. Estimating the support of a high-dimensional distribution. Neural Computation, 13(7):1443-1471, 2001.
[15] R. Serfling. Quantile functions for multivariate analysis: approaches and applications. Statistica Neerlandica, 56(2):214-232, 2002.
[16] Ingo Steinwart, Don R. Hush, and Clint Scovel. A classification framework for anomaly detection. In JMLR, pages 211-232, 2005.
[17] Ichiro Takeuchi, Quoc V. Le, Timothy D. Sears, and Alexander J. Smola. Nonparametric quantile estimation. JMLR, 7:1231-1264, 2006.
[18] Vladimir N. Vapnik. The Nature of Statistical Learning Theory. Springer-Verlag, New York, 2nd edition, 1998.
[19] R. Vert and J.P. Vert. Consistency and convergence rates of one-class SVMs and related algorithms. Journal of Machine Learning Research, 7:817-854, 2006.
[20] W. Zhang, X. Lin, M.A. Cheema, Y. Zhang, and W. Wang. Quantile-based kNN over multi-valued objects. In ICDE, pages 16-27. IEEE, 2010.
[21] Yijun Zuo and Robert Serfling. General notions of statistical depth function. The Annals of Statistics, 28(2):461-482, 2000.
4,560 | 5,126 | Unsupervised Structure Learning of Stochastic
And-Or Grammars
Kewei Tu
Maria Pavlovskaia
Song-Chun Zhu
Center for Vision, Cognition, Learning and Art
Departments of Statistics and Computer Science
University of California, Los Angeles
{tukw,mariapavl,sczhu}@ucla.edu
Abstract
Stochastic And-Or grammars compactly represent both compositionality and reconfigurability and have been used to model different types of data such as images
and events. We present a unified formalization of stochastic And-Or grammars
that is agnostic to the type of the data being modeled, and propose an unsupervised
approach to learning the structures as well as the parameters of such grammars.
Starting from a trivial initial grammar, our approach iteratively induces compositions and reconfigurations in a unified manner and optimizes the posterior probability of the grammar. In our empirical evaluation, we applied our approach to
learning event grammars and image grammars and achieved comparable or better
performance than previous approaches.
1
Introduction
Stochastic grammars are traditionally used to represent natural language syntax and semantics, but
they have also been extended to model other types of data like images [1, 2, 3] and events [4, 5,
6, 7]. It has been shown that stochastic grammars are powerful models of patterns that combine
compositionality (i.e., a pattern can be decomposed into a certain configuration of sub-patterns) and
reconfigurability (i.e., a pattern may have multiple alternative configurations). Stochastic grammars
can be used to parse data samples into their compositional structures, which help solve tasks like
classification, annotation and segmentation in a unified way. We study stochastic grammars in the
form of stochastic And-Or grammars [1], which are an extension of stochastic grammars in natural
language processing [8, 9] and are closely related to sum-product networks [10]. Stochastic And-Or
grammars have been used to model spatial structures of objects and scenes [1, 3] as well as temporal
structures of actions and events [7].
Manual specification of a stochastic grammar is typically very difficult and therefore machine learning approaches are often employed to automatically induce unknown stochastic grammars from data.
In this paper we study unsupervised learning of stochastic And-Or grammars in which the training
data are unannotated (e.g., images or action sequences).
The learning of a stochastic grammar involves two parts: learning the grammar rules (i.e., the structure of the grammar) and learning the rule probabilities or energy terms (i.e., the parameters of the
grammar). One strategy in unsupervised learning of stochastic grammars is to manually specify
a fixed grammar structure (in most cases, the full set of valid grammar rules) and try to optimize
the parameters of the grammar. Many approaches of learning natural language grammars (e.g.,
[11, 12]) as well as some approaches of learning image grammars [10, 13] adopt this strategy. The
main problem of this strategy is that in some scenarios the full set of valid grammar rules is too large
for practical learning and inference, while manual specification of a compact grammar structure is
challenging. For example, in an image grammar the number of possible grammar rules to decompose an image patch is exponential in the size of the patch; previous approaches restrict the valid
ways of decomposing an image patch (e.g., allowing only horizontal and vertical segmentations),
which however reduces the expressive power of the image grammar.
In this paper, we propose an approach to learning both the structure and the parameters of a stochastic And-Or grammar. Our approach extends the previous work on structure learning of natural
language grammars [14, 15, 16], while improving upon the recent work on structure learning of And-Or grammars of images [17] and events [18]. Starting from a trivial initial grammar, our approach
iteratively inserts new fragments into the grammar to optimize its posterior probability. Most of
the previous structure learning approaches learn new compositions and reconfigurations modeled
in the grammar in a separate manner, which can be error-prone when the training data is scarce or
ambiguous; in contrast, we induce And-Or fragments of the grammar, which unifies the search for
new compositions and reconfigurations, making our approach more efficient and robust.
Our main contributions are as follows.
- We present a formalization of stochastic And-Or grammars that is agnostic to the types of atomic patterns and their compositions. Consequently, our learning approach is capable of learning from different types of data, e.g., text, images, events.
- Unlike some previous approaches that rely on heuristics for structure learning, we explicitly optimize the posterior probability of both the structure and the parameters of the grammar. The optimization procedure is made efficient by deriving and utilizing a set of sufficient statistics from the training data.
- We learn compositions and reconfigurations modeled in the grammar in a unified manner that is more efficient and robust to data scarcity and ambiguity than previous approaches.
- We empirically evaluated our approach in learning event grammars and image grammars and it achieved comparable or better performance than previous approaches.
2
Stochastic And-Or Grammars
Stochastic And-Or grammars are first proposed to model images [1] and later adapted to model
events [7]. Here we provide a unified definition of stochastic And-Or grammars that is agnostic to
the type of the data being modeled. We restrict ourselves to the context-free subclass of stochastic
And-Or grammars, which can be seen as an extension of stochastic context-free grammars in formal language theory [8] as well as an extension of decomposable sum-product networks [10]. A
stochastic context-free And-Or grammar is defined as a 5-tuple ⟨Σ, N, S, R, P⟩. Σ is a set of terminal nodes representing atomic patterns that are not decomposable; N is a set of nonterminal nodes representing decomposable patterns, which is divided into two disjoint sets: And-nodes N^AND and Or-nodes N^OR; S ∈ N is a start symbol that represents a complete entity; R is a set of grammar rules, each of which represents the generation from a nonterminal node to a set of nonterminal or terminal nodes; P is the set of probabilities assigned to the grammar rules. The set of grammar rules R is divided into two disjoint sets: And-rules and Or-rules.
- An And-rule represents the decomposition of a pattern into a configuration of non-overlapping sub-patterns. It takes the form of A → a1 a2 ... an, where A ∈ N^AND is a nonterminal And-node and a1 a2 ... an is a set of terminal or nonterminal nodes representing the sub-patterns. A set of relations are specified between the sub-patterns and between the nonterminal node A and the sub-patterns, which configure how these sub-patterns form the composite pattern represented by A. The probability of an And-rule is specified by the energy terms defined on the relations. Note that one can specify different types of relations in different And-rules, which allows multiple types of compositions to be modeled in the same grammar.
- An Or-rule represents an alternative configuration of a composite pattern. It takes the form of O → a, where O ∈ N^OR is a nonterminal Or-node, and a is either a terminal or a nonterminal node representing a possible configuration. The set of Or-rules with the same left-hand side can be written as O → a1 | a2 | ... | an. The probability of an Or-rule specifies how likely the alternative configuration represented by the Or-rule is selected.
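To make these definitions concrete, here is a minimal Python sketch of a stochastic context-free And-Or grammar over strings. The rule tables and node names below are our own illustration, not from the paper, and the only And-rule relation used is left-to-right concatenation:

```python
import random

# A tiny stochastic And-Or grammar over strings. Terminals are plain
# strings; an And-node expands to all of its children in order (the
# "concatenating" relation); an Or-node picks one child by probability.
AND_RULES = {                 # And-node -> fixed child sequence
    "A1": ["a", "O1"],
    "A2": ["b", "c"],
}
OR_RULES = {                  # Or-node -> [(child, probability), ...]
    "S":  [("A1", 0.6), ("A2", 0.4)],
    "O1": [("b", 0.5), ("c", 0.5)],
}

def sample(node):
    """Top-down generation starting from `node` (e.g., the start symbol S)."""
    if node in AND_RULES:     # And-node: expand every child
        return "".join(sample(c) for c in AND_RULES[node])
    if node in OR_RULES:      # Or-node: choose one alternative
        children, probs = zip(*OR_RULES[node])
        return sample(random.choices(children, weights=probs)[0])
    return node               # terminal node: an atomic pattern

print(sorted({sample("S") for _ in range(500)}))  # a subset of ['ab', 'ac', 'bc']
```

Parsing and the energy terms on relations are omitted; the sketch only shows the And/Or generative process.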
A stochastic And-Or grammar defines generative processes of valid entities, i.e., starting from an
entity containing only the start symbol S and recursively applying the grammar rules in R to convert
nonterminal nodes until the entity contains only terminal nodes (atomic patterns). Table 1 gives a few examples of stochastic context-free And-Or grammars that model different types of data.

Table 1: Examples of stochastic And-Or grammars

                            Terminal node          Nonterminal node     Relations in And-rules
Natural language grammar    Word                   Phrase               Deterministic "concatenating" relations
Event And-Or grammar [7]    Atomic action (e.g.,   Event or sub-event   Temporal relations (e.g., those
                            standing, drinking)                         proposed in [19])
Image And-Or grammar [1]    Visual word (e.g.,     Image patch          Spatial relations (e.g., those specifying
                            Gabor bases)                                relative positions, rotations and scales)

Figure 1: An illustration of the learning process. (a) The initial grammar. (b) Iteration 1: learning a grammar fragment rooted at N1. (c) Iteration 2: learning a grammar fragment rooted at N2.

3
Unsupervised Structure Learning

3.1
Problem Definition

In unsupervised learning of stochastic And-Or grammars, we aim to learn a grammar from a set of unannotated i.i.d. data samples (e.g., natural language sentences, quantized images, action sequences). The objective function is the posterior probability of the grammar given the training data:

P(G \mid X) \propto P(G)\,P(X \mid G) = \frac{1}{Z} e^{-\alpha \lVert G \rVert} \prod_{x_i \in X} P(x_i \mid G)

where G is the grammar, X = {x_i} is the set of training samples, Z is the normalization factor of the prior, α is a constant, and ‖G‖ is the size of the grammar. By adopting a sparsity prior that penalizes the size of the grammar, we hope to learn a compact grammar with good generalizability. In order to ease the learning process, during learning we approximate the likelihood P(x_i|G) with the Viterbi likelihood (the probability of the best parse of the data sample x_i). The Viterbi likelihood has been empirically shown to lead to better grammar learning results [20, 10] and can be interpreted as combining the standard likelihood with an unambiguity bias [21].

3.2
Algorithm Framework

We first define an initial grammar that generates the exact set of training samples. Specifically, for each training sample x_i ∈ X, there is an Or-rule S → A_i in the initial grammar where S is the start symbol and A_i is an And-node, and the probability of the rule is 1/‖X‖ where ‖X‖ is the number of training samples; for each x_i there is also an And-rule A_i → a_{i1} a_{i2} ... a_{in} where a_{ij} (j = 1...n) are the terminal nodes representing the set of atomic patterns contained in sample x_i, and a set of relations are specified between these terminal nodes such that they compose sample x_i. Figure 1(a) shows an example initial grammar. This initial grammar leads to the maximal likelihood on the training data but has a very small prior probability because of its large size.
Starting from the initial grammar, we introduce new intermediate nonterminal nodes between the
terminal nodes and the top-level nonterminal nodes in an iterative bottom-up fashion to generalize
the grammar and increase its posterior probability. At each iteration, we add a grammar fragment
into the grammar that is rooted at a new nonterminal node and contains a set of grammar rules that
specify how the new nonterminal node generates one or more configurations of existing terminal
or nonterminal nodes; we also try to reduce each training sample using the new grammar rules and
update the top-level And-rules accordingly. Figure 1 illustrates this learning process. There are
typically multiple candidate grammar fragments that can be added at each iteration, and we employ
greedy search or beam search to explore the search space and maximize the posterior probability of
the grammar. We also restrict the types of grammar fragments that can be added in order to reduce
the number of candidate grammar fragments, which will be discussed in the next subsection. The
algorithm terminates when no more grammar fragment can be found that increases the posterior
probability of the grammar.
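The construction of the trivial initial grammar described above is straightforward to sketch. This is our own minimal illustration with illustrative names; the relations specified between the terminal nodes are omitted:

```python
# Construct the trivial initial grammar of Section 3.2: one top-level
# Or-branch per training sample (probability 1/||X||) and one And-rule
# per sample listing its atomic patterns in order.
def initial_grammar(samples):
    or_rules = {"S": []}
    and_rules = {}
    for i, x in enumerate(samples):
        a = f"A{i}"
        or_rules["S"].append((a, 1.0 / len(samples)))
        and_rules[a] = list(x)          # children = atomic patterns of x
    return or_rules, and_rules

or_rules, and_rules = initial_grammar([["a", "b"], ["a", "c"], ["b", "c"]])
assert abs(sum(p for _, p in or_rules["S"]) - 1.0) < 1e-9
assert and_rules["A1"] == ["a", "c"]
```

This grammar parses every training sample with probability 1/‖X‖ and nothing else, which is exactly the maximal-likelihood, minimal-generalization starting point.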
3.3
And-Or Fragments
In each iteration of our learning algorithm framework, we search for a new grammar fragment and
add it into the grammar. There are many different types of grammar fragments, the choice of which
greatly influences the efficiency and accuracy of the learning algorithm. The two simplest types of grammar fragments are And-fragments and Or-fragments. An And-fragment contains a new And-node A and an And-rule A → a1 a2 ... an specifying the generation from the And-node A to a configuration of existing nodes a1 a2 ... an. An Or-fragment contains a new Or-node O and a set of Or-rules O → a1 | a2 | ... | an, each specifying the generation from the Or-node O to an existing node ai. While these two types of fragments are simple and intuitive, they both have important
disadvantages if they are searched for separately in the learning algorithm. For And-fragments, when
the training data is scarce, many compositions modeled by the target grammar would be missing
from the training data and hence cannot be learned by searching for And-fragments alone; besides,
if the search for And-fragments is not properly coupled with the search for Or-fragments, the learned
grammar would become large and redundant. For Or-fragments, it can be shown that in most cases
adding an Or-fragment into the grammar decreases the posterior probability of the grammar even
if the target grammar does contain the Or-fragment, so in order to learn Or-rules we need more
expensive search techniques than greedy or beam search employed in our algorithm; in addition, the
search for Or-fragments can be error-prone if different Or-rules can generate the same node in the
target grammar.
Instead of And-fragments and Or-fragments, we propose to search for And-Or fragments in the
learning algorithm. An And-Or fragment contains a new And-node A, a set of new Or-nodes
O1, O2, ..., On, an And-rule A → O1 O2 ... On, and a set of Or-rules Oi → ai1 | ai2 | ... | aimi for each Or-node Oi (where ai1, ai2, ..., aimi are existing nodes of the grammar). Such an And-Or fragment can generate ∏_{i=1}^{n} mi configurations of existing nodes. Figure 2(a) shows an
example And-Or fragment. It can be shown that by adding only And-Or fragments, our algorithm is
still capable of constructing any context-free And-Or grammar. Using And-Or fragments can avoid
or alleviate the problems associated with And-fragments and Or-fragments: since an And-Or fragment systematically covers multiple compositions, the data scarcity problem of And-fragments is
alleviated; since And-rules and Or-rules are learned in a more unified manner, the resulting grammar is often more compact; reasonable And-Or fragments usually increase the posterior probability
of the grammar, therefore easing the search procedure; finally, ambiguous Or-rules can be better
distinguished since they are learned jointly with their sibling Or-nodes in the And-Or fragments.
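Since each Or-node Oi offers mi alternatives, a single And-Or fragment compactly covers ∏ mi configurations, which can be enumerated directly (the node names below are illustrative):

```python
from itertools import product

# An And-Or fragment: And-node A -> O1 O2, with each Or-node Oi listing
# its mi alternative children. It compactly covers prod(mi) configurations.
fragment = {"O1": ["a11", "a12", "a13"], "O2": ["a21", "a22"]}

configs = [" ".join(c) for c in product(*fragment.values())]
print(len(configs))   # 3 * 2 = 6 configurations
```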
To perform greedy search or beam search, in each iteration of our learning algorithm we need to
find the And-Or fragments that lead to the highest gain in the posterior probability of the grammar.
Computing the posterior gain by re-parsing the training samples can be very time-consuming if the
training set or the grammar is large. Fortunately, we show that by assuming grammar unambiguity
the posterior gain of adding an And-Or fragment can be formulated based on a set of sufficient statistics of the training data and is efficient to compute. Since the posterior probability is proportional to
the product of the likelihood and the prior probability, the posterior gain is equal to the product of
the likelihood gain and the prior gain, which we formulate separately below.
Likelihood Gain. Remember that in our learning algorithm when an And-Or fragment is added
into the grammar, we try to reduce the training samples using the new grammar rules and update the
top-level And-rules accordingly. Denote the set of reductions being made on the training samples by RD. Suppose in RD, we replace a configuration e of nodes a_{1j_1} a_{2j_2} ... a_{nj_n} with the And-node A, where a_{ij_i} (i = 1...n) is an existing terminal or nonterminal node that can be generated by the new Or-node O_i in the And-Or fragment. With reduction rd, the Viterbi likelihood of the training sample x where rd occurs is changed by two factors. First, since the grammar now generates the And-node A first, which then generates a_{1j_1} a_{2j_2} ... a_{nj_n}, the Viterbi likelihood of sample x is reduced by a factor of P(A → a_{1j_1} a_{2j_2} ... a_{nj_n}). Second, the reduction may make sample x identical to some other training samples, which increases the Viterbi likelihood of sample x by a factor equal to the ratio of the numbers of such identical samples after and before the reduction. To facilitate the computation of this factor, we can construct a context matrix CM where each row is a configuration of existing nodes covered by the And-Or fragment, each column is a context which is the surrounding patterns of a configuration, and each element is the number of times that the corresponding configuration and context co-occur in the training set. See Figure 2(c) for the context matrix of the example And-Or fragment. Putting these two types of changes to the likelihood together, we can formulate the likelihood gain of adding the And-Or fragment as follows (see the supplementary material for the full derivation).

\frac{P(X \mid G_{t+1})}{P(X \mid G_t)} = \frac{\prod_{i=1}^{n} \prod_{j=1}^{m_i} \lVert RD_i(a_{ij}) \rVert^{\lVert RD_i(a_{ij}) \rVert}}{\lVert RD \rVert^{n \lVert RD \rVert}} \times \frac{\prod_{c} \left( \sum_{e} CM[e,c] \right)^{\sum_{e} CM[e,c]}}{\prod_{e,c} CM[e,c]^{CM[e,c]}}

where G_t and G_{t+1} are the grammars before and after learning from the And-Or fragment, RD_i(a_{ij}) denotes the subset of reductions in RD in which the i-th node of the configuration being reduced is a_{ij}, e in the summation or product ranges over all the configurations covered by the And-Or fragment, and c in the product ranges over all the contexts that appear in CM.

Figure 2: (a) An example And-Or fragment. (b) The n-gram tensor of the And-Or fragment based on the training data (here n = 3). (c) The context matrix of the And-Or fragment based on the training data.
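The gain can be evaluated directly from the reduction counts and the context matrix. The sketch below is our own reading of the two-factor formula (using Python's convention 0**0 = 1) with illustrative counts:

```python
from math import prod

def likelihood_gain(rd, cm):
    """rd[i][j] = ||RD_i(a_ij)||; cm[e][c] = configuration/context counts.
    Evaluates the two-factor likelihood gain (0**0 is treated as 1)."""
    n = len(rd)
    total = sum(rd[0])   # ||RD||: every reduction fills each of the n slots once
    f1 = prod(c ** c for row in rd for c in row) / total ** (n * total)
    col_sums = [sum(col) for col in zip(*cm)]
    f2 = prod(s ** s for s in col_sums) / prod(c ** c for row in cm for c in row)
    return f1 * f2

# A degenerate fragment (one Or-child per slot, one context) changes nothing:
assert likelihood_gain([[4]], [[4]]) == 1.0
```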
It can be shown that the likelihood gain can be factorized as the product of two tensor/matrix coherence measures as defined in [22]. The first is the coherence of the n-gram tensor of the And-Or
fragment (which tabulates the number of times each configuration covered by the And-Or fragment
appears in the training samples, as illustrated in Figure 2(b)). The second is the coherence of the
context matrix. These two factors provide a surrogate measure of how much the training data support
the context-freeness within the And-Or fragment and the context-freeness of the And-Or fragment
against its context respectively. See the supplementary material for the derivation and discussion.
The formulation of the likelihood gain also entails the optimal probabilities of the Or-rules in the And-Or fragment:

\forall i, j \quad P(O_i \to a_{ij}) = \frac{\lVert RD_i(a_{ij}) \rVert}{\sum_{j'=1}^{m_i} \lVert RD_i(a_{ij'}) \rVert} = \frac{\lVert RD_i(a_{ij}) \rVert}{\lVert RD \rVert}
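In other words, the optimal Or-rule probabilities are just normalized reduction counts (the counts below are illustrative, not taken from the paper's example):

```python
# Optimal Or-rule probabilities are normalized reduction counts:
# P(Oi -> aij) = ||RD_i(a_ij)|| / ||RD||
rd_i = {"a11": 9, "a12": 30, "a13": 17}      # counts for one Or-node's children
total = sum(rd_i.values())
probs = {a: c / total for a, c in rd_i.items()}
print(round(probs["a12"], 3))   # 30/56 = 0.536
```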
Prior Gain. The prior probability of the grammar is determined by the grammar size. When the
And-Or fragment is added into the grammar, the size of the grammar is changed in two aspects:
first, the size of the grammar is increased by the size of the And-Or fragment; second, the size of the
grammar is decreased because of the reductions from configurations of multiple nodes to the new
And-node. Therefore, the prior gain of learning from the And-Or fragment is:
\frac{P(G_{t+1})}{P(G_t)} = e^{-\alpha (\lVert G_{t+1} \rVert - \lVert G_t \rVert)} = e^{-\alpha \left( \left( n s_a + \sum_{i=1}^{n} m_i s_o \right) - \lVert RD \rVert (n-1) s_a \right)}
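A quick numeric sketch of the prior gain (our own helper; the notation follows the text):

```python
from math import exp

def prior_gain(alpha, n, m, s_a, s_o, n_rd):
    """Prior gain of adding an And-Or fragment: the grammar grows by the
    fragment (n And-children plus sum(m) Or-children) and shrinks by the
    reductions, each of which replaces n nodes with a single And-node."""
    delta = (n * s_a + sum(m) * s_o) - n_rd * (n - 1) * s_a
    return exp(-alpha * delta)

# A fragment that is used often enough pays for its own size (gain > 1):
assert prior_gain(alpha=0.5, n=2, m=[2, 2], s_a=1.0, s_o=1.0, n_rd=10) > 1
```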
Figure 3: An illustration of the procedure of finding the best And-Or fragment. r1, r2, r3 denote different relations between patterns. (a) Collecting statistics from the training samples to construct or update the n-gram tensors of different relations (here n = 2). (b) Finding one or more sub-tensors that lead to the highest posterior gain and constructing the corresponding And-Or fragments.
Figure 4: An example video and the action annotations from the human activity dataset [23]. Each
colored bar denotes the start/end time of an occurrence of an action.
where s_a and s_o are the numbers of bits needed to encode each node on the right-hand side of an And-rule and an Or-rule respectively. It can be seen that the prior gain penalizes And-Or fragments that have a large size but only cover a small number of configurations in the training data.
In order to find the And-Or fragments with the highest posterior gain, we could construct n-gram
tensors from all the training samples for different values of n and different And-rule relations, and
within these n-gram tensors we search for sub-tensors that correspond to And-Or fragments with
the highest posterior gain. Figure 3 illustrates this procedure. In practice, we find it sufficient to
use greedy search or beam search with random restarts in identifying good And-Or fragments. See
the supplementary material for the pseudocode of the complete algorithm of grammar learning.
The algorithm runs reasonably fast: our prototype implementation can finish running within a few
minutes on a desktop with 5000 training samples each containing more than 10 atomic patterns.
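Constructing the n-gram tensors amounts to counting pattern tuples per relation. Below is a minimal sketch for n = 2 under a single "followed-by" relation (our simplification; the algorithm maintains tensors for multiple values of n and multiple relations):

```python
from collections import Counter

# Collect n-gram statistics (here n = 2, one "followed-by" relation):
# count how often each pair of adjacent patterns occurs in the samples.
# Candidate And-Or fragments are then read off high-mass sub-tensors.
samples = [["a", "b", "c"], ["a", "b"], ["b", "c"]]
bigrams = Counter((x[i], x[i + 1]) for x in samples for i in range(len(x) - 1))
print(bigrams[("a", "b")], bigrams[("b", "c")])   # 2 2
```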
4
Experiments

4.1
Learning Event Grammars
We applied our approach to learn event grammars from human activity data. The first dataset contains 61 videos of indoor activities, e.g., using a computer and making a phone call [23]. The atomic
actions and their start/end time are annotated in each video, as shown in Figure 4. Based on this
dataset, we also synthesized a more complicated second dataset by dividing each of the two most
frequent actions, sitting and standing, into three subtypes and assigning each occurrence of the two
actions randomly to one of the subtypes. This simulates the scenarios in which the actions are detected in an unsupervised way and therefore actions of the same type may be regarded as different
because of the difference in the posture or viewpoint.
We employed three different methods to apply our grammar learning approach on these two datasets.
The first method is similar to that proposed in [18]. For each frame of a video in the dataset, we
construct a binary vector that indicates which of the atomic actions are observed in this frame. In this
way, each video is represented by a sequence of vectors. Consecutive vectors that are identical are
Figure 5: An example event And-Or grammar with two types of relations (the "followed-by" relation and the "co-occurring" relation) that grounds to atomic actions.
Table 2: The experimental results (F-measure) on the event datasets. For our approach, f, c+f and cf denote the first, second and third methods respectively.

              Data 1   Data 2
ADIOS [15]    0.810    0.204
SPYZ [18]     0.756    0.582
Ours (f)      0.831    0.702
Ours (c+f)    0.768    0.624
Ours (cf)     0.767    0.813
merged. We then map each distinct vector to a unique ID and thus convert each video into a sequence
of IDs. Our learning approach is applied on the ID sequences, where each terminal node represents
an ID and each And-node specifies the temporal "followed-by" relation between its child nodes. In
the second and third methods, instead of the ID sequences, our learning approach is directly applied
to the vector sequences. Each terminal node now represents an occurrence of an atomic action. In
addition to the "followed-by" relation, an And-node may also specify the "co-occurring" relation
between its child nodes. In this way, the resulting And-Or grammar is directly grounded to the
observed atomic actions and is therefore more flexible and expressive than the grammar learned
from IDs as in the first method. Figure 5 shows such a grammar. The difference between the second
and the third method is: in the second method we require the And-nodes with the "co-occurring" relation to be learned before any And-node with the "followed-by" relation is learned, which is
equivalent to applying the first method based on a set of IDs that are also learned; on the other hand,
the third method does not restrict the order of learning of the two types of And-nodes.
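The preprocessing for the first method can be sketched as follows (a minimal illustration with our own helper names):

```python
# Method 1 preprocessing: per-frame binary action vectors are merged when
# consecutive frames are identical, then mapped to integer IDs.
def to_id_sequence(frames, vocab):
    seq = []
    for v in frames:
        if not seq or seq[-1] != v:     # merge consecutive duplicates
            seq.append(v)
    return [vocab.setdefault(tuple(v), len(vocab)) for v in seq]

vocab = {}
ids = to_id_sequence([[1, 0], [1, 0], [1, 1], [0, 1], [0, 1]], vocab)
print(ids)   # [0, 1, 2]
```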
Note that in our learning algorithm we assume that each training sample consists of a single pattern
generated from the target grammar, but here each video may contain multiple unrelated events. We
slightly modified our algorithm to accommodate this issue: right before the algorithm terminates, we
change the top-level And-nodes in the grammar to Or-nodes, which removes any temporal relation
between the learned events in each training sample and renders them independent of each other.
When parsing a new sample using the learned grammar, we employ the CYK algorithm to efficiently
identify all the subsequences that can be parsed as an event by the grammar.
We used 55 samples of each dataset as the training set and evaluated the learned grammars on the
remaining 6 samples. On each testing sample, the events identified by the learned grammars were
compared against manual annotations. We measured the purity (the percentage of the identified
event durations overlapping with the annotated event durations) and inverse purity (the percentage
of the annotated event durations overlapping with the identified event durations), and report the F-measure (the harmonic mean of purity and inverse purity). We compared our approach with two
previous approaches [15, 18], both of which can only learn from ID sequences.
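The evaluation metrics can be sketched directly from their interval definitions (our own simplification, which assumes the intervals within each list are non-overlapping):

```python
def overlap(ivs_a, ivs_b):
    """Total duration of intervals in ivs_a that overlaps intervals in ivs_b."""
    return sum(max(0, min(a1, b1) - max(a0, b0))
               for a0, a1 in ivs_a for b0, b1 in ivs_b)

def f_measure(detected, annotated):
    """Harmonic mean of purity and inverse purity over event intervals."""
    dur = lambda ivs: sum(b - a for a, b in ivs)
    purity = overlap(detected, annotated) / dur(detected)
    inv_purity = overlap(annotated, detected) / dur(annotated)
    return 2 * purity * inv_purity / (purity + inv_purity)

assert f_measure([(0, 10)], [(0, 10)]) == 1.0   # perfect detection
assert f_measure([(0, 10)], [(0, 5)]) < 1.0     # over-detection is penalized
```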
Table 2 shows the experimental results. It can be seen that our approach is competitive with the
previous approaches on the first dataset and outperforms the previous approaches on the more complicated second dataset. Among the three methods of applying our approach, the second method has
the worst performance, mostly because the restriction of learning the ?co-occurring? relation first
often leads to premature equating of different vectors. The third method leads to the best overall
performance, which implies the advantage of grounding the grammar to atomic actions and simultaneously learning different relations. Note that the third method has better performance on the more
complicated second dataset, and our analysis suggests that the division of sitting/standing into subtypes in the second dataset actually helps the third method to avoid learning erroneous compositions
of continuous sitting or standing.
4.2
Learning Image Grammars
We first tested our approach in learning image grammars from a synthetic dataset of animal face
sketches [24]. Figure 6 shows some example images from the dataset. We constructed 15 training
sets of 5 different sizes and ran our approach three times on each training set. We set the terminal
nodes to represent the atomic sketches in the images and set the relations in And-rules to represent relative positions between image patches. The hyperparameter α of our approach is fixed to 0.5.

Figure 6: Example images from the synthetic dataset.

Figure 7: The experimental results (F-measure and KL-divergence versus the number of training samples, ours vs. SZ [17]) on the synthetic image dataset.

Figure 8: Example images, example quantized images, and atomic patterns (terminal nodes) of the real dataset [17].

Table 3: The average perplexity on the testing sets from the real image experiments (lower is better).

             Ours   SZ [17]
Perplexity   67.5   129.4
We evaluated the learned grammars against the true grammar. We estimated the precision and recall
of the sets of images generated from the learned grammars versus the true grammar, from which
we computed the F-measure. We also estimated the KL-divergence of the probability distributions
defined by the learned grammars from that of the true grammar. We compared our approach with
the image grammar learning approach proposed in [17]. Figure 7 shows the experimental results. It
can be seen that our approach significantly outperforms the competing approach.
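Since a stochastic grammar defines a probability distribution over the samples it can generate, the KL-divergence above can be estimated directly on a finite support. The sketch below is a hedged illustration in Python (the smoothing constant and dict representation are our own choices, not part of the evaluation protocol):

```python
import math

def kl_divergence(p, q, eps=1e-12):
    """Estimate KL(p || q) for two distributions given as dicts mapping a
    generated sample to its probability; eps-smoothing keeps the divergence
    finite when q assigns zero probability to an outcome that p generates."""
    total = 0.0
    for x in set(p) | set(q):
        px = p.get(x, 0.0)
        if px > 0:
            total += px * math.log((px + eps) / (q.get(x, 0.0) + eps))
    return total

uniform = {"ab": 0.5, "ba": 0.5}
print(kl_divergence(uniform, uniform))  # 0.0: identical distributions
```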
We then ran our approach on a real dataset of animal faces that was used in [17]. The dataset contains
320 images of four categories of animals: bear, cat, cow and wolf. We followed the method described
in [17] to quantize the images and learn the atomic patterns, which become the terminal nodes of the
grammar. Figure 8 shows some images from the dataset, the quantization examples and the atomic
patterns learned. We again used the relative positions between image patches as the type of relations
in And-rules. Since the true grammar is unknown, we evaluated the learned grammars by measuring
their perplexity (the reciprocal of the geometric mean probability of a sample from a testing set).
We ran 10-fold cross-validation on the dataset: learning an image grammar from each training set
and then evaluating its perplexity on the testing set. Before estimating the perplexity, the probability
distribution represented by each learned grammar was smoothed to avoid zero probability on the
testing images. Table 3 shows the results of our approach and the approach from [17]. Once again
our approach significantly outperforms the competing approach.
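The perplexity criterion above can be computed in a few lines; this Python sketch is only an illustration of the definition (the reciprocal of the geometric mean probability), assuming the probabilities have already been smoothed so that none is zero:

```python
import math

def perplexity(probs):
    """Perplexity = reciprocal of the geometric mean of the probabilities
    that the learned grammar assigns to the test samples."""
    return math.exp(-sum(math.log(p) for p in probs) / len(probs))

# Four test images, each assigned probability 1/4 -> perplexity ~4.
print(perplexity([0.25, 0.25, 0.25, 0.25]))
```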
5 Conclusion
We have presented a unified formalization of stochastic And-Or grammars that is agnostic to the type
of the data being modeled, and have proposed an unsupervised approach to learning the structures
as well as the parameters of such grammars. Our approach optimizes the posterior probability of the
grammar and induces compositions and reconfigurations in a unified manner. Our experiments in
learning event grammars and image grammars show satisfactory performance of our approach.
Acknowledgments
The work is supported by grants from DARPA MSEE project FA 8650-11-1-7149, ONR MURI
N00014-10-1-0933, NSF CNS 1028381, and NSF IIS 1018751.
References
[1] S.-C. Zhu and D. Mumford, "A stochastic grammar of images," Found. Trends. Comput. Graph. Vis., vol. 2, no. 4, pp. 259–362, 2006.
[2] Y. Jin and S. Geman, "Context and hierarchy in a probabilistic image model," in CVPR, 2006.
[3] Y. Zhao and S.-C. Zhu, "Image parsing with stochastic scene grammar," in NIPS, 2011.
[4] Y. A. Ivanov and A. F. Bobick, "Recognition of visual activities and interactions by stochastic parsing," IEEE Trans. Pattern Anal. Mach. Intell., vol. 22, no. 8, pp. 852–872, 2000.
[5] M. S. Ryoo and J. K. Aggarwal, "Recognition of composite human activities through context-free grammar based representation," in CVPR, 2006.
[6] Z. Zhang, T. Tan, and K. Huang, "An extended grammar system for learning and recognizing complex visual events," IEEE Trans. Pattern Anal. Mach. Intell., vol. 33, no. 2, pp. 240–255, Feb. 2011.
[7] M. Pei, Y. Jia, and S.-C. Zhu, "Parsing video events with goal inference and intent prediction," in ICCV, 2011.
[8] C. D. Manning and H. Schütze, Foundations of Statistical Natural Language Processing. Cambridge, MA, USA: MIT Press, 1999.
[9] P. Liang, M. I. Jordan, and D. Klein, "Probabilistic grammars and hierarchical Dirichlet processes," The Handbook of Applied Bayesian Analysis, 2009.
[10] H. Poon and P. Domingos, "Sum-product networks: A new deep architecture," in Proceedings of the Twenty-Seventh Conference on Uncertainty in Artificial Intelligence (UAI), 2011.
[11] J. K. Baker, "Trainable grammars for speech recognition," in Speech Communication Papers for the 97th Meeting of the Acoustical Society of America, 1979.
[12] D. Klein and C. D. Manning, "Corpus-based induction of syntactic structure: Models of dependency and constituency," in Proceedings of ACL, 2004.
[13] S. Wang, Y. Wang, and S.-C. Zhu, "Hierarchical space tiling for scene modeling," in Computer Vision – ACCV 2012. Springer, 2013, pp. 796–810.
[14] A. Stolcke and S. M. Omohundro, "Inducing probabilistic grammars by Bayesian model merging," in ICGI, 1994, pp. 106–118.
[15] Z. Solan, D. Horn, E. Ruppin, and S. Edelman, "Unsupervised learning of natural languages," Proc. Natl. Acad. Sci., vol. 102, no. 33, pp. 11629–11634, August 2005.
[16] K. Tu and V. Honavar, "Unsupervised learning of probabilistic context-free grammar using iterative biclustering," in Proceedings of the 9th International Colloquium on Grammatical Inference (ICGI 2008), ser. LNCS 5278, 2008.
[17] Z. Si and S.-C. Zhu, "Learning and-or templates for object modeling and recognition," IEEE Trans. Pattern Anal. Mach. Intell., 2013.
[18] Z. Si, M. Pei, B. Yao, and S.-C. Zhu, "Unsupervised learning of event and-or grammar and semantics from video," in ICCV, 2011.
[19] J. F. Allen, "Towards a general theory of action and time," Artificial Intelligence, vol. 23, no. 2, pp. 123–154, 1984.
[20] V. I. Spitkovsky, H. Alshawi, D. Jurafsky, and C. D. Manning, "Viterbi training improves unsupervised dependency parsing," in Proceedings of the Fourteenth Conference on Computational Natural Language Learning, ser. CoNLL '10, 2010.
[21] K. Tu and V. Honavar, "Unambiguity regularization for unsupervised learning of probabilistic grammars," in Proceedings of the 2012 Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CoNLL 2012), 2012.
[22] S. C. Madeira and A. L. Oliveira, "Biclustering algorithms for biological data analysis: A survey," IEEE/ACM Trans. Comput. Biol. Bioinform., vol. 1, no. 1, pp. 24–45, 2004.
[23] P. Wei, N. Zheng, Y. Zhao, and S.-C. Zhu, "Concurrent action detection with structural prediction," in Proc. Intl. Conference on Computer Vision (ICCV), 2013.
[24] A. Barbu, M. Pavlovskaia, and S.-C. Zhu, "Rates for inductive learning of compositional models," in AAAI Workshop on Learning Rich Representations from Low-Level Sensors (RepLearning), 2013.
Rapid Distance-Based Outlier Detection via Sampling
Mahito Sugiyama¹   Karsten M. Borgwardt¹,²
¹ Machine Learning and Computational Biology Research Group, MPIs Tübingen, Germany
² Zentrum für Bioinformatik, Eberhard Karls Universität Tübingen, Germany
{mahito.sugiyama,karsten.borgwardt}@tuebingen.mpg.de
Abstract
Distance-based approaches to outlier detection are popular in data mining, as they do not require a model of the underlying probability distribution, which is particularly challenging for high-dimensional data. We present an empirical comparison of various approaches to distance-based outlier detection across a large number of datasets. We report the surprising observation that a simple, sampling-based scheme outperforms state-of-the-art techniques in terms of both efficiency and effectiveness. To better understand this phenomenon, we provide a theoretical analysis of why the sampling-based approach outperforms alternative methods based on k-nearest neighbor search.
1 Introduction
An outlier, which is "an observation which deviates so much from other observations as to arouse suspicions that it was generated by a different mechanism" (Hawkins [10]), appears in many real-life situations. Examples include intrusions in network traffic, credit card frauds, defective products in industry, and misdiagnosed patients. To discriminate such outliers from normal observations, machine learning and data mining have defined numerous outlier detection methods: for example, traditional model-based approaches using statistical tests, convex hull layers, or changes of variances, and more recent distance-based approaches using k-nearest neighbors [18], clusters [23], or densities [7] (for reviews, see [1, 13]).
We focus in this paper on the latter, the distance-based approaches, which define outliers as objects located far away from the remaining objects. More specifically, given a metric space (M, d), each object x ∈ M receives a real-valued outlierness score q(x) via a function q : M → R; q(x) depends on the distances between x and the other objects in the dataset. Then the top-κ objects with maximum outlierness scores are reported to be outliers. To date, this approach has been successfully applied in various situations due to its flexibility; that is, it does not require determining or fitting an underlying probability distribution, which is often difficult, in particular in high-dimensional settings. For example, LOF (Local Outlier Factor) [7] has become one of the most popular outlier detection methods; it measures the outlierness of each object by the difference of local densities between the object and its neighbors.
The main challenge, however, is its scalability since this approach potentially requires computation
of all pairwise distances between objects in a dataset. This quadratic time complexity leads to
runtime problems on massive datasets that emerge across application domains. To avoid this high
computational cost, a number of techniques have already been proposed, which can be roughly
divided into two strategies: indexing of objects such as tree-based structures [5] or projection-based
structures [9] and partial computation of the pairwise distances to compute scores only for the top-κ
outliers, first introduced by Bay and Schwabacher [4] and improved in [6, 16]. Unfortunately, both
strategies are nowadays not sufficient, as index structures are often not efficient enough for high-dimensional data [20] and the number of outliers often increases in direct proportion to the size of
the dataset, which significantly deteriorates the efficiency of partial computation techniques.
1
Here we show that a surprisingly simple and rapid sampling-based outlier detection method outperforms state-of-the-art distance-based methods in terms of both efficiency and effectiveness by
conducting an extensive empirical analysis. The proposed method behaves as follows: It takes a
small set of samples from a given set of objects, followed by measuring the outlierness of each object by the distance from the object to its nearest neighbor in the sample set. Intuitively, the sample
set is employed as a telltale set, that is, it serves as an indicator of outlierness, as outliers should
be significantly different from almost all objects by definition, including the objects in the sample
set. The time complexity is therefore linear in the number of objects, dimensions, and samples. In
addition, this method can be implemented in a one-pass manner with constant space complexity as
we only have to store the sample set, which is ideal for analyzing massive datasets.
This paper is organized as follows: In Section 2, we describe our experimental design for the empirical comparison of different outlier detection strategies. In Section 3, we review a number of
state-of-the-art outlier detection methods which we used in our experiments, including our own
proposal. We present experimental results in Section 4 and theoretically analyze them in Section 5.
2 Experimental Design
We present an extensive empirical analysis of state-of-the-art approaches for distance-based outlier
detection and of our new approach, which are introduced in Section 3. They are evaluated in terms
of both scalability and effectiveness on synthetic and real-world datasets. All parameters are set
by referring the original literature or at popular values, which are also shown in Section 3. Note
that these parameters have to be chosen by heuristics in distance-based approaches, while they still
outperform other approaches such as statistical approaches [3].
Environment. We used Ubuntu version 12.04.3 with a single 2.6 GHz AMD Opteron CPU and 512
GB of memory. All C codes were compiled with gcc 4.6.3. All experiments were performed in the
R environment, version 3.0.1.
Evaluation criterion. To evaluate the effectiveness of each method, we used the area under the
precision-recall curve (AUPRC; equivalent to the average precision), which is a typical criterion to
measure the success of outlier detection methods [1]. It takes values from 0 to 1 and 1 is the best
score, and quantifies whether the algorithm is able to retrieve outliers correctly. These values were
calculated by the R ROCR package.
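The AUPRC is equivalent to the average precision, which can be computed directly from the scores and outlier labels. The following Python sketch is only an illustration of that equivalence (the experiments use the R ROCR package):

```python
def average_precision(scores, labels):
    """Mean of the precision values attained at each true outlier when
    objects are scanned in order of decreasing outlierness score."""
    ranked = sorted(zip(scores, labels), key=lambda t: -t[0])
    hits, ap = 0, 0.0
    for rank, (_, y) in enumerate(ranked, start=1):
        if y == 1:
            hits += 1
            ap += hits / rank
    return ap / sum(labels)

# Both outliers ranked above all inliers -> perfect score.
print(average_precision([0.9, 0.8, 0.2, 0.1], [1, 1, 0, 0]))  # 1.0
```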
Datasets. We collected 14 real-world datasets from the UCI machine learning repository [2], with
a wide range of sizes and dimensions, whose properties are summarized in Table 1. Most of them
have been intensively used in the outlier detection literature. In particular, KDD1999 is one of
the most popular benchmark datasets in outlier detection, which was originally used for the KDD
Cup 1999. The task is to detect intrusions from network traffic data, and as in [22], objects whose
attribute logged in is positive were chosen as outliers. In every dataset, we first excluded all
categorical attributes and missing values since some methods cannot handle categorical attributes.
For all datasets except for KDD1999, we assume that objects from the smallest class are outliers, as
they are originally designed for classification rather than outlier detection. Three datasets Mfeat,
Isolet, and Optdigits were prepared exactly the same way as [17], where only two similar
classes were used as inliers. All datasets were normalized beforehand, that is, in each dimension,
the feature values were divided by their standard deviation [1, Chapter 12.10].
In addition, we generated two synthetic datasets (Gaussian) using exactly the same procedure
as [14, 17], of which one is high-dimensional (1000 dimensions) and the other is large (10,000,000
objects). For each dataset, inliers (non-outliers) were generated from a Gaussian mixture model with
five equally weighted processes, resulting in five clusters. The mean and the variance of each cluster
was randomly set from the Gaussian distribution N (0, 1), and 30 outliers were generated from a
uniform distribution in the range from the minimum to the maximum values of inliers.
3 Methods for Outlier Detection
In the following, we will introduce the state-of-the-art methods in distance-based outlier detection,
including our new sampling-based method. Every method is formalized as a scoring function q : M → R on a metric space (M, d), which assigns a real-valued outlierness score to each object x
in a given set of objects X. We denote by n the number of objects in X. If X is multivariate, the
number of dimensions is denoted by m. The number of samples (sample size) is denoted by s.
3.1 The kth-nearest neighbor distance
Knorr and Ng [11, 12] were the first to formalize a distance-based outlier detection scheme, in which an object x ∈ X is said to be a DB(π, δ)-outlier if |{x′ ∈ X | d(x, x′) > δ}| ≥ πn, where π and δ with π, δ ∈ R and 0 ≤ π ≤ 1 are parameters specified by the user. This means that at least a fraction π of all objects have a distance from x that is larger than δ. This definition has mainly two significant drawbacks: the difficulty of determining the distance threshold δ in practice and the lack of a ranking of outliers. To overcome these drawbacks, Ramaswamy et al. [18] proposed to measure the outlierness by the kth-nearest neighbor (kth-NN) distance. The score qkthNN(x) of an object x is defined as

qkthNN(x) := dk(x; X),

where dk(x; X) is the distance between x and its kth-NN in X. Notice that if we set π = (n − k)/n, the set of Knorr and Ng's DB(π, δ)-outliers coincides with the set {x ∈ X | qkthNN(x) ≥ δ}. We employ qkthNN(x) as a baseline for distance-based methods in our comparison.
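For illustration, qkthNN can be computed naively as follows. This is a Python sketch on one-dimensional data with our own variable names; the experiments use the C implementation of iORCA rather than this code:

```python
def kth_nn_score(x, X, k=5, dist=lambda a, b: abs(a - b)):
    """qkthNN(x) = dk(x; X): distance from x to its kth-nearest neighbor.
    Assumes x is an element of X, so ds[0] is the zero self-distance and
    ds[k] is the kth-NN distance."""
    ds = sorted(dist(x, y) for y in X)
    return ds[k]

X = [0.0, 1.0, 2.0, 3.0, 50.0]
print(kth_nn_score(50.0, X, k=3))  # 49.0: the isolated point gets a large score
print(kth_nn_score(1.0, X, k=3))   # 2.0
```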
Since the naïve computation of scores qkthNN(x) for all x requires quadratic computational cost, a number of studies investigated speed-up techniques [4, 6, 16]. We used Bhaduri's algorithm (called iORCA) [6] and implemented it in C since it is the latest technique in this branch of research. It has a parameter k to specify the kth-NN and an additional parameter κ to retrieve the top-κ objects with the largest outlierness scores. We set k = 5, which is a default setting used in the literature [4, 6, 15, 16], and set κ to be twice the number of outliers for each dataset. Note that in practice we usually do not know the exact number of outliers and have to set κ large enough.
3.2 Iterative sampling
Wu and Jermaine [21] proposed a sampling-based approach to efficiently approximate the kth-NN distance score qkthNN. For each object x ∈ X, define

qkthSp(x) := dk(x; Sx(X)),

where Sx(X) is a subset of X, which is randomly and iteratively sampled for each object x. In addition, they introduced a random variable N = |O ∩ O′| with two sets of top-κ outliers O and O′ with respect to qkthNN and qkthSp, and analyzed its expectation E(N) and the variance Var(N). The time complexity is Θ(nms). We implemented this method in C and set k = 5 and the sample size s = 20 unless stated otherwise.
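A minimal Python sketch of the iterative sampling scheme (illustrative only; variable names and the 1-D demo are ours, and the experiments use a C implementation):

```python
import random

def q_kth_sp(x, others, k=5, s=20, dist=lambda a, b: abs(a - b), rng=random):
    """Wu and Jermaine's score: kth-NN distance of x within a random
    sample Sx(X) drawn afresh for every object x (x itself excluded)."""
    sample = rng.sample(others, min(s, len(others)))
    return sorted(dist(x, y) for y in sample)[min(k, len(sample)) - 1]

rng = random.Random(0)
X = [rng.gauss(0, 1) for _ in range(200)] + [10.0]
scores = [q_kth_sp(x, [y for y in X if y is not x], k=1, s=20, rng=rng)
          for x in X]
print(scores.index(max(scores)) == len(X) - 1)  # True: the planted outlier wins
```

Note that a fresh sample is drawn inside the loop for every object, which is exactly the property that distinguishes this scheme from the one-time sampling introduced next.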
3.3 One-time sampling (our proposal)
Here we present a new sampling-based method. We randomly and independently sample a subset S(X) ⊆ X only once and define

qSp(x) := min_{x′ ∈ S(X)} d(x, x′)

for each object x ∈ X. Although this definition is closely related to Wu and Jermaine's method qkthSp in the case of k = 1, our method performs sampling only once, while their method performs sampling for each object. We empirically show that this leads to significant differences in accuracy in outlier detection (see Section 4). We also theoretically analyze this phenomenon to get a better understanding of its cause (see Section 5). The time complexity is Θ(nms) and the space complexity is Θ(ms) for sample size s, as this score can be obtained in a one-pass manner. We implemented this method in C. We set s = 20 for the comparison with other methods.
3.4 Isolation forest
Liu et al. [15] proposed a random forest-like method, called isolation forest. It uses random recursive partitions of objects, which are assumed to be m-dimensional vectors, and hence is also based on the concept of proximity. From a given set X, we construct an iTree in the following manner. First a sample set S(X) ⊆ X is chosen. Then this sample set is partitioned into two non-empty subsets S(X)L and S(X)R such that S(X)L = { x ∈ S(X) | xq < v } and S(X)R = S(X) \ S(X)L, where the split value v and the attribute index q are randomly chosen. This process is recursively applied to each subset until it becomes a singleton, resulting in a proper binary tree such that the number of nodes is 2s − 1. The outlierness of an object x is measured by its path length h(x) in the tree, and the score is normalized and averaged over t iTrees. Finally, the outlierness score qtree(x) is defined as

qtree(x) := 2^(−h̄(x)/c(s)),

where h̄(x) is the average of h(x) over the t iTrees and c(s) is defined as c(s) := 2H(s−1) − 2(s−1)/n, where H denotes the harmonic number. The overall average and worst-case time complexities are O((s + n)t log s) and O((s + n)ts). We used the official R IsolationForest package1, whose core process is implemented in C. We set t = 100 and s = 256, which is the same setting as in [15].
3.5 Local outlier factor (LOF)
While LOF [7] is often referred to as density-based rather than distance-based, we still include this method as it is also based on pairwise distances and is known to be a prominent outlier detection method. Let Nk(x) be the set of k-nearest neighbors of x. The local reachability density of x is defined as

ρ(x) := |Nk(x)| ( Σ_{x′ ∈ Nk(x)} max{ dk(x′; X), d(x, x′) } )^−1.

Then the local outlier factor (LOF) qLOF(x) is defined as the ratio of the average of the local reachability densities of the k-nearest neighbors of x to the local reachability density of x itself, that is,

qLOF(x) := ( |Nk(x)|^−1 Σ_{x′ ∈ Nk(x)} ρ(x′) ) ρ(x)^−1.

The time complexity is O(n²m), which is known to be the main disadvantage of this method. We implemented this method in C and used the commonly used setting k = 10.
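A direct O(n²) Python sketch of these two formulas on one-dimensional data (illustrative only; the experiments use our C implementation with k = 10):

```python
def lof_scores(X, k=3, dist=lambda a, b: abs(a - b)):
    """Naive LOF: full distance matrix, k-NN sets, local reachability
    densities, then the ratio defining qLOF for every object."""
    n = len(X)
    D = [[dist(a, b) for b in X] for a in X]
    N = [sorted((j for j in range(n) if j != i), key=lambda j: D[i][j])[:k]
         for i in range(n)]
    dk = [D[i][N[i][-1]] for i in range(n)]            # kth-NN distance
    lrd = [k / sum(max(dk[j], D[i][j]) for j in N[i]) for i in range(n)]
    return [sum(lrd[j] for j in N[i]) / k / lrd[i] for i in range(n)]

X = list(range(10)) + [100]
scores = lof_scores(X, k=3)
print(scores[-1] == max(scores) and scores[-1] > 5)  # True: 100 dominates
```

Points inside a uniform cluster get LOF values close to 1, while the isolated point 100 receives a far larger ratio, matching the intuition behind the density comparison.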
3.6 Angle-based outlier factor (ABOF)
Kriegel et al. [14] proposed to use angles instead of distances to measure outlierness. Let c(x, x′) be the similarity between vectors x and x′, for example, the cosine similarity. Then c(y − x, y′ − x) is correlated with the angle between the two vectors y and y′ with respect to the coordinate origin x. The insight of Kriegel et al. is that if x is an outlier, the variance of the angles between pairs of the remaining objects becomes small. Formally, for an object x ∈ X define

qABOF(x) := Var_{y,y′ ∈ X} c(y − x, y′ − x).

Note that the smaller qABOF(x), the more likely x is to be an outlier, which is in contrast to the other methods. This method was originally introduced to overcome the "curse of dimensionality" in high-dimensional data. However, Zimek et al. [24] recently showed that distance-based methods such as LOF also work if the attributes carry relevant information about the outliers. We include several high-dimensional datasets in our experiments and check whether distance-based methods work effectively there.
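The definition can be evaluated directly, albeit at cubic cost over the whole dataset. The Python sketch below uses cosine similarity on 2-D points and is illustrative only (the experiments use the FastVOA approximation described next):

```python
import math
from itertools import combinations

def sub(u, v):
    return tuple(a - b for a, b in zip(u, v))

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.hypot(*u) * math.hypot(*v))

def abof(x, X):
    """Naive qABOF(x): variance of c(y - x, y' - x) over all pairs of the
    remaining objects; a small value marks x as an outlier."""
    vals = [cosine(sub(y, x), sub(z, x))
            for y, z in combinations([p for p in X if p != x], 2)]
    mean = sum(vals) / len(vals)
    return sum((v - mean) ** 2 for v in vals) / len(vals)

X = [(0, 0), (1, 0), (0, 1), (1, 1), (20, 20)]
print(abof((20, 20), X) < abof((0, 0), X))  # True: low angular variance at the outlier
```

Seen from the distant point (20, 20), all difference vectors point back toward the cluster, so the cosines are nearly identical and the variance collapses.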
Although this method is attractive as it is parameter-free, the computational cost is cubic in n. Thus we use its near-linear approximation algorithm proposed by Pham and Pagh [17]. Their algorithm, called FastVOA, estimates the first and the second moments of the variance Var_{y,y′ ∈ X} c(y − x, y′ − x) independently using two techniques: random projections and AMS sketches. The latter is a randomized technique to estimate the second frequency moment of a data stream. The resulting time complexity is O(tn(m + log n + c1c2)), where t is the number of hyperplanes for random projections and c1, c2 are the numbers of repetitions for AMS sketches. We implemented this algorithm in C. We set t = log n, c1 = 1600, and c2 = 10 as they are shown to be empirically sufficient in [17].
3.7 One-class SVM
The one-class SVM, introduced by Schölkopf et al. [19], classifies objects into inliers and outliers by introducing a hyperplane between them. This classification can be turned into a ranking of outlierness by considering the signed distance to the separating hyperplane. That is, the further an object is located in the outlier half space, the more likely it is to be a true outlier. Let X = {x1, . . . , xn}. Formally, the score of a vector x with a feature map Φ is defined as

qSVM(x) := ρ − (w · Φ(x)),   (1)

where the weight vector w and the offset ρ are optimized by the following quadratic program:

min_{w ∈ F, ξ ∈ R^n, ρ ∈ R}  (1/2)‖w‖² + (1/(νn)) Σ_{i=1}^{n} ξi − ρ   subject to  (w · Φ(xi)) ≥ ρ − ξi,  ξi ≥ 0,

with a regularization parameter ν. The term w · Φ(x) in equation (1) can be replaced with Σ_{i=1}^{n} αi k(xi, x) using a kernel function k, where α = (α1, . . . , αn) is used in the dual problem. We tried ten different values of ν from 0 to 1 and picked the one maximizing the margin between negative and positive scores. We used a Gaussian RBF kernel and set its width parameter by the popular heuristics [8]. The R kernlab package was used, whose core process is implemented in C.

(Footnote 1: http://sourceforge.net/projects/iforest/)

Table 1: Summary of datasets. Gaussian is synthetic (marked by *) and the other datasets are collected from the UCI repository (n = number of objects, m = number of dimensions).

Dataset      n           m      # of outliers
Ionosphere   351         34     126
Arrhythmia   452         274    207
Wdbc         569         30     212
Mfeat        600         649    200
Isolet       960         617    240
Pima         768         8      268
Gaussian*    1000        1000   30
Optdigits    1688        64     554
Spambase     4601        57     1813
Statlog      6435        36     626
Skin         245057      3      50859
Pamap2       373161      51     125953
Covtype      286048      10     2747
Kdd1999      4898431     6      703067
Record       5734488     7      20887
Gaussian*    10000000    20     30

Figure 1: Average of area under the precision-recall curves (AUPRCs) over all datasets with respect to changes in the number of samples s for qSp (one-time sampling; our proposal) and qkthSp (iterative sampling by Wu and Jermaine [21]). Note that the x-axis has logarithmic scale.
4 Experimental Results

4.1 Sensitivity in sampling size and sampling scheme
We first analyze the parameter sensitivity of our method qSp with respect to changes in the sample
size s. In addition, for each sample size we compare our qSp (one-time sampling) to Wu and Jermaine's qkthSp (iterative sampling). We set k = 1 in qkthSp, hence the only difference between them
was the sampling scheme. Each method was applied to each dataset listed in Table 1 and the average
of AUPRCs (area under the precision-recall curves) in 10 trials were obtained, and these were again
averaged over all datasets. These scores with varying sample sizes are plotted in Figure 1.
Our method shows robust performance over all sample sizes from 5 to 1000 and the average AUPRC
varies by less than 2%. Interestingly, the score is maximized at a rather small sample size (s = 20)
and monotonically (slightly) decreases with increasing sample size. Moreover, for every sample
size, the one-time sampling qSp significantly outperforms the iterative sampling qkthSp (Wilcoxon
signed-rank test, ? = 0.05). We checked that this behavior is independent from dataset size.
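The two sampling schemes compared in this experiment can be sketched as follows. This is an illustrative reimplementation with k = 1 in qkthSp as above; the toy data, seed, and trial counts are assumptions for the demo, and the function names are ours.

```python
import numpy as np

def q_sp(X, s, rng):
    """One-time sampling (qSp): draw s objects once and score every object
    by the distance to its nearest sampled object."""
    sample = X[rng.choice(len(X), size=s, replace=False)]
    d = np.linalg.norm(X[:, None, :] - sample[None, :, :], axis=2)
    return d.min(axis=1)

def q_kth_sp(X, s, k, rng):
    """Iterative sampling (qkthSp, Wu and Jermaine [21]): draw a fresh sample
    per object and score it by the k-th smallest distance within that sample."""
    scores = np.empty(len(X))
    for i, x in enumerate(X):
        sample = X[rng.choice(len(X), size=s, replace=False)]
        scores[i] = np.sort(np.linalg.norm(sample - x, axis=1))[k - 1]
    return scores

rng = np.random.default_rng(0)
# 50 tight inliers plus one planted outlier (last row).
X = np.vstack([rng.normal(0.0, 0.1, size=(50, 2)), [[10.0, 10.0]]])

# Average scores over repeated runs; the planted outlier should score far higher.
sp = np.mean([q_sp(X, 5, rng) for _ in range(200)], axis=0)
kth = np.mean([q_kth_sp(X, 5, 1, rng) for _ in range(50)], axis=0)
print(sp[-1], sp[:-1].max(), kth[-1], kth[:-1].max())
```

Both schemes separate the planted outlier on this toy data; the difference studied in the paper is statistical (one shared sample versus per-object resampling) and computational (qSp samples once in total).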
4.2
Scalability and effectiveness
Next we evaluate the scalability and effectiveness of the approaches introduced in Section 3 by
systematically applying them to every dataset. Results of running time and AUPRCs are shown in
Table 2 and Table 3, respectively. As we can see, our method qSp is the fastest among all methods;
it can score more than five million objects within a few seconds. Although the time complexity of
Wu and Jermaine's qkthSp is the same as qSp, our method is empirically much faster, especially in
large datasets. The different costs of two processes, sampling once and performing nearest neighbor
Table 2: Running time (in seconds). Averages in 10 trials are shown for the four probabilistic methods qkthSp, qSp, qtree, and qABOF. "–" means that the computation did not complete within 2 months.

| Dataset    | qkthNN    | qkthSp    | qSp       | qtree     | qLOF      | qABOF     | qSVM      |
| Ionosphere | 2.00×10⁻² | 9.60×10⁻³ | 8.00×10⁻⁴ | 6.25×10⁻¹ | 2.40×10⁻² | 4.72      | 6.80×10⁻² |
| Arrhythmia | 2.56×10⁻¹ | 2.72×10⁻² | 1.52×10⁻² | 2.72      | 2.04×10⁻¹ | 6.19      | 3.88×10⁻¹ |
| Wdbc       | 7.20×10⁻² | 1.60×10⁻² | 2.00×10⁻³ | 7.32×10⁻¹ | 6.80×10⁻² | 7.86      | 9.20×10⁻² |
| Mfeat      | 1.04      | 6.00×10⁻² | 4.80×10⁻² | 8.69      | 1.02      | 8.26      | 1.90      |
| Isolet     | 4.27      | 8.68×10⁻² | 8.37×10⁻² | 9.71      | 4.61      | 1.38×10¹  | 3.60      |
| Pima       | 4.00×10⁻² | 2.04×10⁻² | 4.00×10⁻⁴ | 3.14×10⁻¹ | 9.20×10⁻² | 1.07×10¹  | 9.60×10⁻² |
| Gaussian   | 4.18      | 2.13×10⁻¹ | 1.54×10⁻¹ | 2.10×10¹  | 2.61×10¹  | 1.46×10¹  | 7.77      |
| Optdigits  | 1.04      | 7.48×10⁻² | 1.48×10⁻² | 8.65×10⁻¹ | 1.46      | 2.41×10¹  | 1.14      |
| Spambase   | 9.51      | 7.26×10⁻¹ | 3.68×10⁻² | 1.02      | 1.14×10¹  | 7.75×10¹  | 8.77      |
| Statlog    | 6.99      | 2.03×10⁻¹ | 2.80×10⁻² | 9.35×10⁻¹ | 1.68×10¹  | 1.07×10²  | 1.39×10¹  |
| Skin       | 6.82×10³  | 2.12×10¹  | 9.72×10⁻² | 3.04      | 1.38×10⁴  | 7.33×10³  | 9.44×10³  |
| Pamap2     | 9.05×10⁴  | 3.27×10¹  | 2.73      | 1.20×10¹  | 1.37×10⁵  | 1.71×10⁴  | 8.37×10⁴  |
| Covtype    | 6.87×10²  | 2.16×10¹  | 2.83×10⁻¹ | 6.15      | 3.67×10⁴  | 1.13×10⁴  | 1.69×10⁴  |
| Kdd1999    | 2.68×10⁶  | 4.40×10²  | 3.46      | 4.78×10¹  | –         | 2.40×10⁵  | –         |
| Record     | 3.62×10⁶  | 9.58×10²  | 4.11      | 8.84×10¹  | –         | 1.07×10⁶  | –         |
| Gaussian   | 3.37×10³  | 1.73×10³  | 2.13×10¹  | 3.26×10²  | –         | 1.47×10⁶  | –         |
Table 3: Area under the precision-recall curve (AUPRC). Averages±SEMs in 10 trials are shown for the four probabilistic methods. Best scores are denoted in bold. Note that the root mean square deviation (RMSD) rewards methods that are always close to the best result on each dataset.

| Dataset    | qkthNN | qkthSp      | qSp         | qtree       | qLOF  | qABOF       | qSVM  |
| Ionosphere | 0.931  | 0.762±0.007 | 0.899±0.032 | 0.871±0.002 | 0.864 | 0.740±0.022 | 0.794 |
| Arrhythmia | 0.701  | 0.674±0.008 | 0.711±0.005 | 0.681±0.004 | 0.673 | 0.697±0.005 | 0.707 |
| Wdbc       | 0.607  | 0.226±0.001 | 0.667±0.036 | 0.595±0.018 | 0.428 | 0.490±0.014 | 0.556 |
| Mfeat      | 0.217  | 0.293±0.002 | 0.245±0.031 | 0.270±0.009 | 0.369 | 0.211±0.003 | 0.257 |
| Isolet     | 0.380  | 0.175±0.001 | 0.535±0.138 | 0.328±0.011 | 0.274 | 0.520±0.034 | 0.439 |
| Pima       | 0.519  | 0.608±0.007 | 0.512±0.010 | 0.441±0.003 | 0.406 | 0.461±0.008 | 0.461 |
| Gaussian   | 1.000  | 1.000±0.000 | 1.000±0.000 | 0.934±0.036 | 0.904 | 0.994±0.005 | 1.000 |
| Optdigits  | 0.204  | 0.319±0.001 | 0.233±0.021 | 0.295±0.010 | 0.361 | 0.255±0.006 | 0.266 |
| Spambase   | 0.395  | 0.418±0.001 | 0.422±0.011 | 0.419±0.011 | 0.354 | 0.398±0.002 | 0.399 |
| Statlog    | 0.057  | 0.058±0.000 | 0.082±0.008 | 0.060±0.002 | 0.093 | 0.054±0.000 | 0.056 |
| Skin       | 0.195  | 0.146±0.000 | 0.353±0.058 | 0.242±0.003 | 0.130 | 0.258±0.006 | 0.213 |
| Pamap2     | 0.249  | 0.328±0.000 | 0.268±0.009 | 0.252±0.001 | 0.338 | 0.231±0.002 | 0.235 |
| Covtype    | 0.016  | 0.058±0.001 | 0.075±0.034 | 0.017±0.001 | 0.010 | 0.087±0.005 | 0.095 |
| Kdd1999    | 0.768  | 0.081±0.000 | 0.611±0.098 | 0.389±0.007 | –     | 0.539±0.020 | –     |
| Record     | 0.002  | 0.411±0.000 | 0.933±0.013 | 0.976±0.004 | –     | 0.658±0.106 | –     |
| Gaussian   | 1.000  | 0.999±0.000 | 1.000±0.000 | 0.890±0.022 | –     | 0.893±0.003 | –     |
| Average    | 0.453  | 0.410       | 0.534       | 0.479       | 0.400 | 0.468       | 0.421 |
| Avg. Rank  | 3.750  | 3.875       | 2.188       | 3.875       | 4.538 | 4.563       | 4.000 |
| RMSD       | 0.259  | 0.274       | 0.068       | 0.133       | 0.152 | 0.140       | 0.094 |
search versus re-sampling per object and performing kth-NN search, causes this difference. The
baseline qkthNN shows acceptable runtimes for large data only if the number of outliers is small.
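The qkthNN baseline scores each object by its exact distance to its k-th nearest neighbor over the whole dataset. A brute-force sketch is below (quadratic in n; the production baseline may add pruning or indexing, but the score definition is the same); the toy points are assumptions for illustration.

```python
import numpy as np

def q_kth_nn(X, k):
    """Exact k-th nearest-neighbor distance for every object (brute force,
    O(n^2) pairwise distances)."""
    X = np.asarray(X, dtype=float)
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    return np.sort(d, axis=1)[:, k]  # column 0 is the self-distance 0

pts = [[0.0, 0.0], [1.0, 0.0], [2.0, 0.0], [100.0, 0.0]]
print(q_kth_nn(pts, k=1))  # → [1. 1. 1. 98.]
```

The isolated point receives by far the largest score, which is the ranking both qkthNN and the sampling methods try to produce, but here at the cost of all pairwise distances.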
In terms of effectiveness, qSp shows the best performance on seven out of sixteen datasets including the high-dimensional datasets, resulting in the best average AUPRC score, which is significantly higher than every single method except for qLOF (Wilcoxon signed-rank test, α = 0.05). The method qSp also shows the best performance in terms of the average rank and RMSDs (root mean square deviations) to the best result on each dataset. Moreover, qSp is inferior to the baseline qkthNN only on three datasets. It is interesting that qtree, which also uses one-time sampling like our method, shows better performance than exhaustive methods on average. In contrast, qkthSp with iterative sampling is worst in terms of RMSD among all methods.
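The evaluation measure used throughout, AUPRC, can be computed from any outlierness score; a minimal sketch with scikit-learn, where average precision is the standard step-wise estimate of the area, and the toy labels and scores are assumptions for illustration:

```python
from sklearn.metrics import average_precision_score

y_true = [0, 0, 0, 1]          # 1 marks an outlier
scores = [0.1, 0.3, 0.2, 5.0]  # higher = more outlying, e.g. a q_Sp score
print(average_precision_score(y_true, scores))  # → 1.0 (perfect ranking)
```

Ranking the single outlier first yields the maximal AUPRC of 1.0; mixing it among the inliers lowers the value.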
Based on these observations we can conclude that (1) small sample sizes lead to the maximum
average precision for qSp ; (2) one-time sampling leads to better results than iterative sampling; (3)
one-time sampling leads to better results than exhaustive methods and is also much faster.
5 Theoretical Analysis
To understand why our new one-time sampling method qSp shows better performance than the other methods, we present a theoretical analysis to get answers to the following four questions: (1) What is the probability that qSp will correctly detect outliers? (2) Why do small sample sizes lead to better results in qSp? (3) Why is qSp superior to qkthSp? (4) Why is qSp superior to qkthNN? Here we use the notion of Knorr and Ng's DB(π, δ)-outliers [11, 12] and denote the set of DB(π, δ)-outliers by X(π; δ), that is, an object x ∈ X(π; δ) if |{ x' ∈ X | d(x, x') > δ }| ≥ πn holds. We also define X̄(π; δ) = X \ X(π; δ) and, for simplicity, we call an element in X(π; δ) an outlier and that in X̄(π; δ) an inlier unless otherwise noted. Our method requires as input only the sample size s in practice, whereas the parameters π and δ are used only in our theoretical analysis. In the following, we always assume that s ≪ n, hence the sampling process is treated as with replacement.
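The DB(π, δ) definition can be checked directly on small data; a brute-force sketch (the helper name and toy points are ours, for illustration only):

```python
import numpy as np

def db_outliers(X, pi, delta):
    """Return indices of DB(pi, delta)-outliers: objects for which at least
    pi * n objects lie at distance greater than delta."""
    X = np.asarray(X, dtype=float)
    n = len(X)
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    far_counts = (d > delta).sum(axis=1)  # d(x, x) = 0 is never counted
    return [i for i in range(n) if far_counts[i] >= pi * n]

# Three close points and one distant point: only the distant one qualifies.
pts = [[0.0, 0.0], [0.1, 0.0], [0.2, 0.0], [10.0, 0.0]]
print(db_outliers(pts, pi=0.75, delta=1.0))  # → [3]
```

With π = 0.75 and n = 4, an outlier needs at least 3 objects beyond δ = 1; only the point at distance 10 satisfies this.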
Probabilistic analysis of qSp. First we introduce a partition of inliers into subsets (clusters) using the threshold δ. A δ-partition P^δ of X̄(π; δ) is defined as a set of non-empty disjoint subsets of X̄(π; δ) such that each element (cluster) C ∈ P^δ satisfies max_{x,x'∈C} d(x, x') < δ and ⋃_{C∈P^δ} C = X̄(π; δ). Then if we focus on a cluster C ∈ P^δ, the probability of discriminating an outlier from inliers contained in C can be bounded from below. Remember that s is the number of samples.

Theorem 1 For an outlier x ∈ X(π; δ) and a cluster C ∈ P^δ, we have

$$\Pr\bigl(\forall x' \in C,\; q_{\mathrm{Sp}}(x) > q_{\mathrm{Sp}}(x')\bigr) \;\geq\; \pi^s (1 - \gamma^s) \quad \text{with } \gamma = (n - |C|)/n. \qquad (2)$$

Proof. We have the probability Pr(q_Sp(x) > δ) = π^s from the definition of outliers. Moreover, if at least one object is sampled from the cluster C, q_Sp(x') < δ holds for all x' ∈ C. Thus Pr(∀x' ∈ C, q_Sp(x') < δ) = 1 − γ^s. Inequality (2) therefore follows.
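Theorem 1 can be sanity-checked by simulation on a synthetic instance matching its assumptions (one tight cluster plus one far outlier); the data layout, seed, and trial count below are choices for this demo, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
n, s, delta = 100, 5, 1.0

# One tight inlier cluster C (99 points within delta of each other) plus a
# single far-away outlier; then pi = 0.99 and gamma = (n - |C|)/n = 0.01.
X = np.concatenate([rng.uniform(0.0, 0.1, size=99), [10.0]])
pi, gamma = 0.99, 0.01
bound = pi**s * (1 - gamma**s)  # right-hand side of Inequality (2)

hits, trials = 0, 20000
for _ in range(trials):
    # Sampling with replacement, matching the analysis assumption.
    sample = X[rng.choice(n, size=s, replace=True)]
    q = np.abs(X[:, None] - sample[None, :]).min(axis=1)  # q_Sp for all objects
    hits += q[-1] > q[:-1].max()  # outlier beats every member of C
empirical = hits / trials
print(empirical, bound)
```

On this instance the empirical frequency stays at or above the bound (and here matches it almost exactly, since γ is tiny).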
For instance, if we assume that 5% of our data are outliers and fix π to be 0.95, we have (maximum δ, mean of γ) = (10.51, 0.50), (44.25, 2.23×10⁻³), (10.93, 0.67), (37.10, 0.75), and (36.37, 0.80) on our first five datasets from Table 1 to achieve this 5% rate of outliers. These γ were obtained by greedily searching each cluster in P^δ under π = 0.95 and the respective maximum δ.
Next we consider the task of correctly discriminating an outlier from all inliers. This can be achieved if for each cluster C ∈ P^δ at least one object x ∈ C is chosen in the sampling process. Thus the lower bound can be directly derived using the multinomial distribution as follows.

Theorem 2 Let P^δ = {C₁, ..., C_l} with l clusters and p_i = |C_i| / n for each i ∈ {1, ..., l}. For every outlier x ∈ X(π; δ) and the sample size s ≥ l, we have

$$\Pr\bigl(\forall x' \in \overline{X}(\pi;\delta),\; q_{\mathrm{Sp}}(x) > q_{\mathrm{Sp}}(x')\bigr) \;\geq\; \pi^s \sum_{\forall i;\, s_i \neq 0} f(s_1, \ldots, s_l; s, p_1, \ldots, p_l),$$

where f is the probability mass function of the multinomial distribution defined as

$$f(s_1, \ldots, s_l; s, p_1, \ldots, p_l) := \Bigl( s! \Big/ \textstyle\prod_{i=1}^{l} s_i! \Bigr) \prod_{i=1}^{l} p_i^{s_i} \quad \text{with} \quad \sum_{i=1}^{l} s_i = s.$$

Furthermore, let I(π; δ) be a subset of X̄(π; δ) such that min_{x'∈I(π;δ)} d(x, x') > δ for every outlier x ∈ X(π; δ) and assume that P^δ is a δ-partition of I(π; δ) instead of all inliers X̄(π; δ). If S(X) ⊆ I(π; δ) and at least one object is sampled from each cluster C ∈ P^δ, q_Sp(x) > q_Sp(x') holds for all pairs of an outlier x and an inlier x'.
Theorem 3 Let P^δ = {C₁, ..., C_l} be a δ-partition of I(π; δ) and β = |I(π; δ)| / n, and assume that p_i = |C_i| / |I(π; δ)| for each i ∈ {1, ..., l}. For every s ≥ l,

$$\Pr\bigl(\forall x \in X(\pi;\delta),\; \forall x' \in \overline{X}(\pi;\delta),\; q_{\mathrm{Sp}}(x) > q_{\mathrm{Sp}}(x')\bigr) \;\geq\; \beta^s \sum_{\forall i;\, s_i \neq 0} f(s_1, \ldots, s_l; s, p_1, \ldots, p_l).$$

From the fact that this theorem holds for any δ-partition, we automatically have the maximum lower bound over all possible δ-partitions.

Corollary 1 Let ψ(s) = Σ_{∀i; s_i≠0} f(s₁, ..., s_l; s, p₁, ..., p_l) as given in Theorem 3. We have

$$\Pr\bigl(\forall x \in X(\pi;\delta),\; \forall x' \in \overline{X}(\pi;\delta),\; q_{\mathrm{Sp}}(x) > q_{\mathrm{Sp}}(x')\bigr) \;\geq\; \beta^s \max_{P^\delta} \psi(s). \qquad (3)$$
Let B(π; δ) be the right-hand side of Inequality (3) above. This bound is maximized for equally sized clusters when l is fixed and it shows high probability for large β. For example if β = 0.99, we have (l, optimal s, B(π; δ)) = (2, 7, 0.918), (3, 12, 0.866), and (4, 17, 0.818). It is notable that the bound B(π; δ) is independent of the actual number of outliers and inliers, which is a desirable property when analyzing large datasets. Although it is dependent on the number of clusters l, the best (minimum) l which maximizes B(π; δ) with the simplest clustering is implicitly chosen in qSp.
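The quoted (l, optimal s, B) triples can be reproduced numerically. For equally sized clusters, the multinomial sum over all s_i ≠ 0 equals an inclusion-exclusion coverage probability, which is what this sketch computes; the helper names and the parametrization B(beta, l, s) are ours.

```python
import math

def coverage(l, s):
    """P(every one of l equally likely clusters receives >= 1 of s samples),
    via inclusion-exclusion; equals the multinomial sum over all s_i != 0."""
    return sum((-1)**j * math.comb(l, j) * ((l - j) / l)**s for j in range(l + 1))

def B(beta, l, s):
    """Right-hand side of Inequality (3) for equally sized clusters."""
    return beta**s * coverage(l, s)

beta = 0.99
for l in (2, 3, 4):
    s_opt = max(range(1, 61), key=lambda s: B(beta, l, s))
    print(l, s_opt, round(B(beta, l, s_opt), 3))
```

The optimal sample sizes 7, 12, and 17 and the bound values 0.918, 0.866, and 0.818 match the triples quoted in the text.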
Theoretical support for small sample sizes. Let g(s) = π^s (1 − γ^s), which is the right-hand side of Inequality (2). From the differentiation dg/ds, we can see that this function is maximized at

$$s = \log_\gamma \bigl( \log \pi \big/ (\log \pi + \log \gamma) \bigr),$$

with the natural assumption 0 < γ < π < 1, and this optimal sample size s is small for large π and small γ; for example, s = 6 for (π, γ) = (0.99, 0.5) and s = 24 for (π, γ) = (0.999, 0.8). Moreover, as we already saw above, the bound B(π; δ) is also maximized at such small sample sizes for large β. This could be the reason why qSp works well for small sample sizes, as these are common values for π, γ, and β in outlier detection.
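The closed-form maximizer can be evaluated directly for the quoted pairs (the function name is ours):

```python
import math

def optimal_s(pi, gamma):
    """Maximizer of g(s) = pi**s * (1 - gamma**s), from setting dg/ds = 0."""
    return math.log(math.log(pi) / (math.log(pi) + math.log(gamma)), gamma)

print(optimal_s(0.99, 0.5))   # ~6.1, i.e. s = 6 in the text
print(optimal_s(0.999, 0.8))  # ~24.3, i.e. s = 24 in the text
```

Rounding the real-valued maximizer gives exactly the sample sizes quoted above.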
Comparison with qkthSp. Define Z(x, x') := Pr(q_kthSp(x) > q_kthSp(x')) for the iterative sampling method qkthSp. Since we repeat sampling for each object in qkthSp, the probability Z(x, x') for each x' ∈ X̄(π; δ) is independent with respect to a fixed x ∈ X(π; δ). We therefore have

$$\Pr\bigl(\forall x \in X(\pi;\delta),\; \forall x' \in \overline{X}(\pi;\delta),\; q_{\mathrm{kthSp}}(x) > q_{\mathrm{kthSp}}(x')\bigr) \;\leq\; \min_{x \in X(\pi;\delta)} \;\prod_{x' \in \overline{X}(\pi;\delta)} Z(x, x').$$

Although Z(x, x') is typically close to 1 in outlier detection, the overall probability rapidly decreases if n is large. Thus the performance suffers on large datasets. In contrast, our one-time sampling qSp does not have this independence, resulting in our results (Theorems 1, 2, and 3, and Corollary 1) instead of this upper bound, which often lead to higher probability. This fact might be the reason why qkthSp empirically performs significantly worse than qSp and shows the worst RMSD.
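The decay of a product of near-one terms is easy to make concrete; the per-inlier value Z = 0.9999 below is an assumption for illustration, with n in the range of the larger datasets in Table 1.

```python
# With a fixed per-inlier probability Z, the product bound behaves like Z**n,
# which collapses once n reaches the dataset sizes used in the experiments.
Z = 0.9999
for n in (1_000, 100_000, 1_000_000):
    print(n, Z**n)
```

Even at Z = 0.9999 per inlier, the product is still about 0.9 at n = 1000 but is astronomically small by n = 10⁶, which illustrates why the upper bound for qkthSp becomes vacuous on large datasets.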
Comparison with qkthNN. Finally, let us consider the situation in which there exists the set of "true" outliers O ⊆ X given by an oracle. Let Θ = {k ∈ ℕ | q_kthNN(x) > q_kthNN(x') for all x ∈ O and x' ∈ X \ O}, the set of ks with which we can detect all outliers, and assume that Θ ≠ ∅. Then

$$\Pr\bigl(\forall x \in O,\; \forall x' \in X \setminus O,\; q_{\mathrm{Sp}}(x) > q_{\mathrm{Sp}}(x')\bigr) \;\geq\; \max_{k \in \Theta,\; \delta \in \Delta(k)} B(\pi; \delta)$$

with Δ(k) = {δ ∈ ℝ | X(π; δ) = O} if we set π = (n − k)/n. Notice that δ is determined from π (i.e. k) and O. Thus both k and δ are implicitly optimized in qSp. In contrast, in qkthNN the number k is specified by the user. For example, if Θ is small, it is hardly possible to choose k ∈ Θ without any prior knowledge, resulting in overlooking some outliers, while qSp always has the possibility to detect them without knowing Θ if I(π; δ) is non-empty for some δ. This difference in detection ability could be a reason why qSp significantly outperforms qkthNN on average.
6 Conclusion
In this study, we have performed an extensive set of experiments to compare current distance-based
outlier detection methods. We have observed that a surprisingly simple sampling-based approach,
which we have newly proposed here, outperforms other state-of-the-art distance-based methods.
Since the approach reached its best performance with small sample sizes, it achieves dramatic speedups compared to exhaustive methods and is faster than all state-of-the-art methods for distance-based
outlier detection. We have also presented a theoretical analysis to understand why such a simple
strategy works well and outperforms the popular approach based on kth-NN distances.
To summarize, our contribution is not only to overcome the scalability issue of the distance-based
approach to outlier detection using the sampling strategy but also, to the best of our knowledge,
to give the first thorough experimental comparison of a broad range of recently proposed distance-based outlier detection methods. We are optimistic that these results will contribute to the further
improvement of outlier detection techniques.
Acknowledgments. M.S. is funded by the Alexander von Humboldt Foundation. The research of
Professor Dr. Karsten Borgwardt was supported by the Alfried Krupp Prize for Young University
Teachers of the Alfried Krupp von Bohlen und Halbach-Stiftung.
8
References
[1] Aggarwal, C. C. Outlier Analysis. Springer, 2013.
[2] Bache, K. and Lichman, M. UCI machine learning repository, 2013.
[3] Bakar, Z. A., Mohemad, R., Ahmad, A., and Deris, M. M. A comparative study for outlier detection techniques in data mining. In Proceedings of IEEE International Conference on Cybernetics and Intelligent Systems, 1–6, 2006.
[4] Bay, S. D. and Schwabacher, M. Mining distance-based outliers in near linear time with randomization and a simple pruning rule. In Proceedings of the 9th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 29–38, 2003.
[5] Berchtold, S., Keim, D. A., and Kriegel, H.-P. The X-tree: An index structure for high-dimensional data. In Proceedings of the 22nd International Conference on Very Large Data Bases, 28–39, 1996.
[6] Bhaduri, K., Matthews, B. L., and Giannella, C. R. Algorithms for speeding up distance-based outlier detection. In Proceedings of the 17th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, 859–867, 2011.
[7] Breunig, M. M., Kriegel, H.-P., Ng, R. T., and Sander, J. LOF: Identifying density-based local outliers. In Proceedings of the ACM SIGMOD International Conference on Management of Data, 93–104, 2000.
[8] Caputo, B., Sim, K., Furesjo, F., and Smola, A. Appearance-based object recognition using SVMs: Which kernel should I use? In Proceedings of NIPS Workshop on Statistical Methods for Computational Experiments in Visual Processing and Computer Vision, 2002.
[9] de Vries, T., Chawla, S., and Houle, M. E. Density-preserving projections for large-scale local anomaly detection. Knowledge and Information Systems, 32(1):25–52, 2012.
[10] Hawkins, D. Identification of Outliers. Chapman and Hall, 1980.
[11] Knorr, E. M. and Ng, R. T. Algorithms for mining distance-based outliers in large datasets. In Proceedings of the 24th International Conference on Very Large Data Bases, 392–403, 1998.
[12] Knorr, E. M., Ng, R. T., and Tucakov, V. Distance-based outliers: algorithms and applications. The VLDB Journal, 8(3):237–253, 2000.
[13] Kriegel, H.-P., Kröger, P., and Zimek, A. Outlier detection techniques. Tutorial at 16th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, 2010.
[14] Kriegel, H.-P., Schubert, M., and Zimek, A. Angle-based outlier detection in high-dimensional data. In Proceedings of the 14th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 444–452, 2008.
[15] Liu, F. T., Ting, K. M., and Zhou, Z. H. Isolation-based anomaly detection. ACM Transactions on Knowledge Discovery from Data, 6(1):3:1–3:39, 2012.
[16] Orair, G. H., Teixeira, C. H. C., Wang, Y., Meira Jr., W., and Parthasarathy, S. Distance-based outlier detection: consolidation and renewed bearing. PVLDB, 3(2):1469–1480, 2010.
[17] Pham, N. and Pagh, R. A near-linear time approximation algorithm for angle-based outlier detection in high-dimensional data. In Proceedings of the 18th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, 877–885, 2012.
[18] Ramaswamy, S., Rastogi, R., and Shim, K. Efficient algorithms for mining outliers from large data sets. In Proceedings of the ACM SIGMOD International Conference on Management of Data, 427–438, 2000.
[19] Schölkopf, B., Platt, J. C., Shawe-Taylor, J., Smola, A. J., and Williamson, R. C. Estimating the support of a high-dimensional distribution. Neural Computation, 13(7):1443–1471, 2001.
[20] Weber, R., Schek, H.-J., and Blott, S. A quantitative analysis and performance study for similarity-search methods in high-dimensional spaces. In Proceedings of the International Conference on Very Large Data Bases, 194–205, 1998.
[21] Wu, M. and Jermaine, C. Outlier detection by sampling with accuracy guarantees. In Proceedings of the 12th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 767–772, 2006.
[22] Yamanishi, K., Takeuchi, J., Williams, G., and Milne, P. On-line unsupervised outlier detection using finite mixtures with discounting learning algorithms. Data Mining and Knowledge Discovery, 8(3):275–300, 2004.
[23] Yu, D., Sheikholeslami, G., and Zhang, A. FindOut: Finding outliers in very large datasets. Knowledge and Information Systems, 4(4):387–412, 2002.
[24] Zimek, A., Schubert, E., and Kriegel, H.-P. A survey on unsupervised outlier detection in high-dimensional numerical data. Statistical Analysis and Data Mining, 5(5):363–387, 2012.
One-shot learning by inverting a compositional causal process
Ruslan Salakhutdinov
Dept. of Statistics and Computer Science
University of Toronto
[email protected]
Brenden M. Lake
Dept. of Brain and Cognitive Sciences
MIT
[email protected]
Joshua B. Tenenbaum
Dept. of Brain and Cognitive Sciences
MIT
[email protected]
Abstract
People can learn a new visual class from just one example, yet machine learning algorithms typically require hundreds or thousands of examples to tackle the
same problems. Here we present a Hierarchical Bayesian model based on compositionality and causality that can learn a wide range of natural (although simple) visual concepts, generalizing in human-like ways from just one image. We
evaluated performance on a challenging one-shot classification task, where our
model achieved a human-level error rate while substantially outperforming two
deep learning models. We also tested the model on another conceptual task, generating new examples, by using a "visual Turing test" to show that our model
produces human-like performance.
1 Introduction
People can acquire a new concept from only the barest of experience, just one or a handful of
examples in a high-dimensional space of raw perceptual input. Although machine learning has
tackled some of the same classification and recognition problems that people solve so effortlessly,
the standard algorithms require hundreds or thousands of examples to reach good performance.
While the standard MNIST benchmark dataset for digit recognition has 6000 training examples per
class [19], people can classify new images of a foreign handwritten character from just one example
(Figure 1b) [23, 16, 17]. Similarly, while classifiers are generally trained on hundreds of images per
class, using benchmark datasets such as ImageNet [4] and CIFAR-10/100 [14], people can learn a
Figure 1: Can you learn a new concept from just one example? (a & b) Where are the other examples of the
concept shown in red? Answers for b) are row 4 column 3 (left) and row 2 column 4 (right). c) The learned
concepts also support many other abilities such as generating examples and parsing.
Figure 2: Four alphabets from Omniglot, each with five characters drawn by four different people.
new visual object from just one example (e.g., a "Segway" in Figure 1a). These new larger datasets
have developed along with larger and "deeper" model architectures, and while performance has steadily (and even spectacularly [15]) improved in this big data setting, it is unknown how this progress translates to the "one-shot" setting that is a hallmark of human learning [3, 22, 28].
Additionally, while classification has received most of the attention in machine learning, people
can generalize in a variety of other ways after learning a new concept. Equipped with the concept
"Segway" or a new handwritten character (Figure 1c), people can produce new examples, parse an
object into its critical parts, and fill in a missing part of an image. While this flexibility highlights the
richness of people's concepts, suggesting they are much more than discriminative features or rules,
there are reasons to suspect that such sophisticated concepts would be difficult if not impossible
to learn from very sparse data. Theoretical analyses of learning express a tradeoff between the
complexity of the representation (or the size of its hypothesis space) and the number of examples
needed to reach some measure of "good generalization" (e.g., the bias/variance dilemma [8]). Given
that people seem to succeed at both sides of the tradeoff, a central challenge is to explain this
remarkable ability: What types of representations can be learned from just one or a few examples,
and how can these representations support such flexible generalizations?
To address these questions, our work here offers two contributions as initial steps. First, we introduce
a new set of one-shot learning problems for which humans and machines can be compared side-by-side, and second, we introduce a new algorithm that does substantially better on these tasks than
current algorithms. We selected simple visual concepts from the domain of handwritten characters,
which offers a large number of novel, high-dimensional, and cognitively natural stimuli (Figure
2). These characters are significantly more complex than the simple artificial stimuli most often
modeled in psychological studies of concept learning (e.g., [6, 13]), yet they remain simple enough
to hope that a computational model could see all the structure that people do, unlike domains such
as natural scenes. We used a dataset we collected called "Omniglot" that was designed for studying learning from a few examples [17, 26]. While similar in spirit to MNIST, rather than having 10 characters with 6000 examples each, it has over 1600 characters with 20 examples each, making it more like the "transpose" of MNIST. These characters were selected from 50 different alphabets on
www.omniglot.com, which includes scripts from natural languages (e.g., Hebrew, Korean, Greek)
and artificial scripts (e.g., Futurama and ULOG) invented for purposes like TV shows or video
games. Since it was produced on Amazon's Mechanical Turk, each image is paired with a movie
([x,y,time] coordinates) showing how that drawing was produced.
In addition to introducing new one-shot learning challenge problems, this paper also introduces
Hierarchical Bayesian Program Learning (HBPL), a model that exploits the principles of compositionality and causality to learn a wide range of simple visual concepts from just a single example. We
compared the model with people and other competitive computational models for character recognition, including Deep Boltzmann Machines [25] and their Hierarchical Deep extension for learning
with very few examples [26]. We find that HBPL classifies new examples with near human-level
accuracy, substantially beating the competing models. We also tested the model on generating new
exemplars, another natural form of generalization, using a ?visual Turing test? to evaluate performance. In this test, both people and the model performed the same task side by side, and then other
human participants judged which result was from a person and which was from a machine.
2 Hierarchical Bayesian Program Learning
We introduce a new computational approach called Hierarchical Bayesian Program Learning
(HBPL) that utilizes the principles of compositionality and causality to build a probabilistic generative model of handwritten characters. It is compositional because characters are represented
as stochastic motor programs where primitive structure is shared and re-used across characters at
multiple levels, including strokes and sub-strokes. Given the raw pixels, the model searches for a
Figure 3: An illustration of the HBPL model generating two character types (left and right), where the dotted line separates the type-level from the token-level variables. Legend: number of strokes κ, relations R, primitive id z (color-coded to highlight sharing), control points x (open circles), scale y, start locations L, trajectories T, transformation A, noise ε and σ_b, and image I.
"structural description" to explain the image by freely combining these elementary parts and their spatial relations. Unlike classic structural description models [27, 2], HBPL also reflects abstract causal structure about how characters are actually produced. This type of causal representation is psychologically plausible, and it has been previously theorized to explain both behavioral and neuro-imaging data regarding human character perception and learning (e.g., [7, 1, 21, 11, 12, 17]). As in most previous "analysis by synthesis" models of characters, strokes are not modeled at the level of muscle movements, so that they are abstract enough to be completed by a hand, a foot, or an airplane writing in the sky. But HBPL also learns a significantly more complex representation than earlier models, which used only one stroke (unless a second was added manually) [24, 10] or received on-line input data [9], sidestepping the challenging parsing problem needed to interpret complex characters.
The model distinguishes between character types (an "A", "B", etc.) and tokens (an "A" drawn by a particular person), where types provide an abstract structural specification for generating different tokens. The joint distribution on types ψ, tokens θ^(m), and binary images I^(m) is given as follows:

P(ψ, θ^(1), ..., θ^(M), I^(1), ..., I^(M)) = P(ψ) ∏_{m=1}^{M} P(I^(m) | θ^(m)) P(θ^(m) | ψ).   (1)
Pseudocode to generate from this distribution is shown in the Supporting Information (Section SI-1).
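The type/token factorization of Eq. 1 can be sketched as a two-stage ancestral sampler. This is a minimal illustration, not the paper's implementation; `sample_type`, `sample_token`, and their placeholder distributions are invented for the sketch:

```python
import random

random.seed(0)

def sample_type():
    """Sample type-level variables psi ~ P(psi): a number of strokes
    kappa plus per-stroke structure (a stand-in for {kappa, S, R})."""
    kappa = random.randint(1, 4)  # placeholder for the empirical P(kappa)
    return {"kappa": kappa, "strokes": [f"S{i}" for i in range(kappa)]}

def sample_token(psi):
    """Sample token-level variables theta^(m) ~ P(theta^(m) | psi):
    a noisy copy of the shared type-level structure."""
    return {"strokes": list(psi["strokes"]), "noise": random.gauss(0.0, 1.0)}

def sample_images(psi, M):
    """Draw M tokens and (placeholder) images, mirroring
    P(psi) * prod_m P(I^(m) | theta^(m)) P(theta^(m) | psi)."""
    tokens = [sample_token(psi) for _ in range(M)]
    images = [f"I({m})" for m in range(M)]  # rendering stands in for P(I | theta)
    return tokens, images

psi = sample_type()
tokens, images = sample_images(psi, M=3)
```

Each token shares the type-level stroke structure but carries its own token-level noise, which is the key coupling that Eq. 1 expresses.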
2.1 Generating a character type
A character type ψ = {κ, S, R} is defined by a set of κ strokes S = {S_1, ..., S_κ} and spatial relations R = {R_1, ..., R_κ} between strokes. The joint distribution can be written as

P(ψ) = P(κ) ∏_{i=1}^{κ} P(S_i) P(R_i | S_1, ..., S_{i−1}).   (2)
The number of strokes is sampled from a multinomial P(κ) estimated from the empirical frequencies (Figure 4b), and the other conditional distributions are defined in the sections below. All hyperparameters, including the library of primitives (top of Figure 3), were learned from a large "background set" of character drawings, as described in Sections 2.3 and SI-4.
Strokes. Each stroke is initiated by pressing the pen down and terminated by lifting the pen up. In between, a stroke is a motor routine composed of simple movements called sub-strokes S_i = {s_i1, ..., s_in_i} (colored curves in Figure 3), where sub-strokes are separated by brief pauses of the pen. Each sub-stroke s_ij is modeled as a uniform cubic b-spline, which can be decomposed into three variables s_ij = {z_ij, x_ij, y_ij} with joint distribution

P(S_i) = P(z_i) ∏_{j=1}^{n_i} P(x_ij | z_ij) P(y_ij | z_ij).

The discrete class z_ij ∈ ℕ is an index into the library of primitive motor elements (top of Figure 3), and its distribution

P(z_i) = P(z_i1) ∏_{j=2}^{n_i} P(z_ij | z_i(j−1))

is a first-order Markov process that adds sub-strokes at each step until a special "stop" state is sampled, which ends the stroke. The five control points x_ij ∈ ℝ^10 (small open circles in Figure 3) are sampled from a Gaussian P(x_ij | z_ij) = N(μ_{z_ij}, Σ_{z_ij}), but they live in an abstract space not yet embedded in the image frame. The type-level scale y_ij of this space, relative to the image frame, is sampled from P(y_ij | z_ij) = Gamma(α_{z_ij}, β_{z_ij}).
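The first-order Markov process over sub-stroke ids, with its special "stop" state, can be sketched as follows. The two-primitive library here is a toy stand-in for the learned library of 1000 primitives, and the function names are illustrative:

```python
import random

STOP = -1  # special "stop" state that ends the stroke

def sample_substroke_ids(start_probs, trans_probs, rng=random.Random(0)):
    """Sample z_i = (z_i1, ..., z_in_i) from the first-order Markov process
    P(z_i) = P(z_i1) * prod_j P(z_ij | z_i(j-1)), stopping when STOP is drawn.
    start_probs: {id: prob}; trans_probs: {id: {id or STOP: prob}}."""
    def draw(dist):
        r, acc = rng.random(), 0.0
        for state, p in dist.items():
            acc += p
            if r <= acc:
                return state
        return state  # guard against floating-point shortfall

    z = [draw(start_probs)]
    while True:
        nxt = draw(trans_probs[z[-1]])
        if nxt == STOP:
            return z
        z.append(nxt)

# Toy library: primitive 0 tends to continue, primitive 1 tends to stop.
start = {0: 0.5, 1: 0.5}
trans = {0: {0: 0.3, 1: 0.3, STOP: 0.4}, 1: {0: 0.1, 1: 0.1, STOP: 0.8}}
z = sample_substroke_ids(start, trans)
```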
Relations. The spatial relation R_i specifies how the beginning of stroke S_i connects to the previous strokes {S_1, ..., S_{i−1}}. The distribution P(R_i | S_1, ..., S_{i−1}) = P(R_i | z_1, ..., z_{i−1}), since it only depends on the number of sub-strokes in each stroke. Relations can come in four types with probabilities θ_R, and each type has different sub-variables and dimensionalities:

• Independent relations, R_i = {J_i, L_i}, where the position of stroke i does not depend on previous strokes. The variable J_i ∈ ℕ is drawn from P(J_i), a multinomial over a 2D image grid that depends on the index i (Figure 4c). Since the position L_i ∈ ℝ^2 has to be real-valued, P(L_i | J_i) is then sampled uniformly at random from within the image cell J_i.

• Start or End relations, R_i = {u_i}, where stroke i starts at either the beginning or end of a previous stroke u_i, sampled uniformly at random from u_i ∈ {1, ..., i − 1}.

• Along relations, R_i = {u_i, v_i, τ_i}, where stroke i begins along previous stroke u_i ∈ {1, ..., i − 1} at sub-stroke v_i ∈ {1, ..., n_{u_i}} at type-level spline coordinate τ_i ∈ ℝ, each sampled uniformly at random.
2.2 Generating a character token
The token-level variables, θ^(m) = {L^(m), x^(m), y^(m), R^(m), A^(m), σ_b^(m), ε^(m)}, are distributed as

P(θ^(m) | ψ) = P(L^(m) | θ^(m)_{\L^(m)}, ψ) ∏_i P(R_i^(m) | R_i) P(y_i^(m) | y_i) P(x_i^(m) | x_i) P(A^(m), σ_b^(m), ε^(m)),   (3)
with details below. As before, Sections 2.3 and SI-4 describe how the hyperparameters were learned.
Pen trajectories. A stroke trajectory T_i^(m) (Figure 3) is a sequence of points in the image plane that represents the path of the pen. Each trajectory T_i^(m) = f(L_i^(m), x_i^(m), y_i^(m)) is a deterministic function of a starting location L_i^(m) ∈ ℝ^2, token-level control points x_i^(m) ∈ ℝ^10, and token-level scale y_i^(m) ∈ ℝ. The control points and scale are noisy versions of their type-level counterparts, P(x_ij^(m) | x_ij) = N(x_ij, σ_x^2 I) and P(y_ij^(m) | y_ij) ∝ N(y_ij, σ_y^2), where the scale is truncated below 0. To construct the trajectory T_i^(m) (see illustration in Figure 3), the spline defined by the scaled control points y_1^(m) x_1^(m) ∈ ℝ^10 is evaluated to form a trajectory,^1 which is shifted in the image plane to begin at L_i^(m). Next, the second spline y_2^(m) x_2^(m) is evaluated and placed to begin at the end of the previous sub-stroke's trajectory, and so on until all sub-strokes are placed.
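A minimal sketch of this trajectory construction: each sub-stroke's control points are scaled, evaluated as a curve, and shifted to begin where the previous sub-stroke ended. A linear interpolant stands in for the uniform cubic b-spline, and the evaluation count follows the rule in footnote 1; all function names here are illustrative:

```python
import numpy as np

def eval_spline(ctrl, n_eval):
    """Placeholder curve evaluation: linearly interpolate the control
    points (the paper uses uniform cubic b-splines)."""
    t = np.linspace(0, len(ctrl) - 1, n_eval)
    i = np.minimum(t.astype(int), len(ctrl) - 2)
    w = (t - i)[:, None]
    return (1 - w) * ctrl[i] + w * ctrl[i + 1]

def n_evals(pts):
    """~2 evaluations per 3 pixels of curve length, minimum 10 (footnote 1)."""
    length = np.sum(np.linalg.norm(np.diff(pts, axis=0), axis=1))
    return max(10, int(round(2 * length / 3)))

def build_trajectory(L, substrokes):
    """Concatenate sub-stroke curves: the first begins at L, and each
    subsequent curve begins where the previous one ended."""
    traj, start = [], np.asarray(L, dtype=float)
    for y, x in substrokes:  # token-level scale y, control points x
        pts = y * np.asarray(x, dtype=float)
        pts = eval_spline(pts, n_evals(pts))
        pts = pts - pts[0] + start  # shift to begin at the current start
        traj.append(pts)
        start = pts[-1]
    return np.vstack(traj)

T = build_trajectory([5.0, 5.0], [(1.0, [[0, 0], [3, 0], [6, 0]]),
                                  (2.0, [[0, 0], [0, 2]])])
```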
Token-level relations must be exactly equal to their type-level counterparts, P(R_i^(m) | R_i) = δ(R_i^(m) − R_i), except for the "along" relation, which allows for token-level variability in the attachment along the spline using a truncated Gaussian P(τ_i^(m) | τ_i) ∝ N(τ_i, σ_τ^2). Given the pen trajectories of the previous strokes, the start position L_i^(m) is sampled from P(L_i^(m) | R_i^(m), T_1^(m), ..., T_{i−1}^(m)) = N(g(R_i^(m), T_1^(m), ..., T_{i−1}^(m)), Σ_L), where g(·) = L_i^(m) when R_i^(m) is independent (Section 2.1), g(·) = end(T_{u_i}^(m)) or g(·) = start(T_{u_i}^(m)) when R_i^(m) is start or end, and g(·) is the proper spline evaluation when R_i^(m) is along.
^1 The number of spline evaluations is computed to be approximately 2 points for every 3 pixels of distance along the spline (with a minimum of 10 evaluations).
Figure 4: Learned hyperparameters. a) A subset of the library of motor primitives, where the top row shows the most common ones. The first control point (circle) is filled. b) Empirical distribution over the number of strokes. c) Stroke start positions: the heatmaps show how the starting point differs by stroke number.
Image. An image transformation A^(m) ∈ ℝ^4 is sampled from P(A^(m)) = N([1, 1, 0, 0], Σ_A), where the first two elements control a global re-scaling and the second two control a global translation of the center of mass of T^(m). The transformed trajectories can then be rendered as a 105x105 grayscale image, using an ink model adapted from [10] (see Section SI-2). This grayscale image is then perturbed by two noise processes, which make the gradient more robust during optimization and encourage partial solutions during classification. These processes include convolution with a Gaussian filter with standard deviation σ_b^(m) and pixel flipping with probability ε^(m), where the amount of noise σ_b^(m) and ε^(m) are drawn uniformly from a pre-specified range (Section SI-2). The grayscale pixels then parameterize 105x105 independent Bernoulli distributions, completing the full model of binary images P(I^(m) | θ^(m)) = P(I^(m) | T^(m), A^(m), σ_b^(m), ε^(m)).
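The two token-level noise processes can be sketched as follows, assuming a small separable Gaussian blur implemented directly in NumPy; the actual ink model and filter parameters are described in Section SI-2, and the helper names here are invented for the sketch:

```python
import numpy as np

def gaussian_kernel1d(sigma, radius=3):
    """Normalized 1D Gaussian kernel for separable blurring."""
    x = np.arange(-radius, radius + 1, dtype=float)
    k = np.exp(-0.5 * (x / sigma) ** 2)
    return k / k.sum()

def blur_and_flip(gray, sigma_b, eps):
    """Perturb a grayscale ink image: convolve with a Gaussian filter of
    std sigma_b, then mix in pixel flipping with probability eps, giving
    the per-pixel Bernoulli parameters for P(I | theta)."""
    k = gaussian_kernel1d(sigma_b)
    blurred = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, gray)
    blurred = np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, blurred)
    # with probability eps a pixel flips: p = (1 - eps)*blurred + eps*(1 - blurred)
    return (1 - eps) * blurred + eps * (1 - blurred)

def log_likelihood(binary_img, probs):
    """log P(I^(m) | theta^(m)) under independent Bernoulli pixels."""
    p = np.clip(probs, 1e-6, 1 - 1e-6)
    return float(np.sum(binary_img * np.log(p) + (1 - binary_img) * np.log(1 - p)))

gray = np.zeros((8, 8))
gray[3:5, 3:5] = 1.0  # a tiny square of ink
probs = blur_and_flip(gray, sigma_b=0.8, eps=0.01)
ll = log_likelihood((gray > 0.5).astype(float), probs)
```

The pixel-flip mixture keeps every Bernoulli parameter strictly inside (0, 1), which is what makes partial parses receive finite likelihood during search.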
2.3 Learning high-level knowledge of motor programs
The Omniglot dataset was randomly split into a 30-alphabet "background" set and a 20-alphabet "evaluation" set, constrained such that the background set included the six most common alphabets as determined by Google hits. Background images, paired with their motor data, were used to learn the hyperparameters of the HBPL model, including a set of 1000 primitive motor elements (Figure 4a) and position models for a drawing's first, second, third stroke, and so on (Figure 4c). Wherever possible, cross-validation (within the background set) was used to decide issues of model complexity within the conditional probability distributions of HBPL. Details are provided in Section SI-4 for learning the models of primitives, positions, relations, token variability, and image transformations.
2.4 Inference
Posterior inference in this model is very challenging, since parsing an image I^(m) requires exploring a large combinatorial space of different numbers and types of strokes, relations, and sub-strokes. We developed an algorithm for finding K high-probability parses, ψ^[1], θ^(m)[1], ..., ψ^[K], θ^(m)[K], which are the most promising candidates proposed by a fast, bottom-up image analysis, shown in Figure 5a and detailed in Section SI-5. These parses approximate the posterior with a discrete distribution,

P(ψ, θ^(m) | I^(m)) ≈ ∑_{i=1}^{K} w_i δ(θ^(m) − θ^(m)[i]) δ(ψ − ψ^[i]),   (4)

where each weight w_i is proportional to the parse score, marginalizing over the shape variables x,

w_i ∝ w̃_i = P(ψ^[i]_{\x}, θ^(m)[i], I^(m)),   (5)

and constrained such that ∑_i w_i = 1. Rather than using just a point estimate for each parse, the approximation can be improved by incorporating some of the local variance around the parse. The token-level variables θ^(m), which closely track the image, allow for little variability, and it is inexpensive to draw conditional samples from the type-level P(ψ | θ^(m)[i], I^(m)) = P(ψ | θ^(m)[i]), since doing so does not require evaluating the likelihood of the image. Therefore, just the local variance around the type level is estimated with the token level fixed. Metropolis Hastings is run to produce N samples (Section SI-5.5) for each parse θ^(m)[i], denoted ψ^[i1], ..., ψ^[iN], giving the improved approximation

P(ψ, θ^(m) | I^(m)) ≈ Q(ψ, θ^(m), I^(m)) = ∑_{i=1}^{K} w_i δ(θ^(m) − θ^(m)[i]) (1/N) ∑_{j=1}^{N} δ(ψ − ψ^[ij]).   (6)
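Turning the unnormalized parse scores w̃_i of Eq. 5 into the weights w_i of Eq. 4 is a standard log-sum-exp computation, since the scores are only available in log space. A minimal sketch (the example log scores are made up):

```python
import math

def normalize_parse_weights(log_scores):
    """Convert unnormalized log parse scores log w~_i (Eq. 5) into weights
    w_i with sum_i w_i = 1, using the log-sum-exp trick for stability."""
    m = max(log_scores)
    exps = [math.exp(s - m) for s in log_scores]
    z = sum(exps)
    return [e / z for e in exps]

# e.g., K = 3 parses scored by the bottom-up search
w = normalize_parse_weights([-59.6, -88.9, -159.0])
```

Subtracting the maximum before exponentiating avoids underflow, which matters here because real parse scores are large negative log probabilities.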
Figure 5: Parsing a raw image. a) The raw image (i) is processed by a thinning algorithm [18] (ii) and then analyzed as an undirected graph [20] (iii), where parses are guided random walks (Section SI-5). b) The five best parses found for that image (top row) are shown with their log w̃_i (Eq. 5), where numbers inside circles denote stroke order and starting position, and smaller open circles denote sub-stroke breaks. These five parses were re-fit to three different raw images of characters (left in image triplets), where the best parse (top right) and its associated image reconstruction (bottom right) are shown above its score (Eq. 9).
Given an approximate posterior for a particular image, the model can evaluate the posterior predictive score of a new image by re-fitting the token-level variables (bottom of Figure 5b), as explained in Section 3.1 on inference for one-shot classification.
3 Results

3.1 One-shot classification
People, HBPL, and several alternative models were evaluated on a set of 10 challenging one-shot classification tasks. The tasks tested within-alphabet classification on 10 alphabets, with examples in Figure 2 and details in Section SI-6. Each trial (of 400 total) consists of a single test image of a new character compared to 20 new characters from the same alphabet, given just one image each produced by a typical drawer of that alphabet. Figure 1b shows two example trials.
People. Forty participants in the USA were tested on one-shot classification using Mechanical Turk. On each trial, as in Figure 1b, participants were shown an image of a new character and asked to click on another image that shows the same character. To ensure classification was indeed "one shot," participants completed just one randomly selected trial from each of the 10 within-alphabet classification tasks, so that characters never repeated across trials. There was also an instructions quiz, two practice trials with the Latin and Greek alphabets, and feedback after every trial.
Hierarchical Bayesian Program Learning. For a test image I^(T) and 20 training images I^(c) for c = 1, ..., 20, we use a Bayesian classification rule for which we compute an approximate solution:

argmax_c log P(I^(T) | I^(c)).   (7)

Intuitively, the approximation uses the HBPL search algorithm to get K = 5 parses of I^(c), runs K MCMC chains to estimate the local type-level variability around each parse, and then runs K gradient-based searches to re-optimize the token-level variables θ^(T) (all are continuous) to fit the test image I^(T). The approximation can be written as (see Section SI-7 for a derivation)

log P(I^(T) | I^(c)) ≈ log ∫ P(I^(T) | θ^(T)) P(θ^(T) | ψ) Q(θ^(c), ψ, I^(c)) dψ dθ^(c) dθ^(T)   (8)

≈ log ∑_{i=1}^{K} w_i max_{θ^(T)} P(I^(T) | θ^(T)) (1/N) ∑_{j=1}^{N} P(θ^(T) | ψ^[ij]),   (9)

where Q(·, ·, ·) and w_i are from Eq. 6. Figure 5b shows examples of this classification score. While inference so far involves parses of I^(c) refit to I^(T), it also seems desirable to include parses of I^(T) refit to I^(c), namely P(I^(c) | I^(T)). We can re-write our classification rule (Eq. 7) to include just the reverse term (Eq. 10 center), and then to include both terms (Eq. 10 right), which is the rule we use:

argmax_c log P(I^(T) | I^(c)) = argmax_c log [P(I^(c) | I^(T)) / P(I^(c))] = argmax_c log [P(I^(c) | I^(T)) / P(I^(c))] P(I^(T) | I^(c)),   (10)
where P(I^(c)) ≈ ∑_i w̃_i from Eq. 5. These three rules are equivalent if inference is exact, but due to our approximation, the two-way rule performs better, as judged by pilot results.
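Given per-class log scores, the two-way rule of Eq. 10 combines the forward score, the reverse score, and the marginal correction. A hypothetical sketch, where all score values are made up for illustration:

```python
def two_way_score(log_fwd, log_rev, log_marg_c):
    """Two-way rule of Eq. 10:
    log P(I^(c)|I^(T)) - log P(I^(c)) + log P(I^(T)|I^(c)),
    where log P(I^(c)) ~ log sum_i w~_i (Eq. 5)."""
    return log_rev - log_marg_c + log_fwd

def classify(scores):
    """Pick the training class c maximizing the two-way score.
    scores: {class: (log_fwd, log_rev, log_marg_c)}."""
    return max(scores, key=lambda c: two_way_score(*scores[c]))

# hypothetical scores for three candidate classes
scores = {"a": (-60.0, -55.0, -40.0),
          "b": (-80.0, -70.0, -42.0),
          "c": (-65.0, -68.0, -41.0)}
best = classify(scores)
```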
Affine model. The full HBPL model is compared to a transformation-based approach that models the variance in image tokens as just global scales, translations, and blur, which relates to congealing models [23]. This HBPL model "without strokes" still benefits from good bottom-up image analysis (Figure 5) and a learned transformation model. The Affine model is identical to HBPL during search, but during classification, only the warp A^(m), blur σ_b^(m), and noise ε^(m) are re-optimized to a new image (i.e., the argument of "max" in Eq. 9 changes from θ^(T) to {A^(T), σ_b^(T), ε^(T)}).
Deep Boltzmann Machines (DBMs). A Deep Boltzmann Machine, with three hidden layers of 1000 hidden units each, was generatively pre-trained on an enhanced background set using the approximate learning algorithm from [25]. To evaluate classification performance, the approximate posterior distribution over the DBM's top-level features was first inferred for each image in the evaluation set, followed by 1-nearest-neighbor classification in this feature space using cosine similarity. To speed up learning of the DBM and HD models, the original images were down-sampled so that each image was represented by 28x28 pixels with grayscale values in [0, 1]. To further reduce overfitting and learn more about the 2D image topology, which is built into some deep models like convolutional networks [19], the set of background characters was artificially enhanced by generating slight image translations (+/- 3 pixels), rotations (+/- 5 degrees), and scales (0.9 to 1.1).
Hierarchical Deep Model (HD). A more elaborate Hierarchical Deep model is derived by composing hierarchical nonparametric Bayesian models with Deep Boltzmann Machines [26]. The HD
model learns a hierarchical Dirichlet process (HDP) prior over the activities of the top-level features in a Deep Boltzmann Machine, which allows one to represent both a layered hierarchy of
increasingly abstract features and a tree-structured hierarchy of super-classes for sharing abstract
knowledge among related classes. Given a new test image, the approximate posterior over class
assignments can be quickly inferred, as detailed in [26].
Simple Strokes (SS). A much simpler variant of HBPL that infers rigid "stroke-like" parts [16].

Nearest neighbor (NN). Raw images are directly compared using cosine similarity and 1-NN.
Results. Performance is summarized in Table 1. As predicted, people were skilled one-shot learners, with an average error rate of 4.5%. HBPL achieved a similar error rate of 4.8%, which was significantly better than the alternatives. The Affine model achieved an error rate of 18.2% with the classification rule in Eq. 10 left, while its error was 31.8% with Eq. 10 right. The deep learning models performed at 34.8% and 38% error, although performance was much worse without pre-training (68.3% and 72% error). The Simple Strokes and Nearest Neighbor models had the highest error rates.
Table 1: One-shot classifiers

Learner   Error rate
Humans    4.5%
HBPL      4.8%
Affine    18.2% (31.8%)
HD        34.8% (68.3%)
DBM       38% (72%)
SS        62.5%
NN        78.3%

3.2 One-shot generation of new examples
Not only can people classify new examples, they can generate new examples, even from just one image. While all generative classifiers can produce examples, it can be difficult to synthesize a range of compelling new examples in their raw form, especially since many models generate only features of raw stimuli (e.g., [5]). While DBMs [25] can generate realistic digits after training on thousands of examples, how well do these and other models perform from just a single training image?

We ran another Mechanical Turk task to produce nine new examples of 50 randomly selected handwritten character images from the evaluation set. Three of these images are shown in the leftmost column of Figure 6. After correctly answering comprehension questions, 18 participants in the USA were asked to "draw a new example" of 25 characters, resulting in nine examples per character. To simulate drawings from nine different people, each of the models generated nine samples after seeing exactly the same images people did, as described in Section SI-8 and shown in Figure 6. Low-level image differences were minimized by re-rendering stroke trajectories in the same way for the models and people. Since the HD model does not always produce well-articulated strokes, it was not quantitatively analyzed, although there are clear qualitative differences between these and the human-produced images (Figure 6).
Figure 6: Generating new examples from just a single "target" image (left). Each grid shows nine new examples synthesized by people and the three computational models (People, HBPL, Affine, HD).
Visual Turing test. To compare the examples generated by people and the models, we ran a visual Turing test using 50 new participants in the USA on Mechanical Turk. Participants were told that they would see a target image and two grids of 9 images (Figure 6), where one grid was drawn by people with their computer mice and the other grid was drawn by a computer program that "simulates how people draw a new character." Which grid is which? There were two conditions, where the "computer program" was either HBPL or the Affine model. Participants were quizzed on their comprehension and then saw 50 trials. Accuracy was revealed after each block of 10 trials, and a button to review the instructions was always accessible. Four participants who reported technical difficulties were not analyzed.
Results. Participants who tried to label drawings from people vs. HBPL were only 56% correct, while those who tried to label people vs. the Affine model were 92% correct. A two-way analysis of variance showed a significant effect of condition (p < .001), but no significant effect of block and no interaction. While both group means were significantly better than chance, a subject analysis revealed that only 2 of 21 participants were better than chance for people vs. HBPL, while 24 of 25 were significant for people vs. Affine. Likewise, 8 of 50 items were above chance for people vs. HBPL, while 48 of 50 items were above chance for people vs. Affine. Since participants could easily detect the overly consistent Affine model, it seems the difficulty participants had in detecting HBPL's exemplars was not due to task confusion. Interestingly, participants did not significantly improve over the trials, even after seeing hundreds of images from the model. Our results suggest that HBPL can generate compelling new examples that fool a majority of participants.
4 Discussion
Hierarchical Bayesian Program Learning (HBPL), by exploiting compositionality and causality, departs from standard models that need far more data to learn new concepts. From just one example, HBPL can both classify and generate compelling new examples, fooling judges in a "visual Turing test" that other approaches could not pass. Beyond the differences in model architecture, HBPL was also trained on the causal dynamics behind images, although only the images were available at evaluation time. If one were to incorporate this compositional and causal structure into a deep learning model, it could lead to better performance on our tasks. Thus, we do not see our model as the final word on how humans learn concepts, but rather as a suggestion for the type of structure that best captures how people learn rich concepts from very sparse data. Future directions will extend this approach to other natural forms of generalization with characters, as well as to speech, gesture, and other domains where compositionality and causality are central.
Acknowledgments
We would like to thank MIT CoCoSci for helpful feedback. This work was supported by ARO MURI
contract W911NF-08-1-0242 and a NSF Graduate Research Fellowship held by the first author.
References
[1] M. K. Babcock and J. Freyd. Perception of dynamic information in static handwritten forms. American Journal of Psychology, 101(1):111-130, 1988.
[2] I. Biederman. Recognition-by-components: a theory of human image understanding. Psychological Review, 94(2):115-147, 1987.
[3] S. Carey and E. Bartlett. Acquiring a single new word. Papers and Reports on Child Language Development, 15:17-29, 1978.
[4] J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei. ImageNet: A large-scale hierarchical image database. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2009.
[5] L. Fei-Fei, R. Fergus, and P. Perona. One-shot learning of object categories. IEEE Transactions on Pattern Analysis and Machine Intelligence, 28(4):594-611, 2006.
[6] J. Feldman. The structure of perceptual categories. Journal of Mathematical Psychology, 41:145-170, 1997.
[7] J. Freyd. Representing the dynamics of a static form. Memory and Cognition, 11(4):342-346, 1983.
[8] S. Geman, E. Bienenstock, and R. Doursat. Neural networks and the bias/variance dilemma. Neural Computation, 4:1-58, 1992.
[9] E. Gilet, J. Diard, and P. Bessière. Bayesian action-perception computational model: interaction of production and recognition of cursive letters. PLoS ONE, 6(6), 2011.
[10] G. E. Hinton and V. Nair. Inferring motor programs from images of handwritten digits. In Advances in Neural Information Processing Systems 19, 2006.
[11] K. H. James and I. Gauthier. Letter processing automatically recruits a sensory-motor brain network. Neuropsychologia, 44(14):2937-2949, 2006.
[12] K. H. James and I. Gauthier. When writing impairs reading: letter perception's susceptibility to motor interference. Journal of Experimental Psychology: General, 138(3):416-431, Aug. 2009.
[13] C. Kemp and A. Jern. Abstraction and relational learning. In Advances in Neural Information Processing Systems 22, 2009.
[14] A. Krizhevsky. Learning multiple layers of features from tiny images. PhD thesis, University of Toronto, 2009.
[15] A. Krizhevsky, I. Sutskever, and G. E. Hinton. ImageNet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems 25, 2012.
[16] B. M. Lake, R. Salakhutdinov, J. Gross, and J. B. Tenenbaum. One shot learning of simple visual concepts. In Proceedings of the 33rd Annual Conference of the Cognitive Science Society, 2011.
[17] B. M. Lake, R. Salakhutdinov, and J. B. Tenenbaum. Concept learning as motor program induction: A large-scale empirical study. In Proceedings of the 34th Annual Conference of the Cognitive Science Society, 2012.
[18] L. Lam, S.-W. Lee, and C. Y. Suen. Thinning methodologies - a comprehensive survey. IEEE Transactions on Pattern Analysis and Machine Intelligence, 14(9):869-885, 1992.
[19] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278-2323, 1998.
[20] K. Liu, Y. S. Huang, and C. Y. Suen. Identification of fork points on the skeletons of handwritten Chinese characters. IEEE Transactions on Pattern Analysis and Machine Intelligence, 21(10):1095-1100, 1999.
[21] M. Longcamp, J. L. Anton, M. Roth, and J. L. Velay. Visual presentation of single letters activates a premotor area involved in writing. Neuroimage, 19(4):1492-1500, 2003.
[22] E. M. Markman. Categorization and Naming in Children. MIT Press, Cambridge, MA, 1989.
[23] E. G. Miller, N. E. Matsakis, and P. A. Viola. Learning from one example through shared densities on transformations. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2000.
[24] M. Revow, C. K. I. Williams, and G. E. Hinton. Using generative models for handwritten digit recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence, 18(6):592-606, 1996.
[25] R. Salakhutdinov and G. E. Hinton. Deep Boltzmann machines. In 12th International Conference on Artificial Intelligence and Statistics (AISTATS), 2009.
[26] R. Salakhutdinov, J. B. Tenenbaum, and A. Torralba. Learning with hierarchical-deep models. IEEE Transactions on Pattern Analysis and Machine Intelligence, 35(8):1958-1971, 2013.
[27] P. H. Winston. Learning structural descriptions from examples. In P. H. Winston, editor, The Psychology of Computer Vision. McGraw-Hill, New York, 1975.
[28] F. Xu and J. B. Tenenbaum. Word learning as Bayesian inference. Psychological Review, 114(2):245-272, 2007.
robust:1 composing:1 bottou:1 complex:3 artificially:1 domain:3 did:2 aistats:1 terminated:1 big:1 noise:4 hyperparameters:4 repeated:1 child:2 x1:1 xu:1 causality:5 elaborate:1 cubic:1 sub:11 position:7 inferring:1 neuroimage:1 candidate:1 perceptual:2 answering:1 third:1 learns:2 down:2 departs:1 showing:1 r2:6 incorporating:1 socher:1 mnist:3 lifting:1 phd:1 generalizing:1 visual:12 acquiring:1 babcock:1 chance:4 ma:1 nair:1 succeed:1 conditional:3 presentation:1 shared:2 z21:2 revow:1 change:1 included:1 determined:1 except:1 uniformly:4 typical:1 called:3 total:1 pas:1 experimental:1 people:32 support:2 incorporate:1 evaluate:3 mcmc:1 tested:4 |
Stochastic Majorization-Minimization Algorithms
for Large-Scale Optimization
Julien Mairal
LEAR Project-Team - INRIA Grenoble
[email protected]
Abstract
Majorization-minimization algorithms consist of iteratively minimizing a majorizing surrogate of an objective function. Because of its simplicity and its wide
applicability, this principle has been very popular in statistics and in signal processing. In this paper, we intend to make this principle scalable. We introduce
a stochastic majorization-minimization scheme which is able to deal with largescale or possibly infinite data sets. When applied to convex optimization problems
under suitable assumptions, we show that it achieves an expected convergence
rate of O(1/√n) after n iterations, and of O(1/n) for strongly convex functions.
Equally important, our scheme almost surely converges to stationary points for
a large class of non-convex problems. We develop several efficient algorithms
based on our framework. First, we propose a new stochastic proximal gradient
method, which experimentally matches state-of-the-art solvers for large-scale ℓ₁-logistic regression. Second, we develop an online DC programming algorithm for
non-convex sparse estimation. Finally, we demonstrate the effectiveness of our
approach for solving large-scale structured matrix factorization problems.
1 Introduction
Majorization-minimization [15] is a simple optimization principle for minimizing an objective function. It consists of iteratively minimizing a surrogate that upper-bounds the objective, thus monotonically driving the objective function value downhill. This idea is used in many existing procedures.
For instance, the expectation-maximization (EM) algorithm (see [5, 21]) builds a surrogate for a
likelihood model by using Jensen's inequality. Other approaches can also be interpreted under the
majorization-minimization point of view, such as DC programming [8], where "DC" stands for difference of convex functions, variational Bayes techniques [28], or proximal algorithms [1, 23, 29].
In this paper, we propose a stochastic majorization-minimization algorithm, which is suitable for
solving large-scale problems arising in machine learning and signal processing. More precisely, we
address the minimization of an expected cost, that is, an objective function that can be represented
by an expectation over a data distribution. For such objectives, online techniques based on stochastic
approximations have proven to be particularly efficient, and have drawn a lot of attraction in machine
learning, statistics, and optimization [3-6, 9-12, 14, 16, 17, 19, 22, 24-26, 30].
Our scheme follows this line of research. It consists of iteratively building a surrogate of the expected
cost when only a single data point is observed at each iteration; this data point is used to update the
surrogate, which in turn is minimized to obtain a new estimate. Some previous works are closely
related to this scheme: the online EM algorithm for latent data models [5, 21] and the online matrix
factorization technique of [19] involve for instance surrogate functions updated in a similar fashion.
Compared to these two approaches, our method is targeted to more general optimization problems.
Another related work is the incremental majorization-minimization algorithm of [18] for finite training sets; it was indeed shown to be efficient for solving machine learning problems where storing
1
dense information about the past iterates can be afforded. Concretely, this incremental scheme requires storing O(pn) values, where p is the variable size, and n is the size of the training set.¹
This issue was the main motivation for us for proposing a stochastic scheme with a memory load
independent of n, thus allowing us to possibly deal with infinite data sets, or a huge variable size p.
We study the convergence properties of our algorithm when the surrogates are strongly convex and
chosen among the class of first-order surrogate functions introduced in [18], which consist of approximating the possibly non-smooth objective up to a smooth error. When the objective is convex,
we obtain expected convergence rates that are asymptotically optimal, or close to optimal [14, 22].
More precisely, the convergence rate is of order O(1/√n) in a finite horizon setting, and O(1/n) for
a strongly convex objective in an infinite horizon setting. Our second analysis shows that for nonconvex problems, our method almost surely converges to a set of stationary points under suitable
assumptions. We believe that this result is equally valuable as convergence rates for convex optimization. To the best of our knowledge, the literature on stochastic non-convex optimization is rather
scarce, and we are only aware of convergence results in more restricted settings than ours; see for
instance [3] for the stochastic gradient descent algorithm, [5] for online EM, [19] for online matrix
factorization, or [9], which provides stronger guarantees, but for unconstrained smooth problems.
We develop several efficient algorithms based on our framework. The first one is a new stochastic
proximal gradient method for composite or constrained optimization. This algorithm is related to a
long series of work in the convex optimization literature [6, 10, 12, 14, 16, 22, 25, 30], and we demonstrate that it performs as well as state-of-the-art solvers for large-scale ℓ₁-logistic regression [7]. The
second one is an online DC programming technique, which we demonstrate to be better than batch
alternatives for large-scale non-convex sparse estimation [8]. Finally, we show that our scheme can
address efficiently structured sparse matrix factorization problems in an online fashion, and offers
new possibilities to [13, 19] such as the use of various loss or regularization functions.
This paper is organized as follows: Section 2 introduces first-order surrogate functions for batch
optimization; Section 3 is devoted to our stochastic approach and its convergence analysis; Section 4
presents several applications and numerical experiments, and Section 5 concludes the paper.
2 Optimization with First-Order Surrogate Functions
Throughout the paper, we are interested in the minimization of a continuous function f : ℝᵖ → ℝ:

    min_{θ ∈ Θ} f(θ),    (1)

where Θ ⊆ ℝᵖ is a convex set. The majorization-minimization principle consists of computing a majorizing surrogate gₙ of f at iteration n and updating the current estimate by θₙ ∈ arg min_{θ∈Θ} gₙ(θ).
The success of such a scheme depends on how well the surrogates approximate f . In this paper, we
consider a particular class of surrogate functions introduced in [18] and defined as follows:
Definition 2.1 (Strongly Convex First-Order Surrogate Functions).
Let κ be in Θ. We denote by S_{L,ρ}(f, κ) the set of ρ-strongly convex functions g such that g ≥ f,
g(κ) = f(κ), the approximation error g − f is differentiable, and the gradient ∇(g − f) is L-Lipschitz continuous. We call the functions g in S_{L,ρ}(f, κ) "first-order surrogate functions".
Among the first-order surrogate functions presented in [18], we should mention the following ones:
• Lipschitz Gradient Surrogates.
When f is differentiable and ∇f is L-Lipschitz, f admits the following surrogate g in S_{2L,L}(f, κ):

    g : θ ↦ f(κ) + ∇f(κ)ᵀ(θ − κ) + (L/2)‖θ − κ‖₂².

When f is convex, g is in S_{L,L}(f, κ), and when f is μ-strongly convex, g is in S_{L−μ,L}(f, κ).
Minimizing g amounts to performing a classical gradient descent step θ ← κ − (1/L)∇f(κ).
• Proximal Gradient Surrogates.
Assume that f splits into f = f₁ + f₂, where f₁ is differentiable, ∇f₁ is L-Lipschitz, and f₂ is convex. Then, the function g below is in S_{2L,L}(f, κ):

    g : θ ↦ f₁(κ) + ∇f₁(κ)ᵀ(θ − κ) + (L/2)‖θ − κ‖₂² + f₂(θ).

When f₁ is convex, g is in S_{L,L}(f, κ). If f₁ is μ-strongly convex, g is in S_{L−μ,L}(f, κ). Minimizing g amounts to a proximal gradient step [1, 23, 29]: θ ← arg min_θ (1/2)‖θ − (κ − (1/L)∇f₁(κ))‖₂² + (1/L)f₂(θ).

¹ To alleviate this issue, it is possible to cut the dataset into η mini-batches, reducing the memory load to O(pη), which remains cumbersome when p is very large.
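To make the proximal gradient surrogate concrete, here is a small numerical sketch of ours (not from the paper): when f₂ is λ times the ℓ₁-norm, minimizing the surrogate reduces to a gradient step from the anchor point followed by soft-thresholding.

```python
import numpy as np

def soft_threshold(v, tau):
    # Proximal operator of tau * ||.||_1 (component-wise shrinkage).
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def prox_grad_step(kappa, grad_f1, L, lam):
    # Minimizer of the surrogate
    #   f1(kappa) + <grad_f1, theta - kappa> + (L/2)||theta - kappa||_2^2 + lam*||theta||_1,
    # i.e., a proximal gradient step from the anchor point kappa.
    return soft_threshold(kappa - grad_f1 / L, lam / L)

# Toy smooth part: f1(theta) = 0.5*||theta - b||^2, so grad f1(theta) = theta - b.
b = np.array([1.0, -0.2, 0.05])
kappa = np.zeros(3)
theta_new = prox_grad_step(kappa, kappa - b, L=1.0, lam=0.1)
# theta_new is b soft-thresholded at 0.1: [0.9, -0.1, 0.0]
```

The names `kappa`, `lam`, and the quadratic toy loss are ours, chosen only to mirror the notation above.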
• DC Programming Surrogates.
Assume that f = f₁ + f₂, where f₂ is concave and differentiable, ∇f₂ is L₂-Lipschitz, and g₁ is in S_{L₁,ρ₁}(f₁, κ). Then, the following function g is a surrogate in S_{L₁+L₂,ρ₁}(f, κ):

    g : θ ↦ g₁(θ) + f₂(κ) + ∇f₂(κ)ᵀ(θ − κ).

When f₁ is convex, f₁ + f₂ is a difference of convex functions, leading to a DC program [8].
With the definition of first-order surrogates and a basic "batch" algorithm in hand, we now introduce
our main contribution: a stochastic scheme for solving large-scale problems.
3 Stochastic Optimization
As pointed out in [4], one is usually not interested in the minimization of an empirical cost on a
finite training set, but instead in minimizing an expected cost. Thus, we assume from now on that f
has the form of an expectation:
    min_{θ ∈ Θ} f(θ) ≜ E_x[ℓ(x, θ)],    (2)
where x from some set X represents a data point, which is drawn according to some unknown
distribution, and ℓ is a continuous loss function. As often done in the literature [22], we assume that
the expectations are well defined and finite valued; we also assume that f is bounded below.
We present our approach for tackling (2) in Algorithm 1. At each iteration, we draw a training
point xn , assuming that these points are i.i.d. samples from the data distribution. Note that in
practice, since it is often difficult to obtain true i.i.d. samples, the points xn are computed by
cycling on a randomly permuted training set [4]. Then, we choose a surrogate gn for the function
? 7? ?(xn , ?), and we use it to update a function g?n that behaves as an approximate surrogate for the
expected cost f . The function g?n is in fact a weighted average of previously computed surrogates,
and involves a sequence of weights (wn )n?1 that will be discussed later. Then, we minimize g?n , and
obtain a new estimate ?n . For convex problems, we also propose to use averaging schemes, denoted
by ?option 2? and ?option 3? in Alg. 1. Averaging is a classical technique for improving convergence
rates in convex optimization [10, 22] for reasons that are clear in the convergence proofs.
Algorithm 1 Stochastic Majorization-Minimization Scheme
input: θ₀ ∈ Θ (initial estimate); N (number of iterations); (wₙ)ₙ≥₁, weights in (0, 1];
1: initialize the approximate surrogate: ḡ₀ : θ ↦ (ρ/2)‖θ − θ₀‖₂²; θ̄₀ = θ₀; θ̂₀ = θ₀;
2: for n = 1, . . . , N do
3:   draw a training point xₙ; define fₙ : θ ↦ ℓ(xₙ, θ);
4:   choose a surrogate function gₙ in S_{L,ρ}(fₙ, θₙ₋₁);
5:   update the approximate surrogate: ḡₙ = (1 − wₙ)ḡₙ₋₁ + wₙgₙ;
6:   update the current estimate: θₙ ∈ arg min_{θ∈Θ} ḡₙ(θ);
7:   for option 2, update the averaged iterate: θ̄ₙ ≜ (1 − wₙ₊₁)θ̄ₙ₋₁ + wₙ₊₁θₙ;
8:   for option 3, update the averaged iterate: θ̂ₙ ≜ ((1 − wₙ₊₁)θ̂ₙ₋₁ + wₙ₊₁θₙ) / (Σ_{k=1}^{n+1} wₖ);
9: end for
output (option 1): θ_N (current estimate, no averaging);
output (option 2): θ̄_N (first averaging scheme);
output (option 3): θ̂_N (second averaging scheme).
We remark that Algorithm 1 is only practical when the functions ḡₙ can be parameterized with a
small number of variables, and when they can be easily minimized over Θ. Concrete examples are
discussed in Section 4. Before that, we proceed with the convergence analysis.
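As an illustration, the following sketch (our own, with hypothetical names) instantiates Algorithm 1 with proximal gradient surrogates and no averaging (option 1): the averaged surrogate ḡₙ is then parameterized by a weighted average of past gradients and anchor points, so only O(p) memory is needed, and step 6 reduces to a single proximal step.

```python
import numpy as np

def stochastic_mm(data, theta0, loss_grad, prox, L, n_epochs=1, n0=1):
    """Algorithm 1 with proximal gradient surrogates (sketch, option 1).
    loss_grad(x, theta): gradient of the smooth loss at theta for sample x.
    prox(v, tau): proximal operator of tau * psi at v."""
    theta = theta0.copy()
    z_bar = np.zeros_like(theta)  # weighted average of gradients grad f_i(theta_{i-1})
    a_bar = theta0.copy()         # weighted average of anchor points theta_{i-1}
    n = 0
    for _ in range(n_epochs):
        for idx in np.random.permutation(len(data)):
            n += 1
            w = np.sqrt((n0 + 1.0) / (n0 + n))  # w_1 = 1, decaying like 1/sqrt(n)
            z_bar = (1 - w) * z_bar + w * loss_grad(data[idx], theta)
            a_bar = (1 - w) * a_bar + w * theta
            # minimizing the averaged surrogate is a single proximal step
            theta = prox(a_bar - z_bar / L, 1.0 / L)
    return theta

# Toy problem: loss 0.5*||theta - x||^2 with an l1 penalty of weight 0.1.
data = np.array([[1.0, -1.0], [1.2, -0.8], [0.8, -1.2]])
prox_l1 = lambda v, tau: np.sign(v) * np.maximum(np.abs(v) - 0.1 * tau, 0.0)
np.random.seed(0)
theta_hat = stochastic_mm(data, np.zeros(2), lambda x, t: t - x, prox_l1, L=1.0, n_epochs=50)
```

On this toy problem the iterates settle near the soft-thresholded mean of the data, as the fixed-point argument in Section 4.1 suggests.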
3.1 Convergence Analysis - Convex Case
First, we study the case of convex functions fₙ : θ ↦ ℓ(xₙ, θ), and make the following assumption:
(A) for all θ in Θ, the functions fₙ are R-Lipschitz continuous. Note that for convex functions,
this is equivalent to saying that subgradients of fn are uniformly bounded by R.
Assumption (A) is classical in the stochastic optimization literature [22]. Our first result shows that
with the averaging scheme corresponding to "option 2" in Alg. 1, we obtain an expected convergence
rate that makes explicit the role of the weight sequence (wₙ)ₙ≥₁.

Proposition 3.1 (Convergence Rate).
When the functions fₙ are convex, under assumption (A), and when ρ = L, we have, for all n ≥ 1,

    E[f(θ̄ₙ₋₁) − f*] ≤ (L‖θ* − θ₀‖₂² + (R²/L) Σ_{k=1}^n wₖ²) / (2 Σ_{k=1}^n wₖ),    (3)

where θ̄ₙ₋₁ is defined in Algorithm 1, θ* is a minimizer of f on Θ, and f* ≜ f(θ*).
Such a rate is similar to the one of stochastic gradient descent with averaging, see [22] for example.
Note that the constraint ρ = L here is compatible with the proximal gradient surrogate.
From Proposition 3.1, it is easy to obtain a O(1/√n) bound for a finite horizon, that is, when the
total number of iterations n is known in advance. When n is fixed, such a bound can indeed be
obtained by plugging constant weights wₖ = γ/√n for all k ≤ n in Eq. (3). Note that the upper
bound O(1/√n) cannot be improved in general without making further assumptions on the objective
function [22]. The next corollary shows that in an infinite horizon setting and with decreasing
weights, we lose a logarithmic factor compared to an optimal convergence rate [14, 22] of O(1/√n).
Corollary 3.1 (Convergence Rate - Infinite Horizon - Decreasing Weights).
Let us make the same assumptions as in Proposition 3.1 and choose the weights wₙ = γ/√n. Then, for all n ≥ 2,

    E[f(θ̄ₙ₋₁) − f*] ≤ L‖θ* − θ₀‖₂² / (2γ√n) + R²γ(1 + log(n)) / (2L√n).
Our analysis suggests using weights of the form O(1/√n). In practice, we have found that choosing
wₙ = √((n₀ + 1)/(n₀ + n)) performs well, where n₀ is tuned on a subsample of the training set.
3.2 Convergence Analysis - Strongly Convex Case
In this section, we introduce an additional assumption:
(B) the functions fₙ are μ-strongly convex.
We show that our method achieves a rate O(1/n), which is optimal up to a multiplicative constant
for strongly convex functions (see [14, 22]).
Proposition 3.2 (Convergence Rate).
Under assumptions (A) and (B), with ρ = L + μ, define β ≜ μ/ρ and wₙ ≜ (1 + β)/(1 + βn). Then, for all n ≥ 1,

    E[f(θ̂ₙ₋₁) − f*] + (ρ/2) E[‖θ* − θₙ‖₂²] ≤ max(2R²/μ, ρ‖θ* − θ₀‖₂²) / (βn + 1),

where θ̂ₙ is defined in Algorithm 1, when choosing the averaging scheme called "option 3".
The averaging scheme is slightly different than in the previous section and the weights decrease
at a different speed. Again, this rate applies to the proximal gradient surrogates, which satisfy the
constraint ρ = L + μ. In the next section, we analyze our scheme in a non-convex setting.
3.3 Convergence Analysis - Non-Convex Case
Convergence results for non-convex problems are by nature weak, and difficult to obtain for stochastic optimization [4, 9]. In such a context, proving convergence to a global (or local) minimum is out
of reach, and classical analyses study instead asymptotic stationary point conditions, which involve
directional derivatives (see [2, 18]). Concretely, we introduce the following assumptions:
(C) Θ and the support X of the data are compact;
(D) The functions fₙ are uniformly bounded by some constant M;
(E) The weights wₙ are non-increasing, w₁ = 1, Σ_{n≥1} wₙ = +∞, and Σ_{n≥1} wₙ²√n < +∞;
(F) The directional derivatives ∇fₙ(θ, θ′ − θ) and ∇f(θ, θ′ − θ) exist for all θ and θ′ in Θ.
Assumptions (C) and (D) combined with (A) are useful because they allow us to use some uniform
convergence results from the theory of empirical processes [27]. In a nutshell, these assumptions
ensure that the function class {x ↦ ℓ(x, θ) : θ ∈ Θ} is "simple enough", such that a uniform law
of large numbers applies. The assumption (E) is more technical: it resembles classical conditions
used for proving the convergence of stochastic gradient descent algorithms, usually stating that the
weights wₙ should be the summand of a diverging sum while the sum of the wₙ² should be finite; the
constraint Σ_{n≥1} wₙ²√n < +∞ is slightly stronger. Finally, (F) is a mild assumption, which is
useful to characterize the stationary points of the problem. A classical necessary first-order condition [2] for θ to be a local minimum of f is indeed to have ∇f(θ, θ′ − θ) non-negative for all θ′ in Θ.
We call such points θ the stationary points of the function f. The next proposition is a generalization
of a convergence result obtained in [19] in the context of sparse matrix factorization.
Proposition 3.3 (Non-Convex Analysis - Almost Sure Convergence).
Under assumptions (A), (C), (D), (E), (f(θₙ))ₙ≥₀ converges with probability one. Under assumption (F), we also have that

    lim inf_{n→+∞}  inf_{θ∈Θ}  ∇f̄ₙ(θₙ, θ − θₙ) / ‖θ − θₙ‖₂  ≥ 0,

where the function f̄ₙ is a weighted empirical risk recursively defined as f̄ₙ = (1 − wₙ)f̄ₙ₋₁ + wₙfₙ.
It can be shown that f̄ₙ uniformly converges to f.
Even though f̄ₙ converges uniformly to the expected cost f, Proposition 3.3 does not imply that the
limit points of (θₙ)ₙ≥₁ are stationary points of f. We obtain such a guarantee when the surrogates
are parameterized, an assumption always satisfied when Algorithm 1 is used in practice.
Proposition 3.4 (Non-Convex Analysis - Parameterized Surrogates).
Let us make the same assumptions as in Proposition 3.3, and let us assume that the functions ḡₙ are
parameterized by some variables κₙ living in a compact set K of ℝᵈ. In other words, ḡₙ can be
written as ḡ_{κₙ}, with κₙ in K. Suppose there exists a constant K > 0 such that |ḡ_κ(θ) − ḡ_{κ′}(θ)| ≤
K‖κ − κ′‖₂ for all θ in Θ and κ, κ′ in K. Then, every limit point θ∞ of the sequence (θₙ)ₙ≥₁ is a
stationary point of f, that is, for all θ in Θ,

    ∇f(θ∞, θ − θ∞) ≥ 0.
Finally, we show that our non-convex convergence analysis can be extended beyond first-order surrogate functions, that is, when gₙ does not satisfy exactly Definition 2.1. This is possible when
the objective has a particular partially separable structure, as shown in the next proposition. This
extension was motivated by the non-convex sparse estimation formulation of Section 4, where such
a structure appears.
Proposition 3.5 (Non-Convex Analysis - Partially Separable Extension).
Assume that the functions fₙ split into fₙ(θ) = f₀,ₙ(θ) + Σ_{k=1}^K fₖ,ₙ(γₖ(θ)), where the functions
γₖ : ℝᵖ → ℝ are convex and R-Lipschitz, and the fₖ,ₙ are non-decreasing for k ≥ 1. Consider g₀,ₙ
in S_{L₀,ρ}(f₀,ₙ, θₙ₋₁), and some non-decreasing functions gₖ,ₙ in S_{Lₖ,0}(fₖ,ₙ, γₖ(θₙ₋₁)). Instead
of choosing gₙ in S_{L,ρ}(fₙ, θₙ₋₁) in Alg. 1, replace it by gₙ ≜ θ ↦ g₀,ₙ(θ) + Σ_{k=1}^K gₖ,ₙ(γₖ(θ)).
Then, Propositions 3.3 and 3.4 still hold.
4 Applications and Experimental Validation
In this section, we introduce different applications, and provide numerical experiments. A
C++/Matlab implementation is available in the software package SPAMS [19].2 All experiments
were performed on a single core of a 2GHz Intel CPU with 64GB of RAM.
² http://spams-devel.gforge.inria.fr/.
4.1 Stochastic Proximal Gradient Descent Algorithm
Our first application is a stochastic proximal gradient descent method, which we call SMM (Stochastic Majorization-Minimization), for solving problems of the form:

    min_{θ ∈ Θ} E_x[ℓ(x, θ)] + ψ(θ),    (4)

where ψ is a convex deterministic regularization function, and the functions θ ↦ ℓ(x, θ) are differentiable and their gradients are L-Lipschitz continuous. We can thus use the proximal gradient
surrogate presented in Section 2. Assume that a weight sequence (wₙ)ₙ≥₁ is chosen such that
w₁ = 1. By defining some other weights wₙⁱ recursively as wₙⁱ ≜ (1 − wₙ)wₙ₋₁ⁱ for i < n and
wₙⁿ ≜ wₙ, our scheme yields the update rule:

    θₙ ∈ arg min_{θ∈Θ} Σ_{i=1}^n wₙⁱ ( ∇fᵢ(θᵢ₋₁)ᵀθ + (L/2)‖θ − θᵢ₋₁‖₂² + ψ(θ) ).    (SMM)
Our algorithm is related to FOBOS [6], to SMIDAS [25] or the truncated gradient method [16]
(when ψ is the ℓ₁-norm). These three algorithms use indeed the following update rule:

    θₙ ∈ arg min_{θ∈Θ} ∇fₙ(θₙ₋₁)ᵀθ + (1/(2ηₙ))‖θ − θₙ₋₁‖₂² + ψ(θ).    (FOBOS)

Another related scheme is the regularized dual averaging (RDA) of [30], which can be written as

    θₙ ∈ arg min_{θ∈Θ} (1/n) Σ_{i=1}^n ∇fᵢ(θᵢ₋₁)ᵀθ + (1/(2ηₙ))‖θ‖₂² + ψ(θ).    (RDA)
Compared to these approaches, our scheme includes a weighted average of previously seen gradients, and a weighted average of the past iterates. Some links can also be drawn with approaches
such as the "approximate follow the leader" algorithm of [10] and other works [12, 14].
We now evaluate the performance of our method for ℓ₁-logistic regression. In summary, the datasets
consist of pairs (yᵢ, xᵢ)ᵢ₌₁ᴺ, where the yᵢ's are in {−1, +1}, and the xᵢ's are in ℝᵖ with unit ℓ₂-norm. The function ψ in (4) is the ℓ₁-norm: ψ(θ) ≜ λ‖θ‖₁, and λ is a regularization parameter;
the functions fᵢ are logistic losses: fᵢ(θ) ≜ log(1 + e^{−yᵢ xᵢᵀθ}). One part of each dataset is devoted
to training, and another part to testing. We used weights of the form wₙ ≜ √((n₀ + 1)/(n + n₀)),
where n₀ is automatically adjusted at the beginning of each experiment by performing one pass on
5% of the training data. We implemented SMM in C++ and exploited the sparseness of the datasets,
such that each update has a computational complexity of the order O(s), where s is the number of
non zeros in ∇fₙ(θₙ₋₁); such an implementation is non trivial but proved to be very efficient.
We consider three datasets described in the table below. rcv1 and webspam are obtained from the
2008 Pascal large-scale learning challenge.3 kdd2010 is available from the LIBSVM website.4
name       Ntr (train)   Nte (test)   p            density (%)   size (GB)
rcv1       781 265       23 149       47 152       0.161         0.95
webspam    250 000       100 000      16 091 143   0.023         14.95
kdd2010    10 000 000    9 264 097    28 875 157   10⁻⁴          4.8
We compare our implementation with state-of-the-art publicly available solvers: the batch algorithm
FISTA of [1] implemented in the C++ SPAMS toolbox and LIBLINEAR v1.93 [7]. LIBLINEAR
is based on a working-set algorithm, and, to the best of our knowledge, is one of the most efficient
available solver for ℓ₁-logistic regression with sparse datasets. Because p is large, the incremental
majorization-minimization method of [18] could not run for memory reasons. We run every method
on 1, 2, 3, 4, 5, 10 and 25 epochs (passes over the training set), for three regularization regimes,
respectively yielding a solution with approximately 100, 1 000 and 10 000 non-zero coefficients.
We report results for the medium regularization in Figure 1 and provide the rest as supplemental
material. FISTA is not represented in this figure since it required more than 25 epochs to achieve
reasonable values. Our conclusion is that SMM often provides a reasonable solution after one epoch,
and outperforms LIBLINEAR in the low-precision regime. For high-precision regimes, LIBLINEAR
should be preferred. Such a conclusion is often obtained when comparing batch and stochastic
algorithms [4], but matching the performance of LIBLINEAR is very challenging.
³ http://largescale.ml.tu-berlin.de.
⁴ http://www.csie.ntu.edu.tw/~cjlin/libsvm/.
[Figure 1 panels: objective on the training set vs. epochs, objective on the testing set vs. epochs, and objective on the training set vs. computation time (sec), for the datasets rcv1, webspam, and kddb, each comparing LIBLINEAR and SMM.]
Figure 1: Comparison between LIBLINEAR and SMM for the medium regularization regime.
4.2 Online DC Programming for Non-Convex Sparse Estimation
We now consider the same experimental setting as in the previous section, but with a non-convex
regularizer ψ : θ ↦ λ Σ_{j=1}^p log(|θ[j]| + ε), where θ[j] is the j-th entry in θ. A classical way for
minimizing the regularized empirical cost (1/N) Σ_{i=1}^N fᵢ(θ) + ψ(θ) is to resort to DC programming. It
consists of solving a sequence of reweighted-ℓ₁ problems [8]. A current estimate θₙ₋₁ is updated
as a solution of min_{θ∈Θ} (1/N) Σ_{i=1}^N fᵢ(θ) + λ Σ_{j=1}^p ηⱼ|θ[j]|, where ηⱼ ≜ 1/(|θₙ₋₁[j]| + ε).
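A compact sketch of this batch reweighted-ℓ₁ scheme (ours; for illustration we use a squared loss and a plain ISTA inner solver, whereas the paper's experiments use logistic losses):

```python
import numpy as np

def soft(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def weighted_lasso_ista(X, y, eta, n_iter=500):
    # Minimize (1/(2N)) * ||X @ theta - y||^2 + sum_j eta[j]*|theta[j]| by ISTA.
    N, p = X.shape
    L = np.linalg.norm(X, 2) ** 2 / N  # Lipschitz constant of the smooth part
    theta = np.zeros(p)
    for _ in range(n_iter):
        grad = X.T @ (X @ theta - y) / N
        theta = soft(theta - grad / L, eta / L)
    return theta

def batch_dc(X, y, lam, eps, n_rounds=5):
    # Reweighted-l1 (DC programming) for the penalty lam * sum_j log(|theta[j]| + eps):
    # each round linearizes the concave penalty at the current estimate.
    theta = np.zeros(X.shape[1])
    for _ in range(n_rounds):
        eta = lam / (np.abs(theta) + eps)
        theta = weighted_lasso_ista(X, y, eta)
    return theta

theta_dc = batch_dc(np.eye(4), np.array([1.0, 0.5, 0.0, 0.0]), lam=0.05, eps=0.5)
```

Note how the reweighting progressively relaxes the penalty on large coefficients while keeping small ones thresholded, the usual behavior of the log penalty.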
In contrast to this "batch" methodology, we can use our framework to address the problem online.
At iteration n of Algorithm 1, we define the function gₙ according to Proposition 3.5:

    gₙ : θ ↦ fₙ(θₙ₋₁) + ∇fₙ(θₙ₋₁)ᵀ(θ − θₙ₋₁) + (L/2)‖θ − θₙ₋₁‖₂² + λ Σ_{j=1}^p |θ[j]| / (|θₙ₋₁[j]| + ε).
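In the same spirit, here is a sketch of ours of the resulting online update: besides the averaged gradients and anchor points, the averaged surrogate now also maintains per-coordinate ℓ₁ weights coming from the linearized log penalty.

```python
import numpy as np

def online_dc(data, theta0, loss_grad, L, lam, eps, n0=1):
    # Online DC sketch: the averaged surrogate keeps weighted averages of
    # gradients, anchor points, and per-coordinate l1 weights 1/(|theta[j]|+eps).
    theta = theta0.copy()
    z_bar = np.zeros_like(theta)
    a_bar = theta0.copy()
    eta_bar = 1.0 / (np.abs(theta0) + eps)
    for n, x in enumerate(data, start=1):
        w = np.sqrt((n0 + 1.0) / (n0 + n))
        z_bar = (1 - w) * z_bar + w * loss_grad(x, theta)
        a_bar = (1 - w) * a_bar + w * theta
        eta_bar = (1 - w) * eta_bar + w / (np.abs(theta) + eps)
        v = a_bar - z_bar / L
        theta = np.sign(v) * np.maximum(np.abs(v) - lam * eta_bar / L, 0.0)
    return theta

# Toy run: quadratic losses centered at a fixed point; lam, eps are the
# log-penalty parameters (values here are ours, chosen for illustration).
data = np.tile(np.array([1.0, -1.0, 0.0]), (200, 1))
theta_dc = online_dc(data, np.zeros(3), lambda x, t: t - x, L=1.0, lam=0.05, eps=0.1)
```

On this deterministic toy problem the iterates converge to the fixed point θ = 1 − λ/(θ + ε) on the nonzero coordinates, and the zero coordinate stays exactly zero.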
[Figure 2 panels: objective on the train and test sets vs. iterations (epochs), for the datasets rcv1 and webspam, each comparing Online DC and Batch DC.]
We compare our online DC programming algorithm against the batch one, and report the results in
Figure 2, with ε set to 0.01. We conclude that the batch reweighted-ℓ₁ algorithm always converges
after 2 or 3 weight updates, but suffers from local minima issues. The stochastic algorithm exhibits
a slower convergence, but provides significantly better solutions. Whether or not there are good
theoretical reasons for this fact remains to be investigated. Note that it would have been more
rigorous to choose a bounded set Θ, which is required by Proposition 3.5. In practice, it turns out not to
be necessary for our method to work well; the iterates θₙ have indeed remained in a bounded set.
-4.38
-4.385
-4.54
0
5
10
15
20
Iterations - Epochs / Dataset webspam
25
0
5
10
15
20
25
Iterations - Epochs / Dataset webspam
Figure 2: Comparison between batch and online DC programming, with medium regularization for
the datasets rcv1 and webspam. Additional plots are provided in the supplemental material. Note
that each iteration in the batch setting can perform several epochs (passes over training data).
4.3 Online Structured Sparse Coding
In this section, we show that we can bring new functionalities to existing matrix factorization techniques [13, 19]. We are given a large collection of signals (xᵢ)ᵢ₌₁ᴺ in ℝᵐ, and we want to find a
dictionary D in ℝᵐˣᴷ that can represent these signals in a sparse way. The quality of D is measured through the loss ℓ(x, D) ≜ min_{α∈ℝᴷ} (1/2)‖x − Dα‖₂² + λ₁‖α‖₁ + (λ₂/2)‖α‖₂², where the ℓ₁-norm
can be replaced by any convex regularizer, and the squared loss by any convex smooth loss.
Then, we are interested in minimizing the following expected cost:
    min_{D ∈ ℝᵐˣᴷ} E_x[ℓ(x, D)] + φ(D),

where φ is a regularizer for D. In the online learning approach of [19], the only way to regularize D
is to use a constraint set, on which we need to be able to project efficiently; this is unfortunately not
always possible. In the matrix factorization framework of [13], it is argued that some applications
can benefit from a structured penalty φ, but the approach of [13] is not easily amenable to stochastic
optimization. Our approach makes it possible by using the proximal gradient surrogate
    gₙ : D ↦ ℓ(xₙ, Dₙ₋₁) + Tr(∇_D ℓ(xₙ, Dₙ₋₁)ᵀ(D − Dₙ₋₁)) + (L/2)‖D − Dₙ₋₁‖_F² + φ(D).    (5)

It is indeed possible to show that D ↦ ℓ(xₙ, D) is differentiable, and its gradient is Lipschitz
continuous with a constant L that can be explicitly computed [18, 19].
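The gradient computation can be sketched as follows (our own code, with an ISTA inner solver assumed for the coefficients): solve the elastic-net problem for α*, and by a Danskin-type argument the gradient of D ↦ ℓ(x, D) is the outer product of the residual Dα* − x with α*.

```python
import numpy as np

def elastic_net_coeffs(x, D, lam1, lam2, n_iter=300):
    # Solve min_a 0.5*||x - D a||^2 + lam1*||a||_1 + 0.5*lam2*||a||^2 by ISTA.
    L = np.linalg.norm(D, 2) ** 2 + lam2
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (D @ a - x) + lam2 * a
        v = a - grad / L
        a = np.sign(v) * np.maximum(np.abs(v) - lam1 / L, 0.0)
    return a

def loss_and_grad_D(x, D, lam1, lam2):
    a = elastic_net_coeffs(x, D, lam1, lam2)
    r = D @ a - x  # residual at the optimal coefficients
    loss = 0.5 * r @ r + lam1 * np.abs(a).sum() + 0.5 * lam2 * a @ a
    return loss, np.outer(r, a)  # gradient of l(x, .) at D

np.random.seed(1)
D = np.random.randn(6, 4)
D /= np.linalg.norm(D, axis=0)
x = np.random.randn(6)
loss, g = loss_and_grad_D(x, D, lam1=0.1, lam2=0.01)
```

A finite-difference check on individual entries of D confirms that the simple outer-product formula matches the true derivative of the loss.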
We now design a proof-of-concept experiment. We consider a set of N = 400 000 whitened natural image patches $x_n$ of size $m = 20 \times 20$ pixels. We visualize some elements from a dictionary $D$ trained by SPAMS [19] on the left of Figure 3; the dictionary elements are almost sparse, but have some residual noise among the small coefficients. Following [13], we propose to use a regularization function $\varphi$ encouraging neighbor pixels to be set to zero together, thus leading to a sparse structured dictionary. We consider the collection $\mathcal{G}$ of all groups of variables corresponding to squares of 4 neighbor pixels in $\{1, \ldots, m\}$. Then, we define
$$\varphi(D) \triangleq \lambda_1 \sum_{j=1}^{K} \sum_{g \in \mathcal{G}} \max_{k \in g} |d_j[k]| + \lambda_2 \|D\|_F^2,$$
where $d_j$ is the $j$-th column of $D$. The penalty $\varphi$ is a structured sparsity-inducing penalty that encourages groups of variables $g$ to be set to zero together [13]. Its proximal operator can be computed efficiently [20], and it is thus easy to use the surrogates (5). We set $\lambda_1 = 0.15$ and $\lambda_2 = 0.01$; after trying a few values for $\lambda_1$ and $\lambda_2$ at a reasonable computational cost, we obtain dictionaries with the desired regularization effect, as shown in Figure 3. Learning one dictionary of size K = 256 took a few minutes when performing one pass on the training data with mini-batches of size 100. This experiment demonstrates that our approach is more flexible and general than [13] and [19]. Note that it is possible to show that when $\lambda_2$ is large enough, the iterates $D_n$ necessarily remain in a bounded set, and thus our convergence analysis presented in Section 3.3 applies to this experiment.
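To make the group structure above concrete, the sketch below (our illustration, not the paper's code) evaluates $\varphi(D) = \lambda_1 \sum_j \sum_{g \in \mathcal{G}} \max_{k \in g} |d_j[k]| + \lambda_2\|D\|_F^2$, with $\mathcal{G}$ the set of all 2x2 squares of neighboring pixels of an h-by-w patch; the helper names are ours.

```python
import numpy as np

def square_groups(h, w):
    """All 2x2 squares of neighboring pixels, as flat indices into an h*w patch."""
    return [[i * w + j, i * w + j + 1, (i + 1) * w + j, (i + 1) * w + j + 1]
            for i in range(h - 1) for j in range(w - 1)]

def structured_penalty(D, h, w, lam1, lam2):
    """phi(D) = lam1 * sum_j sum_{g in G} max_{k in g} |d_j[k]| + lam2 * ||D||_F^2."""
    groups = square_groups(h, w)
    group_term = sum(np.abs(D[g, j]).max()
                     for j in range(D.shape[1]) for g in groups)
    return lam1 * group_term + lam2 * np.sum(D ** 2)

# A 2x2 patch has a single group; one nonzero pixel still pays the full group max,
# which is why zeroing out whole squares of pixels is cheaper than scattering values.
D = np.zeros((4, 1)); D[0, 0] = 2.0
print(structured_penalty(D, 2, 2, lam1=0.15, lam2=0.01))  # ~ 0.34
```

This only computes the penalty value; the proximal operator needed for the surrogate update is the harder part and is the object of the network-flow algorithms in [20].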
Figure 3: Left: Two visualizations of 25 elements from a larger dictionary obtained by the toolbox
SPAMS [19]; the second view amplifies the small coefficients. Right: the corresponding views of
the dictionary elements obtained by our approach after initialization with the dictionary on the left.
5 Conclusion
In this paper, we have introduced a stochastic majorization-minimization algorithm that gracefully scales to millions of training samples. We have shown that it has strong theoretical properties and some practical value in the context of machine learning. We have derived from our framework several new algorithms, which have been shown to match or outperform the state of the art for solving large-scale convex problems, and to open up new possibilities for non-convex ones. In the future, we would like to study surrogate functions that can exploit the curvature of the objective function, which we believe is a crucial issue for dealing with badly conditioned datasets.
Acknowledgments
This work was supported by the Gargantua project (program Mastodons - CNRS).
References
[1] A. Beck and M. Teboulle. A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM J. Imaging Sci., 2(1):183-202, 2009.
[2] J.M. Borwein and A.S. Lewis. Convex analysis and nonlinear optimization. Springer, 2006.
[3] L. Bottou. Online algorithms and stochastic approximations. In David Saad, editor, Online Learning and Neural Networks. 1998.
[4] L. Bottou and O. Bousquet. The trade-offs of large scale learning. In Adv. NIPS, 2008.
[5] O. Cappe and E. Moulines. On-line expectation-maximization algorithm for latent data models. J. Roy. Stat. Soc. B, 71(3):593-613, 2009.
[6] J. Duchi and Y. Singer. Efficient online and batch learning using forward backward splitting. J. Mach. Learn. Res., 10:2899-2934, 2009.
[7] R.-E. Fan, K.-W. Chang, C.-J. Hsieh, X.-R. Wang, and C.-J. Lin. LIBLINEAR: A library for large linear classification. J. Mach. Learn. Res., 9:1871-1874, 2008.
[8] G. Gasso, A. Rakotomamonjy, and S. Canu. Recovering sparse signals with non-convex penalties and DC programming. IEEE T. Signal Process., 57(12):4686-4698, 2009.
[9] S. Ghadimi and G. Lan. Stochastic first- and zeroth-order methods for nonconvex stochastic programming. Technical report, 2013.
[10] E. Hazan, A. Agarwal, and S. Kale. Logarithmic regret algorithms for online convex optimization. Mach. Learn., 69(2-3):169-192, 2007.
[11] E. Hazan and S. Kale. Beyond the regret minimization barrier: an optimal algorithm for stochastic strongly-convex optimization. In Proc. COLT, 2011.
[12] C. Hu, J. Kwok, and W. Pan. Accelerated gradient methods for stochastic optimization and online learning. In Adv. NIPS, 2009.
[13] R. Jenatton, G. Obozinski, and F. Bach. Structured sparse principal component analysis. In Proc. AISTATS, 2010.
[14] G. Lan. An optimal method for stochastic composite optimization. Math. Program., 133:365-397, 2012.
[15] K. Lange, D.R. Hunter, and I. Yang. Optimization transfer using surrogate objective functions. J. Comput. Graph. Stat., 9(1):1-20, 2000.
[16] J. Langford, L. Li, and T. Zhang. Sparse online learning via truncated gradient. J. Mach. Learn. Res., 10:777-801, 2009.
[17] N. Le Roux, M. Schmidt, and F. Bach. A stochastic gradient method with an exponential convergence rate for finite training sets. In Adv. NIPS, 2012.
[18] J. Mairal. Optimization with first-order surrogate functions. In Proc. ICML, 2013. arXiv:1305.3120.
[19] J. Mairal, F. Bach, J. Ponce, and G. Sapiro. Online learning for matrix factorization and sparse coding. J. Mach. Learn. Res., 11:19-60, 2010.
[20] J. Mairal, R. Jenatton, G. Obozinski, and F. Bach. Network flow algorithms for structured sparsity. In Adv. NIPS, 2010.
[21] R.M. Neal and G.E. Hinton. A view of the EM algorithm that justifies incremental, sparse, and other variants. Learning in graphical models, 89, 1998.
[22] A. Nemirovski, A. Juditsky, G. Lan, and A. Shapiro. Robust stochastic approximation approach to stochastic programming. SIAM J. Optimiz., 19(4):1574-1609, 2009.
[23] Y. Nesterov. Gradient methods for minimizing composite objective functions. Technical report, CORE Discussion Paper, 2007.
[24] S. Shalev-Shwartz and T. Zhang. Proximal stochastic dual coordinate ascent. arXiv:1211.2717v1, 2012.
[25] S. Shalev-Shwartz, O. Shamir, N. Srebro, and K. Sridharan. Stochastic convex optimization. In Proc. COLT, 2009.
[26] S. Shalev-Shwartz and A. Tewari. Stochastic methods for l1-regularized loss minimization. In Proc. ICML, 2009.
[27] A. W. Van der Vaart. Asymptotic Statistics. Cambridge University Press, 1998.
[28] M.J. Wainwright and M.I. Jordan. Graphical models, exponential families, and variational inference. Found. Trends Mach. Learn., 1(1-2):1-305, 2008.
[29] S. Wright, R. Nowak, and M. Figueiredo. Sparse reconstruction by separable approximation. IEEE T. Signal Process., 57(7):2479-2493, 2009.
[30] L. Xiao. Dual averaging methods for regularized stochastic learning and online optimization. J. Mach. Learn. Res., 11:2543-2596, 2010.
Hierarchical Transformation of Space in the
Visual System
Alexandre Pouget
Stephen A. Fisher
Terrence J. Sejnowski
Computational Neurobiology Laboratory
The Salk Institute
La Jolla, CA 92037
Abstract
Neurons encoding simple visual features in V1 such as orientation, direction of motion and color are organized in retinotopic maps. However, recent physiological experiments have shown that the responses of many neurons in V1 and other cortical areas are modulated by the direction of gaze. We have developed a neural network model of the visual cortex to explore the hypothesis that visual features are encoded in head-centered coordinates at early stages of visual processing. New experiments are suggested for testing this hypothesis using electrical stimulations and psychophysical observations.
1 Introduction
Early visual processing in cortical areas V1, V2 and MT appears to encode visual
features in eye-centered coordinates. This is based primarily on anatomical data
and recordings from neurons in these areas, which are arranged in retinotopic maps.
In addition, when neurons in the visual cortex are electrically stimulated [9], the
direction of the evoked eye movement depends only on the retinotopic position of
the stimulation site, as shown in figure 1. Thus, when a position corresponding
to the left part of the visual field is stimulated, the eyes move toward the left (left
figure), and eye movements in the opposite direction are induced if neurons on the
right side are stimulated (right figure).
[Figure: two panels plotting saccade end points ("End Point of the Saccade") for stimulation sites in the left and right visual field]
Figure 1: Eye Movements Evoked by Electrical Stimulations in V1
A variety of psychophysical experiments provide further evidence that simple visual features are organized according to retinal coordinates rather than spatiotopic
coordinates [10, 5].
At later stages of visual processing the receptive fields of neurons become very large
and in the posterior parietal cortex, containing areas believed to be important for
sensory-motor coordination (LIP, VIP and 7a), the visual responses of neurons are
modulated by both eye and head position [1, 2]. A previous model of the parietal
cortex showed that the gain fields of the neurons observed there are consistent with
a distributed spatial transformation from retinal to head-centered coordinates [14].
Recently, several investigators have found that static eye position also modulates
the visual response of many neurons at early stages of visual processing, including
the LGN, V1 and V3a [3, 6, 13, 12]. Furthermore, the modulation appears to be
qualitatively similar to that previously reported in the parietal cortex and could
contribute to those responses. These new findings suggest that coordinate transformations from retinal to spatial representations could be initiated much earlier than
previously thought.
We have used network optimization techniques to study the spatial transformations
in a feedforward hierarchy of cortical maps. The goals of the model were 1) to
determine whether the modulation of neural responses with eye position as observed
in V1 or V3a is sufficient to provide a head-centered coordinate frame, 2) to help
interpret data based on the electrical stimulation of early visual areas, and 3) to
provide a framework for designing experiments and testing predictions.
2 Methods
2.1 Network Task
The task of the network was to compute the head-centered coordinates of objects.
If E is the eye position vector and R is the vector for the retinal position of the object, then the head-centered position P is given by:
P = R + E.  (1)

[Figure: network diagram with retina and eye-position inputs feeding two hidden layers and a head-centered position output]
Figure 2: Network Architecture
A two layer network with linear units can solve this problem. However, the goal of
our study was not to find the optimal architecture for this task, but to explore the
types of intermediate representation developed in a multilayer network of non-linear
units and to compare these results with physiological recordings.
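Equation (1) is linear, so the first claim is easy to verify numerically: fitting a single linear map from the concatenated inputs (R, E) to P by least squares recovers two stacked identity blocks, i.e. P = I R + I E. This is an illustrative sketch in plain NumPy, not the SN2 simulator used in the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
R = rng.uniform(-1.0, 1.0, size=(500, 2))   # retinal positions (x, y)
E = rng.uniform(-1.0, 1.0, size=(500, 2))   # eye positions (x, y)
X = np.hstack([R, E])                       # input to a linear "network"
P = R + E                                   # head-centered target, equation (1)

# Solve P = X @ W in the least-squares sense; the two-layer linear network of
# the text corresponds to such a single linear map.
W, *_ = np.linalg.lstsq(X, P, rcond=None)
assert np.allclose(W, np.vstack([np.eye(2), np.eye(2)]), atol=1e-8)
```

The interest of the multilayer non-linear network is therefore not accuracy on this task but the intermediate representations it develops.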
2.2 Network Architecture
We trained a partially-connected multilayer network to compute the head-centered
position of objects from retinal and eye position signals available at the input layer.
Weights were shared within each hidden layer [7] and adjusted with the backpropagation algorithm [11]. All simulations were performed with the SN2 simulator
developed by Bottou and LeCun.
In the hierarchical architecture illustrated in figure 2, the sizes of the receptive
fields were restricted in each layer and several hidden units were dedicated to each
location, typically 3 to 5 units, depending on the layer. Although weights were
shared between locations within a layer, each type of hidden unit was allowed to
develop its own receptive field properties. This architecture preserves two essential
aspects of the visual cortex: 1) restricted receptive fields organized in retinotopic
maps and 2) the sizes of the receptive fields increase with distance from the retina.
Training examples consisted of an eye position vector and a gaussian pattern of activity placed at a particular location on the input layer, and these were systematically varied throughout the training. For some trials there were no visual inputs and the output layer was trained to reproduce the eye position.

[Figure: spatial gain fields, drawn as 3x3 arrays of circles, for hidden-unit types in layers 2 and 3 of the model and for recorded neurons in visual cortex areas 7A and V3a]
Figure 3: Spatial Gain Fields: Comparison Between Hidden Units and Cortical Neurons (background activity not shown for V3a neurons)
2.3 Electrical Stimulation Experiments
Determining the head-centered position of an object is equivalent to computing
the position of the eye required to foveate the object (i.e. for a foveated object
R = 0, which, according to equation 1, implies that P = E). Thus, the output
of our network can be interpreted as the eye position for an intended saccadic eye
movement to acquire the object.
For the electrical stimulation experiments we followed the protocol suggested by
Goodman and Andersen [4] in an earlier study of the Zipser-Andersen model of
parietal cortex [14]. The cortical model was stimulated by clamping the activity of
a set of hidden units at a location in one of the layers to 1, their maximum values,
and setting all visual inputs to 0. The changes in the activity of the output units
were computed and interpreted as an intended saccade.
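The protocol is easy to mimic on any feedforward model: clamp the chosen hidden units to their maximum value of 1, zero the visual input, and read the resulting change in the output as the intended saccade. A minimal sketch with random (untrained) weights, for illustration only; the names and layer sizes are ours:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def stimulate(W_hid, W_out, eye_pos, clamp_idx):
    """Clamp hidden units in clamp_idx to 1 with zero visual input;
    return the change in output, read out as the intended saccade."""
    n_visual = W_hid.shape[1] - eye_pos.size
    x = np.concatenate([np.zeros(n_visual), eye_pos])  # all visual inputs set to 0
    h = sigmoid(W_hid @ x)
    h_stim = h.copy()
    h_stim[clamp_idx] = 1.0                            # "electrical stimulation"
    return W_out @ h_stim - W_out @ h

rng = np.random.default_rng(1)
W_hid = rng.standard_normal((8, 6))   # 8 hidden units; 4 visual + 2 eye-position inputs
W_out = rng.standard_normal((2, 8))   # 2-D saccade output
saccade = stimulate(W_hid, W_out, np.array([0.3, -0.2]), clamp_idx=[0, 1])
```

In the experiments below the clamped set is either all hidden units at one map location or only those of a single type, which is what distinguishes the two stimulation patterns.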
3 Results
We trained several networks with various numbers of hidden units per layer and
found that they all converged to a nearly perfect solution in a few thousand sweeps
through the training set.
3.1 Comparison Between Hidden Units and Cortical Neurons
The influence of eye position on the visual response of a cortical neuron is typically
assessed by finding the visual stimulus eliciting the best response and measuring the
gain of this response at nine different eye fixations [1]. The responses are plotted as
circles with diameters proportional to activity and the set of nine circles is called
the spatial gain field of a unit. We adopted the same procedure for studying the
hidden units in the model.
Figure 4: Eye Movements Evoked by Stimulating the Retina
The units in a fully-developed network have properties that are similar to those
observed in cortical neurons (figure 3). Despite having restricted receptive fields,
the overall activity of most units increased monotonically in one direction of eye
position, each unit having a different preferred direction in head-centered space.
Also, the inner and outer circles, corresponding to the visual activity and the overall
activity (visual plus background) did not always increase along the same direction
due to the nonlinear sigmoid squashing function of the unit.
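The measurement can be reproduced for a single idealized unit: fix the best visual stimulus, evaluate the unit at a 3x3 grid of eye fixations, and check that activity grows monotonically along the unit's preferred direction. The weights below are hand-picked, purely for illustration:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def gain_field(w_vis, w_eye, b, visual_input):
    """Unit activity at the nine eye fixations used to plot a spatial gain field."""
    positions = (-1.0, 0.0, 1.0)
    drive = w_vis * visual_input + b          # fixed best retinal stimulus
    return np.array([[sigmoid(drive + w_eye @ np.array([x, y])) for x in positions]
                     for y in positions])

# A unit whose preferred head-centered direction is rightward: w_eye = (1, 0).
field = gain_field(w_vis=2.0, w_eye=np.array([1.0, 0.0]), b=-1.0, visual_input=1.0)
# Activity (circle diameter in a gain-field plot) increases left-to-right in every row.
assert (np.diff(field, axis=1) > 0).all()
```

Each hidden unit in the model has its own w_eye, hence its own preferred direction in head-centered space, which is what the spatial gain fields in Figure 3 display.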
3.2 Significance of the Spatial Gain Fields
Each hidden layer of the network has a retinotopic map but also contains spatiotopic
(i.e. head-centered) information through the spatial gain fields. We call these
retinospatiotopic maps.
At each location on a map, R is implicitly encoded by the position of a unit on
the map, and E is provided by the inputs from the eye position units. Thus,
each location contains all the information needed to recover P, the head-centered
coordinate. Therefore, all of the visual features in the map, such as orientation or
color, are encoded in head-centered coordinates. This suggests that some visual
representations in V1 and V3a may be retinospatiotopic.
3.3 Electrical Stimulation Experiments
Can electrical stimulation experiments distinguish between a purely retinotopic
map, like the retina, and retinospatiotopic maps, like each of the hidden layers?
[Figure: evoked eye movements plotted over the maps of hidden layer 2 and hidden layer 3, with one hidden unit type stimulated]
Figure 5: Eye Movements Evoked by Stimulating one Hidden Unit Type

When input units in the retina are stimulated, the direction of the evoked movement is determined by the location of the stimulation site on the map (figure 4), as expected from a purely retinotopic map. For example, stimulating units in the upper
left corner of the map produces an output in the upper left direction, regardless of
initial eye position.
There were several types of hidden units at each spatial position of a hidden layer.
When the hidden units were stimulated independently, the pattern of induced eye
movements was no longer a function solely of the location of the stimulation (figure 5). Other factors, such as the preferred head-centered direction of the stimulated cell, were also important. Hence, the intermediate maps were not purely
retinotopic.
If all the hidden units present at one location in a hidden layer were activated
together, the pattern of outputs resembled the one obtained by stimulating the
input layer (figure 6). Even though each hidden unit has a different preferred
head-centered direction, when simultaneously activated, these directions balanced
out and the dominant factor became the location of the stimulation.
Strong electrical stimulation in area V1 of the visual cortex is likely to recruit many
neurons whose receptive fields share the same retinal location. This might explain
why McIlwain [9] observed eye movements in directions that depended only on the
position of the stimulation site. In higher visual areas with weaker retinotopy, it
might be possible to obtain patterns closer to those produced by stimulating only
one type of hidden unit. Such patterns of eye movements have already been observed
in parietal area LIP [4].
[Figure: evoked eye movements plotted over the maps of hidden layer 2 and hidden layer 3, with all hidden unit types at a location stimulated together]
Figure 6: Eye Movements Evoked by Stimulating all Hidden Unit Types
4 Discussion and Predictions
The analysis of our hierarchical model shows that the gain modulation of visual responses observed at early stages of visual processing is consistent with the hypothesis that low-level visual features are encoded in head-centered coordinates. What experiments could confirm this hypothesis?
Electrical stimulation cannot distinguish between a retinotopic and a retinospatiotopic representation unless the number of neurons stimulated is small or restricted
to those with similar gain fields. This might be possible in an intermediate level of
processing, such as area V3a.
Most psychophysical experiments have been designed to test for purely head-centered maps [10, 5] and not for retinotopic maps receiving a static eye position
signal. New experiments are needed that look for interactions between eye position
and visual features. For example, it should be possible to obtain motion aftereffects
that are dependent on eye position; that is, an aftereffect in which the direction
of motion depends on the gaze direction. John Mayhew [8] has already reported
this type of gaze-dependent aftereffect for rotation, which is probably represented
at later stages of visual processing. Similar experiments with translational motion
could probe earlier levels of visual processing.
If information on spatial location is already present in area V1, the primary visual
area that projects to other areas of the visual cortex in primates, then we need to
re-evaluate the representation of objects in visual cortex. In the model presented
here, the spatial location of an object was encoded along with its other features in
a distributed fashion; hence spatial location should be considered on equal footing
with other features of an object. Such early spatial transformations would affect
other aspects of visual processing, such as visual attention and object recognition,
and may also be important for nonspatial tasks, such as shape constancy (John
Mayhew, personal communication).
References
[1] R.A. Andersen, G.K. Essick, and R.M. Siegel. Encoding of spatial location by
posterior parietal neurons. Science, 230:456-458, 1985.
[2] P.R. Brotchie and R.A. Andersen. A body-centered coordinate system in posterior parietal cortex. In Neurosc. Abst., page 1281, New Orleans, 1991.
[3] C. Galleti and P.P. Battaglini. Gaze-dependent visual neurons in area v3a of
monkey prestriate cortex. J. Neurosc., 9:1112-1125, 1989.
[4] S.J. Goodman and R.A. Andersen. Microstimulations of a neural network
model for visually guided saccades. J. Cog. Neurosc., 1:317-326, 1989.
[5] D.E. Irwin, J.L. Zacks, and J.S. Brown. Visual memory and the perception of
a stable visual environment. Perc. Psychophy., 47:35-46, 1990.
[6] R. Lal and M.J. Freedlander. Gating of retinal transmission by afferent eye
position and movement signals. Science, 243:93-96, 1989.
[7] Y. LeCun, B. Boser, J.S. Denker, D. Henderson, R.E. Howard, and 1.D. Jackel.
Backpropagation applied to handwritten zip code recognition. Neural Computation, 1:540-566, 1990.
[8] J.E.W. Mayhew. After-effects of movement contingent on direction of gaze.
Vision Res., 13:877-880, 1973.
[9] J.T. McIlwain. Saccadic eye movements evoked by electrical stimulation of
the cat visual cortex. Visual Neurosc., 1:135-143, 1988.
[10] J.K. O'Regan and A. Levy-Schoen. Integrating visual information from successive fixations: does trans-saccadic fusion exist? Vision Res., 23:765-768,
1983.
[11] D.E. Rumelhart, G.E. Hinton, and R.J. Williams. Learning internal representations by error propagation. In D. E. Rumelhart, J. L. McClelland, and
the PDP Research Group, editors, Parallel Distributed Processing, volume 1,
chapter 8, pages 318-362. MIT Press, Cambridge, MA, 1986.
[12] Y. Trotter, S. Celebrini, S.J. Thorpe, and M. Imbert. Modulation of stereoscopic processing in primate visual cortex V1 by the distance of fixation. In Neurosc. Abs., New-Orleans, 1991.
[13] T.G. Weyand and J.G. Malpeli. Responses of neurons in primary visual cortex
are influenced by eye position. In Neurosc. Abs., page 419.7, St Louis, 1990.
[14] D. Zipser and R.A. Andersen. A back-propagation programmed network that simulates response properties of a subset of posterior parietal neurons. Nature, 331:679-684, 1988.
Robust Transfer Principal Component Analysis
with Rank Constraints
Yuhong Guo
Department of Computer and Information Sciences
Temple University, Philadelphia, PA 19122, USA
[email protected]
Abstract
Principal component analysis (PCA), a well-established technique for data analysis and processing, provides a convenient form of dimensionality reduction that is
effective for removing small Gaussian noise present in the data. However, the
applicability of standard principal component analysis in real scenarios is limited
by its sensitivity to large errors. In this paper, we tackle the challenge problem of
recovering data corrupted with errors of high magnitude by developing a novel
robust transfer principal component analysis method. Our method is based on the
assumption that useful information for the recovery of a corrupted data matrix can
be gained from an uncorrupted related data matrix. Specifically, we formulate the
data recovery problem as a joint robust principal component analysis problem on
the two data matrices, with common principal components shared across matrices
and individual principal components specific to each data matrix. The formulated
optimization problem is a minimization problem over a convex objective function
but with non-convex rank constraints. We develop an efficient proximal projected
gradient descent algorithm to solve the proposed optimization problem with convergence guarantees. Our empirical results over image denoising tasks show the
proposed method can effectively recover images corrupted with random large errors, and significantly outperform both standard PCA and robust PCA with rank constraints.
1 Introduction
Dimensionality reduction, as an important form of unsupervised learning, has been widely explored
for analyzing complex data such as images, video sequences, text documents, etc. It has been used to
discover important latent information about observed data matrices for visualization, feature recovery, embedding and data cleaning. The fundamental assumption underlying dimensionality reduction
is that the intrinsic structure of high dimensional observation data lies on a low dimensional linear
subspace. Principal component analysis (PCA) [7] is a classic and one of the most commonly used dimensionality reduction methods. It seeks the best low-rank approximation of the given data matrix
under a well understood least-squares reconstruction loss, and projects data onto uncorrelated low
dimensional subspace. Moreover, it admits an efficient procedure for computing optimal solutions
via the singular value decomposition. These properties make PCA a well suited reduction method
when the observed data is mildly corrupted with small Gaussian noise [12]. But standard PCA is
very sensitive to the high magnitude errors of the observed data. Even a small fraction of large
errors can cause severe degradation in PCA's estimate of the low rank structure.
Real-life data, however, is often corrupted with large errors or even missing observations. To tackle
dimensionality reduction with arbitrarily large errors and outliers, a number of approaches that robustify PCA have been developed in the literature, including ℓ1-norm regularized robust PCA [14],
influence function techniques [5, 13], and alternating ℓ1-norm minimization [8]. Nevertheless, the
capacity of these approaches to recover the low-rank structure of a corrupted data matrix can
still degrade as the fraction of large errors increases.
In this paper, we propose a novel robust transfer principal component analysis method to recover the
low rank representation of heavily corrupted data by leveraging related uncorrupted auxiliary data.
Seeking knowledge transfer from a related auxiliary data source for the target learning problem has
been popularly studied in supervised learning. It is also known that modeling related data sources
together provides rich information for discovering theirs shared subspace representations [4]. We
extend such a transfer learning scheme into the PCA framework to perform joint robust principal
component analysis over a corrupted target data matrix and a related auxiliary source data matrix
by enforcing the two robust PCA operations on the two data matrices to share a subset of common principal components, while maintaining their unique variations through individual principal
components specific for each data matrix. This robust transfer PCA framework combines aspects
of both robust PCA and transfer learning methodologies. We expect the critical low rank structure
shared between the two data matrices can be effectively transferred from the uncorrupted auxiliary
data to recover the low dimensional subspace representation of the heavily corrupted target data
in a robust manner. We formulate this robust transfer PCA as a joint minimization problem over
a convex combination of least squares losses with non-convex matrix rank constraints. Though a
simple relaxation of the matrix rank constraints into convex nuclear norm constraints can lead to a
convex optimization problem, it is very difficult to control the rank of the low-rank representation
matrix we aim to recover through the nuclear norm. We thus develop a proximal projected gradient
descent optimization algorithm to solve the proposed optimization problem with rank constraints,
which permits a convenient closed-form solution for each proximal step based on singular value
decomposition and converges to a stationary point. Our experiments over image denoising tasks
show the proposed method can effectively recover images corrupted with random large errors, and
significantly outperform both standard PCA and robust PCA with rank constraints.
Notations: In this paper, we use I_n to denote an n × n identity matrix, use O_{n,m} to denote an n × m
matrix with all 0 values, use ‖·‖_F to denote the matrix Frobenius norm, and use ‖·‖_* to denote the
nuclear norm (trace norm).
2 Preliminaries
Assume we are given an observed data matrix X ∈ ℝ^{n×d} consisting of n observations of d-dimensional feature vectors, which was generated by corrupting some entries of a latent low-rank
matrix M ∈ ℝ^{n×d} with an error matrix E ∈ ℝ^{n×d} such that X = M + E. We aim to recover
the low-rank matrix M by projecting the high dimensional observations X into a low dimensional
manifold representation matrix Z ∈ ℝ^{n×k} over the low dimensional subspace B ∈ ℝ^{k×d}, such that
M = ZB, B B^⊤ = I_k for k < d.
2.1 PCA
Given the above setup, standard PCA assumes the error matrix E contains small i.i.d. Gaussian
noise, and seeks an optimal low dimensional encoding matrix Z and basis matrix B to reconstruct X
by X = ZB + E. Under a least squares reconstruction loss, PCA is equivalent to the following
self-supervised regression problem

    min_{Z,B} ‖X − ZB‖²_F   s.t.  B B^⊤ = I_k.    (1)

That is, standard PCA seeks the best rank-k estimate of the latent low-rank matrix M = ZB by
solving

    min_M ‖X − M‖²_F   s.t.  rank(M) ≤ k.    (2)

Although the optimization problem in (1) or (2) is not convex and does not appear to be easy, it can
be efficiently solved by performing a singular value decomposition (SVD) over X, and permits the
following closed-form solution

    B* = V_k^⊤,  Z* = X B*^⊤,  M* = Z* B*,    (3)

where V_k is comprised of the top k right singular vectors of X. With this convenient solution, standard PCA has been widely used for modern data analysis and serves as an efficient and effective
dimensionality reduction procedure when the error E is small and i.i.d. Gaussian [7].
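As an aside for readers implementing this: the closed-form solution in (3) is only a few lines of NumPy. The sketch below is our own illustration (function and variable names are ours, not from the paper), computing the rank-k basis, encoding, and reconstruction via SVD:

```python
import numpy as np

def pca_low_rank(X, k):
    """Closed-form PCA: best rank-k approximation of X under the
    squared Frobenius loss, computed via singular value decomposition.

    Returns the encoding Z = X B^T, the basis B (rows are the top-k
    right singular vectors, so B B^T = I_k), and M = Z B.
    """
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    B = Vt[:k]           # B* = V_k^T (top-k right singular vectors)
    Z = X @ B.T          # Z* = X B*^T, the low-dimensional encoding
    M = Z @ B            # M* = Z* B*, the rank-k reconstruction
    return Z, B, M

rng = np.random.default_rng(0)
X = rng.standard_normal((50, 20))
Z, B, M = pca_low_rank(X, k=5)
```

By the Eckart–Young theorem, the matrix M returned here attains the minimum of ‖X − M‖²_F over all matrices of rank at most k, which is exactly problem (2).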
2.2 Robust PCA
The validity of standard PCA however breaks down when corrupted errors in the observed data
matrix are large. Note that even a single grossly corrupted entry in the observation matrix X can
render the recovered matrix M* to be shifted far away from the true low-rank matrix M. To recover
the intrinsic low-rank matrix M from the observation matrix X corrupted with sparse large errors
E, a polynomial-time robust PCA method has been developed in [14], which induces the following
optimization problem

    min_{M,E} rank(M) + λ ‖E‖_0   s.t.  X = M + E.    (4)

By relaxing the non-convex rank function and the ℓ0-norm into their convex envelopes of nuclear
norm and ℓ1-norm respectively, a convex relaxation of the robust PCA can be yielded

    min_{M,E} ‖M‖_* + λ ‖E‖_1   s.t.  X = M + E.    (5)

With an appropriate choice of the λ parameter, one can exactly recover the M, E matrices that generated
the observations X by solving this convex program.

To produce a scalable optimization for robust PCA, a more convenient relaxed formulation has been
considered in [14]

    min_{M,E} ‖M‖_* + λ ‖E‖_1 + (μ/2) ‖M + E − X‖²_F    (6)

where the original equality constraint is replaced with a reconstruction loss penalty term. This formulation apparently seeks the lowest rank M that can best reconstruct the observation matrix X
subject to sparse errors E.

Though robust PCA can effectively recover the low-rank matrix given very sparse large errors in the
observed data, its performance can be degraded when the observation data is heavily corrupted with
dense large errors. In this work, we propose to tackle this problem by exploiting information from
related uncorrupted auxiliary data.
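Before moving on, it may help to note that the ℓ1 terms in formulations such as (5) and (6) are typically handled with elementwise soft-thresholding, the proximal operator of the ℓ1-norm. A minimal sketch (our own illustration, not code from [14]):

```python
import numpy as np

def soft_threshold(A, tau):
    """Elementwise soft-thresholding (shrinkage) operator.

    Returns the minimizer of  tau * ||E||_1 + 0.5 * ||E - A||_F^2,
    i.e. sign(A) * max(|A| - tau, 0) applied entrywise.
    """
    return np.sign(A) * np.maximum(np.abs(A) - tau, 0.0)

A = np.array([[1.5, -0.2],
              [-3.0, 0.4]])
E = soft_threshold(A, 0.5)
```

Entries with magnitude below the threshold are zeroed out, which is what makes the recovered error matrix E sparse.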
3 Robust Transfer PCA
Exploring labeled information in a related auxiliary data set to assist the learning problem on a target
data set has been widely studied in supervised learning scenarios within the context of transfer learning, domain adaptation and multi-task learning [10]. Moreover, it has also been shown that modeling
related data sources together can provide useful information for discovering their shared subspace
representations in an unsupervised manner [4]. The principle behind these knowledge transfer learning approaches is that related data sets can complement each other on identifying the intrinsic latent
structure shared between them.
Following this transfer learning scheme, we present a robust transfer PCA method for recovering
a low-rank matrix from a heavily corrupted observation matrix. Assume we are given a target data
matrix X_t ∈ ℝ^{n_t×d} corrupted with errors of large magnitude, and a related source data matrix
X_s ∈ ℝ^{n_s×d}. The robust transfer PCA aims to achieve the following robust joint matrix factorization

    X_s = N_s B_c + Z_s B_s + E_s,    (7)
    X_t = N_t B_c + Z_t B_t + E_t,    (8)

where B_c ∈ ℝ^{k_c×d} is the orthogonal basis matrix shared between the two data matrices, B_s ∈ ℝ^{k_s×d}
and B_t ∈ ℝ^{k_t×d} are the orthogonal basis matrices specific to each data matrix respectively,
N_s ∈ ℝ^{n_s×k_c}, N_t ∈ ℝ^{n_t×k_c}, Z_s ∈ ℝ^{n_s×k_s}, Z_t ∈ ℝ^{n_t×k_t} are the corresponding low dimensional reconstruction coefficient matrices, and E_s ∈ ℝ^{n_s×d} and E_t ∈ ℝ^{n_t×d} represent the additive errors in each
data matrix. Let Z_c = [N_s; N_t]. Given the constant matrices A_s = [I_{n_s}, O_{n_s,n_t}] and A_t = [O_{n_t,n_s}, I_{n_t}],
we can re-express N_s and N_t in terms of the unified matrix Z_c such that N_s = A_s Z_c and N_t = A_t Z_c.
The learning problem of robust transfer PCA can then be formulated as the following joint minimization problem

    min_{Z_c,Z_s,Z_t,B_c,B_s,B_t,E_s,E_t}  (α_s/2) ‖A_s Z_c B_c + Z_s B_s + E_s − X_s‖²_F
                                          + (α_t/2) ‖A_t Z_c B_c + Z_t B_t + E_t − X_t‖²_F
                                          + β_s ‖E_s‖_1 + β_t ‖E_t‖_1    (9)
    s.t.  B_c B_c^⊤ = I_{k_c},  B_s B_s^⊤ = I_{k_s},  B_t B_t^⊤ = I_{k_t},
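To make the role of the constant matrices A_s and A_t concrete, the small NumPy check below (our own illustration) confirms that multiplying the stacked matrix Z_c = [N_s; N_t] by the two selector matrices recovers the source and target blocks:

```python
import numpy as np

ns, nt, kc = 3, 2, 4
rng = np.random.default_rng(0)
Ns = rng.standard_normal((ns, kc))
Nt = rng.standard_normal((nt, kc))
Zc = np.vstack([Ns, Nt])                     # Zc = [Ns; Nt]

# A_s = [I_ns, O_{ns,nt}]  and  A_t = [O_{nt,ns}, I_nt]
As = np.hstack([np.eye(ns), np.zeros((ns, nt))])
At = np.hstack([np.zeros((nt, ns)), np.eye(nt)])

assert np.allclose(As @ Zc, Ns)              # As Zc recovers the source block Ns
assert np.allclose(At @ Zc, Nt)              # At Zc recovers the target block Nt
```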
which minimizes the least squares reconstruction losses on both data matrices with ℓ1-norm regularizers over the additive error matrices. The intuition is that by sharing the common column basis
vectors in B_c, one can best capture the common intrinsic low-rank structure of the data based on limited observations from both data sets, and by allowing data embedding onto the individual basis vectors
B_s, B_t, the additional low-rank structure specific to each data set can be captured. Nevertheless, this
is a difficult non-convex optimization problem as both the objective function and the orthogonality
constraints are non-convex. To simplify this optimization problem, we introduce the replacements

    M_c = Z_c B_c,  M_s = Z_s B_s,  M_t = Z_t B_t    (10)

and rewrite the optimization problem (9) equivalently into the formulation below

    min_{M_c,M_s,M_t,E_s,E_t}  (α_s/2) ‖A_s M_c + M_s + E_s − X_s‖²_F
                              + (α_t/2) ‖A_t M_c + M_t + E_t − X_t‖²_F
                              + β_s ‖E_s‖_1 + β_t ‖E_t‖_1    (11)
    s.t.  rank(M_c) ≤ k_c,  rank(M_s) ≤ k_s,  rank(M_t) ≤ k_t,
which has an ℓ1-norm regularized convex objective function, but is subject to non-convex rank inequality constraints. A standard convexification of the rank constraints is to replace the rank functions
with their convex envelopes, the nuclear norms [3, 14, 1, 6, 15]. For example, one can replace the rank
constraints in (11) with relaxed nuclear norm regularizers in the objective function

    min_{M_c,M_s,M_t,E_s,E_t}  (α_s/2) ‖A_s M_c + M_s + E_s − X_s‖²_F
                              + (α_t/2) ‖A_t M_c + M_t + E_t − X_t‖²_F
                              + β_s ‖E_s‖_1 + β_t ‖E_t‖_1
                              + γ_c ‖M_c‖_* + γ_s ‖M_s‖_* + γ_t ‖M_t‖_*.    (12)
Many efficient and scalable algorithms have been proposed to solve such nuclear norm regularized convex optimization problems, including the proximal gradient algorithm [6, 14], and fixed point and
Bregman iterative methods [9]. However, though the nuclear norm is a convex envelope of the rank
function, it is not always a high-quality approximation of the rank function [11]. Moreover, it is very
difficult to select the appropriate trade-off parameters γ_s, γ_t for the nuclear norm regularizers in (12)
to recover the low-rank matrix solutions of the original optimization in (11). In principal component
analysis problems it is much more convenient to have explicit control on the rank of the low-rank solution matrices. Therefore, instead of solving the nuclear norm based convex optimization problem
(12), we develop a scalable and efficient proximal gradient algorithm to solve the rank constraint
based minimization problem (11) directly, which is shown to converge to a stationary point.

After solving the optimization problem (11), the low-rank approximation of the corrupted matrix X_t
can be obtained as X̂_t = A_t M_c + M_t.
4 Proximal Projected Gradient Descent Algorithm
Proximal gradient methods have been popularly used for unconstrained convex optimization problems with continuous but non-smooth regularizers [2]. In this work, we develop a proximal projected
gradient algorithm to solve the non-convex optimization problem with matrix rank constraints in
(11). Let Θ = [M_c; M_s; M_t; E_s; E_t] be the optimization variable set of (11). We denote the objective function of (11) as F(Θ) such that

    F(Θ) = f(Θ) + g(Θ)    (13)
    f(Θ) = (α_s/2) ‖A_s M_c + M_s + E_s − X_s‖²_F + (α_t/2) ‖A_t M_c + M_t + E_t − X_t‖²_F    (14)
    g(Θ) = β_s ‖E_s‖_1 + β_t ‖E_t‖_1    (15)
Algorithm 1 Proximal Projected Gradient Descent
Input: data matrices X_s, X_t, parameters α_s, α_t, β_s, β_t, k_c, k_s, k_t.
Set ρ = 3 max(α_s, α_t), k = 1.
Initialize M_c^(1), M_s^(1), M_t^(1), E_s^(1), E_t^(1).
While not converged do
  • Set Θ^(k) = [M_c^(k); M_s^(k); M_t^(k); E_s^(k); E_t^(k)].
  • Update M_c^(k+1) = p_{M_c}(ρ, Θ^(k)), M_s^(k+1) = p_{M_s}(ρ, Θ^(k)), M_t^(k+1) = p_{M_t}(ρ, Θ^(k)),
    E_s^(k+1) = p_{E_s}(ρ, Θ^(k)), E_t^(k+1) = p_{E_t}(ρ, Θ^(k)).
  • Set k = k + 1.
End While
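For concreteness, here is a hedged NumPy sketch of Algorithm 1 applied to problem (11). This is our own implementation sketch under the notation above, not the authors' code; the fixed iteration count and the zero initialization are simplifying assumptions:

```python
import numpy as np

def svd_rank_project(A, k):
    # Best rank-k approximation of A (truncated SVD).
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return (U[:, :k] * s[:k]) @ Vt[:k]

def soft_threshold(A, tau):
    # Proximal operator of tau * ||.||_1 (elementwise shrinkage).
    return np.sign(A) * np.maximum(np.abs(A) - tau, 0.0)

def rt_pca(Xs, Xt, kc, ks, kt, alpha_s=1.0, alpha_t=1.0,
           beta_s=0.1, beta_t=0.1, n_iter=200):
    ns, d = Xs.shape
    nt = Xt.shape[0]
    rho = 3.0 * max(alpha_s, alpha_t)        # Lipschitz constant of grad f
    As = np.hstack([np.eye(ns), np.zeros((ns, nt))])
    At = np.hstack([np.zeros((nt, ns)), np.eye(nt)])
    Mc = np.zeros((ns + nt, d))
    Ms, Es = np.zeros((ns, d)), np.zeros((ns, d))
    Mt, Et = np.zeros((nt, d)), np.zeros((nt, d))
    for _ in range(n_iter):
        # Residuals at the current iterate (these equal the gradient
        # blocks of f with respect to Ms/Es and Mt/Et respectively).
        Rs = alpha_s * (As @ Mc + Ms + Es - Xs)
        Rt = alpha_t * (At @ Mc + Mt + Et - Xt)
        gMc = As.T @ Rs + At.T @ Rt          # gradient block w.r.t. Mc
        # Closed-form proximal / projection updates.
        Mc = svd_rank_project(Mc - gMc / rho, kc)
        Ms = svd_rank_project(Ms - Rs / rho, ks)
        Mt = svd_rank_project(Mt - Rt / rho, kt)
        Es = soft_threshold(Es - Rs / rho, beta_s / rho)
        Et = soft_threshold(Et - Rt / rho, beta_t / rho)
    return At @ Mc + Mt                      # low-rank estimate of Xt
```

All five blocks are updated from the same previous iterate, matching the simultaneous update in Algorithm 1.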
Here f(Θ) is a convex and continuously differentiable function while g(Θ) is a convex but non-smooth function. For any ρ > 0, we consider the following quadratic approximation of F(Θ) at a
given point Θ̃ = [M̃_c; M̃_s; M̃_t; Ẽ_s; Ẽ_t]:

    Q_ρ(Θ, Θ̃) = f(Θ̃) + ⟨Θ − Θ̃, ∇f(Θ̃)⟩ + (ρ/2) ‖Θ − Θ̃‖²_F + g(Θ)    (16)

where ∇f(Θ̃) is the gradient of the function f(Θ) at the point Θ̃. Let C = {Θ : rank(M_c) ≤ k_c,
rank(M_s) ≤ k_s, rank(M_t) ≤ k_t}. The minimization over Q_ρ(Θ, Θ̃) can be conducted as

    p(ρ, Θ̃) = argmin_{Θ∈C} Q_ρ(Θ, Θ̃) = argmin_{Θ∈C} g(Θ) + (ρ/2) ‖Θ − (Θ̃ − (1/ρ) ∇f(Θ̃))‖²_F    (17)

which admits the following closed-form solution through singular value decomposition and soft-thresholding:

    p_{M_c}(ρ, Θ̃) = U_{k_c} Σ_{k_c} V_{k_c}^⊤,  for U Σ V^⊤ = SVD(M̃_c − (1/ρ) ∇_{M_c} f(Θ̃))
    p_{M_s}(ρ, Θ̃) = U_{k_s} Σ_{k_s} V_{k_s}^⊤,  for U Σ V^⊤ = SVD(M̃_s − (1/ρ) ∇_{M_s} f(Θ̃))
    p_{M_t}(ρ, Θ̃) = U_{k_t} Σ_{k_t} V_{k_t}^⊤,  for U Σ V^⊤ = SVD(M̃_t − (1/ρ) ∇_{M_t} f(Θ̃))
    p_{E_s}(ρ, Θ̃) = (|Ê_s| − β_s/ρ)_+ ⊙ sign(Ê_s),  with Ê_s = Ẽ_s − (1/ρ) ∇_{E_s} f(Θ̃)
    p_{E_t}(ρ, Θ̃) = (|Ê_t| − β_t/ρ)_+ ⊙ sign(Ê_t),  with Ê_t = Ẽ_t − (1/ρ) ∇_{E_t} f(Θ̃)

where U_k and V_k denote the top k singular vectors from U and V respectively, and Σ_k denotes
the diagonal matrix with the corresponding top k singular values, for k ∈ {k_c, k_s, k_t} respectively;
the operator ⊙ denotes the matrix Hadamard product, and the operator (·)_+ = max(·, 0); ∇_{M_c} f(Θ̃),
∇_{M_s} f(Θ̃), ∇_{M_t} f(Θ̃), ∇_{E_s} f(Θ̃), and ∇_{E_t} f(Θ̃) denote the parts of the gradient matrix ∇f(Θ̃) corresponding to M_c, M_s, M_t, E_s, E_t respectively.
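As a quick numerical sanity check of these closed forms (our own illustration): soft-thresholding should globally minimize the ℓ1-plus-quadratic objective of (17) restricted to an E block, and the truncated SVD should beat any other rank-k candidate on the quadratic term, by the Eckart–Young theorem:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((8, 6))   # stands in for a gradient-step point
rho, beta, k = 2.0, 0.5, 2

# Soft-thresholding minimizes beta*||E||_1 + (rho/2)*||E - A||_F^2.
def l1_obj(E):
    return beta * np.abs(E).sum() + 0.5 * rho * ((E - A) ** 2).sum()

E_star = np.sign(A) * np.maximum(np.abs(A) - beta / rho, 0.0)
for _ in range(20):
    assert l1_obj(E_star) <= l1_obj(E_star + 0.1 * rng.standard_normal(A.shape))

# Truncated SVD minimizes ||M - A||_F^2 over rank(M) <= k.
U, s, Vt = np.linalg.svd(A, full_matrices=False)
M_star = (U[:, :k] * s[:k]) @ Vt[:k]
for _ in range(20):
    C = rng.standard_normal((8, k)) @ rng.standard_normal((k, 6))
    assert ((M_star - A) ** 2).sum() <= ((C - A) ** 2).sum()
```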
Our proximal projected gradient algorithm is an iterative procedure. After first initializing the parameter matrices to zeros, in each k-th iteration it updates the model parameters by minimizing the
approximation function Q_ρ(Θ, Θ^(k)) at the given point Θ^(k), using the closed-form update equations
above. The overall procedure is given in Algorithm 1. Below we discuss the convergence property
of this algorithm.
Lemma 1  For ρ = 3 max(α_s, α_t), we have F(Θ) ≤ Q_ρ(Θ, Θ̃) for every feasible Θ, Θ̃.

Proof: First, it is easy to check that ρ = 3 max(α_s, α_t) is a Lipschitz constant of ∇f(Θ), such that

    ‖∇f(Θ) − ∇f(Θ̃)‖_F ≤ ρ ‖Θ − Θ̃‖_F  for any feasible pair Θ, Θ̃.    (18)

Thus f(Θ) is a continuously differentiable function with Lipschitz continuous gradient and Lipschitz
constant ρ. Following [2, Lemma 2.1], we have

    f(Θ) ≤ f(Θ̃) + ⟨Θ − Θ̃, ∇f(Θ̃)⟩ + (ρ/2) ‖Θ − Θ̃‖²_F  for any feasible pair Θ, Θ̃.    (19)

Based on (16) and (19), we can then derive

    F(Θ) = f(Θ) + g(Θ) ≤ f(Θ̃) + ⟨Θ − Θ̃, ∇f(Θ̃)⟩ + (ρ/2) ‖Θ − Θ̃‖²_F + g(Θ) = Q_ρ(Θ, Θ̃).    (20)

Based on this lemma, we can see that the update steps of Algorithm 1 satisfy

    F(Θ^(k+1)) ≤ Q_ρ(Θ^(k+1), Θ^(k)) ≤ Q_ρ(Θ^(k), Θ^(k)) = F(Θ^(k)).    (21)

Therefore the sequence of points Θ^(1), Θ^(2), … produced by Algorithm 1 has nonincreasing
function values F(Θ^(1)) ≥ F(Θ^(2)) ≥ …, and converges to a stationary point.
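As a side note on why ρ = 3 max(α_s, α_t) is a valid Lipschitz constant (our own short verification, not part of the original proof): since A_s selects the first n_s rows of M_c and A_t selects the remaining n_t rows, the two squared terms of f depend on disjoint coordinate blocks, and each term couples its three blocks (M_c, M_s, E_s, respectively M_c, M_t, E_t) through an operator of the form [A, I, I]. Using A_s A_s^⊤ = I_{n_s},

    ‖[A_s, I, I] [A_s, I, I]^⊤‖ = ‖A_s A_s^⊤ + 2I‖ = 3,

so the Hessian of f restricted to each block has spectral norm at most 3α_s (respectively 3α_t), which gives the Lipschitz constant 3 max(α_s, α_t) for ∇f.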
5 Experiments
We evaluate the proposed approach using image denoising tasks constructed on the Yale Face
Database, which contains 165 grayscale images of 15 individuals. There are 11 images per subject,
one per different facial expression or configuration.

Our goal is to investigate the performance of the proposed approach on recovering data corrupted
with large and dense errors. Thus we constructed noisy images by adding large errors. Let X_t^0 denote
a target image matrix from one subject, which has values between 0 and 255. We randomly select
a fraction of its pixels and add large errors so they reach the value 255, where the fraction of noisy pixels is
controlled using a noise level parameter σ. The obtained noisy target image matrix is X_t. We then
use an uncorrupted image matrix X_s^0 from the same or a different subject as the source matrix to help
the image denoising of X_t by recovering its low-rank approximation matrix X̂_t. In the experiments,
we compared the performance of the following methods on image denoising with large errors:

• R-T-PCA: This is the proposed robust transfer PCA method. For this method, we used
  parameters α_s = α_t = 1, β_s = β_t = 0.1, unless otherwise specified.
• R-S-PCA: This is a robust shared PCA method that applies a rank-constrained version of
  the robust PCA in [14] on the concatenated matrix [X_s^0; X_t] to recover a low-rank approximation matrix X̂_t with rank k_c + k_t.
• R-PCA: This is a robust PCA method that applies a rank-constrained version of the robust
  PCA in [14] on X_t to recover a low-rank approximation matrix X̂_t with rank k_c + k_t.
• S-PCA: This is a shared PCA method that applies PCA on the concatenated matrix [X_s^0; X_t] to
  recover a low-rank approximation matrix X̂_t with rank k_c + k_t.
• PCA: This method applies PCA on the noisy target matrix X_t to recover a low-rank approximation matrix X̂_t with rank k_c + k_t.
• R-2Step-PCA: This method exploits the auxiliary source matrix by first performing robust
  PCA over the concatenated matrix [X_s^0; X_t] to produce a shared matrix M_c with rank k_c,
  and then performing robust PCA over the residue matrix (X_t − A_t M_c) to produce a matrix
  M_t with rank k_t. The final low-rank approximation of X_t is given by X̂_t = A_t M_c + M_t.

All the methods are evaluated using the root mean square error (RMSE) between the true target
image matrix X_t^0 and the low-rank approximation matrix X̂_t recovered from the noisy image matrix.
Unless specified otherwise, we used k_c = 8, k_s = 3, k_t = 3 in all experiments.
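The corruption and evaluation protocol above can be sketched in a few lines of NumPy (synthetic data stands in for the Yale face images; function and variable names are ours):

```python
import numpy as np

def corrupt(X0, sigma, rng):
    """Set a random fraction sigma of the pixels to the maximum value 255."""
    X = X0.copy()
    mask = rng.random(X.shape) < sigma
    X[mask] = 255.0
    return X

def rmse(X0, Xhat):
    """Root mean square error between the clean and recovered matrices."""
    return float(np.sqrt(np.mean((X0 - Xhat) ** 2)))

rng = np.random.default_rng(0)
X0 = rng.uniform(0.0, 255.0, size=(64, 64))   # stand-in "target image"
Xt = corrupt(X0, sigma=0.05, rng=rng)
```

Here `rmse(X0, Xt)` gives the error of the trivial no-denoising baseline, against which the recovered X̂_t can be compared.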
5.1 Intra-Subject Experiments
We first conducted experiments by constructing 15 transfer tasks for the 15 subjects. Specifically, for
each subject, we used the first image matrix as the target matrix and used each of the remaining 10
image matrices as the source matrix each time. For each source matrix, we repeated the experiments
5 times by randomly generating noisy target matrix using the procedure described above. Thus in
total, for each experiment, we have results from 50 runs. The average denoising results in terms of
root mean square error (RMSE) with noise level σ = 5% are reported in Table 1. The standard deviations for these results range between 0.001 and 0.015. We also present one visualization result for
Task-1 in Figure 1. We can see that the proposed method R-T-PCA outperforms all other methods
Table 1: The average denoising results in terms of RMSE at noise level σ = 5%.

Tasks    | R-T-PCA | R-S-PCA | R-PCA | S-PCA | PCA   | R-2Step-PCA
Task-1   | 0.143   | 0.185   | 0.218 | 0.330 | 0.365 | 0.212
Task-2   | 0.134   | 0.167   | 0.201 | 0.320 | 0.353 | 0.202
Task-3   | 0.136   | 0.153   | 0.226 | 0.386 | 0.430 | 0.215
Task-4   | 0.140   | 0.162   | 0.201 | 0.369 | 0.406 | 0.215
Task-5   | 0.142   | 0.166   | 0.241 | 0.382 | 0.414 | 0.208
Task-6   | 0.156   | 0.195   | 0.196 | 0.290 | 0.310 | 0.202
Task-7   | 0.172   | 0.206   | 0.300 | 0.477 | 0.523 | 0.264
Task-8   | 0.203   | 0.222   | 0.223 | 0.348 | 0.386 | 0.243
Task-9   | 0.140   | 0.159   | 0.203 | 0.317 | 0.349 | 0.201
Task-10  | 0.198   | 0.209   | 0.259 | 0.394 | 0.439 | 0.255
Task-11  | 0.191   | 0.211   | 0.283 | 0.389 | 0.423 | 0.274
Task-12  | 0.151   | 0.189   | 0.194 | 0.337 | 0.366 | 0.213
Task-13  | 0.193   | 0.218   | 0.277 | 0.436 | 0.474 | 0.257
Task-14  | 0.176   | 0.201   | 0.240 | 0.366 | 0.392 | 0.224
Task-15  | 0.159   | 0.170   | 0.266 | 0.413 | 0.464 | 0.245

[Figure 1 shows the denoising results on Task-1 as a grid of image panels: Source, Target, Noise: 5%, and the reconstructions by R-T-PCA, R-S-PCA, R-PCA, S-PCA, PCA, and R-2Step-PCA; the original axis tick labels are omitted here.]
Figure 1: Denoising results on Task-1.
across all the 15 tasks. The comparison between the two groups of methods, {R-T-PCA, R-S-PCA,
S-PCA} and {R-PCA, PCA}, shows that a related source matrix is indeed useful for denoising the
target matrix. The superior performance of R-T-PCA over R-2Step-PCA demonstrates the effectiveness of our joint optimization framework over its stepwise alternative. The superior performance
of R-T-PCA over R-S-PCA and S-PCA demonstrates the efficacy of our transfer PCA framework
in exploiting the auxiliary source matrix over methods that simply concatenate the auxiliary source
matrix and target matrix.
5.2 Cross-Subject Experiments
Next, we conducted transfer experiments using source and target matrices from different subjects. We randomly constructed 5 transfer tasks, Task-6-1, Task-8-2, Task-9-4, Task-12-8 and Task-14-11, where the first number in the task name denotes the source subject index and the second number
denotes the target subject index. For example, to construct Task-6-1, we used the first image matrix
from subject-6 as the source matrix and used the first image matrix from subject-1 as the target matrix. For each task, we conducted experiments with two different noise levels, 5% and 10%. We repeated each experiment 10 times using randomly generated noisy target matrix. The average results
in terms of RMSE are reported in Table 2 with standard deviations less than 0.015. We can see that
with the increase of noise level, the performance of all methods degrades. But at each noise level, the
comparison results are similar to what we observed in previous experiments: The proposed method
outperforms all other methods. These results also suggest that even a remotely related source image
can be useful. All these experiments demonstrate the efficacy of the proposed method in exploiting
uncorrupted auxiliary data matrix for denoising target images corrupted with large errors.
Table 2: The average denoising results in terms of RMSE.

Tasks      | σ    | R-T-PCA | R-S-PCA | R-PCA | S-PCA | PCA   | R-2Step-PCA
Task-6-1   | 5%   | 0.147   | 0.177   | 0.224 | 0.337 | 0.370 | 0.218
Task-6-1   | 10%  | 0.203   | 0.246   | 0.326 | 0.490 | 0.526 | 0.291
Task-8-2   | 5%   | 0.132   | 0.159   | 0.234 | 0.313 | 0.354 | 0.200
Task-8-2   | 10%  | 0.154   | 0.211   | 0.323 | 0.457 | 0.500 | 0.276
Task-9-4   | 5%   | 0.148   | 0.170   | 0.229 | 0.373 | 0.410 | 0.212
Task-9-4   | 10%  | 0.204   | 0.240   | 0.344 | 0.546 | 0.585 | 0.282
Task-12-8  | 5%   | 0.207   | 0.231   | 0.245 | 0.373 | 0.397 | 0.249
Task-12-8  | 10%  | 0.244   | 0.272   | 0.359 | 0.518 | 0.548 | 0.317
Task-14-11 | 5%   | 0.172   | 0.215   | 0.274 | 0.403 | 0.424 | 0.268
Task-14-11 | 10%  | 0.319   | 0.368   | 0.431 | 0.592 | 0.612 | 0.372

[Figure 2 shows two plots: on the left, the average RMSE of R-T-PCA as β varies over {0.01, 0.025, 0.05, 0.1, 0.25, 0.5, 1}; on the right, the average RMSE of all six methods under different settings of (k_c, k_s, k_t) ∈ {(6,3,3), (8,3,3), (8,5,5), (10,3,3), (10,5,5)}; detailed axis ticks are omitted here.]
Figure 2: Parameter analysis on Task-6-1 with σ = 5%.
5.3 Parameter Analysis
The optimization problem (11) for the proposed R-T-PCA method has a number of parameters to
be set: α_s, α_t, β_s, β_t, k_c, k_s and k_t. To investigate the influence of these parameters on the performance of the proposed method, we conducted two experiments using the first cross-subject
task, Task-6-1, with noise level σ = 5%. Given that the source and target matrices are similar
in size, in these experiments we set α_s = α_t = 1, β_s = β_t and k_s = k_t. In the first experiment, we
set (k_c, k_s, k_t) = (8, 3, 3) and studied the performance of R-T-PCA with different β_s = β_t = β values, for β ∈ {0.01, 0.025, 0.05, 0.1, 0.25, 0.5, 1}. The average RMSE results over 10 runs are presented in the left sub-figure of Figure 2. We can see that R-T-PCA is quite robust to β within
the range of values {0.05, 0.1, 0.25, 0.5, 1}. In the second experiment, we fixed β_s = β_t = 0.1
and compared R-T-PCA with the other methods across a few different settings of (k_c, k_s, k_t), with
(k_c, k_s, k_t) ∈ {(6, 3, 3), (8, 3, 3), (8, 5, 5), (10, 3, 3), (10, 5, 5)}. The average comparison results in
terms of RMSE are presented in the right sub-figure of Figure 2. We can see that though the performance of all methods varies across different settings, R-T-PCA is less sensitive to the parameter
changes compared to the other methods, and it produced the best result across different settings.
6 Conclusion
In this paper, we developed a novel robust transfer principal component analysis method to recover
the low-rank representation of corrupted data by leveraging related uncorrupted auxiliary data. This
robust transfer PCA framework combines aspects of both robust PCA and transfer learning methodologies. We formulated this method as a joint minimization problem over a convex combination of
least squares losses with non-convex matrix rank constraints, and developed a proximal projected
gradient descent algorithm to solve the proposed optimization problem, which permits a convenient
closed-form solution for each proximal step based on singular value decomposition and converges to
a stationary point. Our experiments over image denoising tasks demonstrated the proposed method
can effectively exploit an auxiliary uncorrupted image to recover images corrupted with random large
errors and significantly outperform a number of comparison methods.
References
[1] F. Bach. Consistency of trace norm minimization. Journal of Machine Learning Research, 9:1019–1048, 2008.
[2] A. Beck and M. Teboulle. A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM J. Imaging Sciences, 2(1):183–202, 2009.
[3] M. Fazel. Matrix Rank Minimization with Applications. PhD thesis, Stanford University, 2002.
[4] S. Gupta, D. Phung, B. Adams, and S. Venkatesh. Regularized nonnegative shared subspace learning. Data Mining and Knowledge Discovery, 26:57–97, 2013.
[5] P. Huber. Robust Statistics. New York, New York, 1981.
[6] S. Ji and J. Ye. An accelerated gradient method for trace norm minimization. In Proc. of International Conference on Machine Learning (ICML), 2009.
[7] I. Jolliffe. Principal Component Analysis. Springer-Verlag, New York, New York, 1986.
[8] Q. Ke and T. Kanade. Robust l1 norm factorization in the presence of outliers and missing data by alternative convex programming. In Proc. of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2005.
[9] S. Ma, D. Goldfarb, and L. Chen. Fixed point and Bregman iterative methods for matrix rank minimization. Mathematical Programming, 2009.
[10] S. Pan and Q. Yang. A survey on transfer learning. IEEE Transactions on Knowledge and Data Engineering, 2010.
[11] B. Recht, M. Fazel, and P. Parrilo. Guaranteed minimum rank solutions to linear matrix equations via nuclear norm minimization. SIAM Review, 52(3):471–501, 2010.
[12] M. Tipping and C. Bishop. Probabilistic principal component analysis. Journal of the Royal Statistical Society, B, 61(3):611–622, 1999.
[13] F. De La Torre and M. Black. A framework for robust subspace learning. International Journal of Computer Vision (IJCV), 54(1-3):117–142, 2003.
[14] J. Wright, Y. Peng, Y. Ma, A. Ganesh, and S. Rao. Robust principal component analysis: Exact recovery of corrupted low-rank matrices by convex optimization. In Advances in Neural Information Processing Systems (NIPS), 2009.
[15] X. Zhang, Y. Yu, M. White, R. Huang, and D. Schuurmans. Convex sparse coding, subspace learning, and semi-supervised extensions. In Proc. of AAAI Conference on Artificial Intelligence (AAAI), 2011.
Online Robust PCA via Stochastic Optimization
Jiashi Feng
ECE Department
National University of Singapore
[email protected]

Huan Xu
ME Department
National University of Singapore
[email protected]

Shuicheng Yan
ECE Department
National University of Singapore
[email protected]
Abstract
Robust PCA methods are typically based on batch optimization and have to load
all the samples into memory during optimization. This prevents them from efficiently processing big data. In this paper, we develop an Online Robust PCA
(OR-PCA) that processes one sample per time instance and hence its memory cost
is independent of the number of samples, significantly enhancing the computation
and storage efficiency. The proposed OR-PCA is based on stochastic optimization
of an equivalent reformulation of the batch RPCA. Indeed, we show that OR-PCA
provides a sequence of subspace estimations converging to the optimum of its
batch counterpart and hence is provably robust to sparse corruption. Moreover,
OR-PCA can naturally be applied for tracking dynamic subspace. Comprehensive
simulations on subspace recovering and tracking demonstrate the robustness and
efficiency advantages of the OR-PCA over online PCA and batch RPCA methods.
1 Introduction
Principal Component Analysis (PCA) [19] is arguably the most widely used method for dimensionality reduction in data analysis. However, standard PCA is brittle in the presence of outliers and corruptions [11]. Thus many techniques have been developed towards robustifying it [12, 4, 24, 25, 7].
One prominent example is the Principal Component Pursuit (PCP) method proposed in [4] that robustly finds the low-dimensional subspace through decomposing the sample matrix into a low-rank
component and an overall sparse component. It is proved that both components can be recovered
exactly through minimizing a weighted combination of the nuclear norm of the first term and the $\ell_1$ norm of the second one. Thus the subspace estimation is robust to sparse corruptions.
However, PCP and other robust PCA methods are all implemented in a batch manner. They need
to access every sample in each iteration of the optimization. Thus, robust PCA methods require
memorizing all samples, in sharp contrast to standard PCA where only the covariance matrix is
needed. This pitfall severely limits their scalability to big data, which are becoming ubiquitous
now. Moreover, for an incremental samples set, when a new sample is added, the optimization
procedure has to be re-implemented on all available samples. This is quite inefficient in dealing with
incremental sample sets such as network detection, video analysis and abnormal events tracking.
Another pitfall of batch robust PCA methods is that they cannot handle the case where the underlying
subspaces are changing gradually. For example, in the video background modeling, the background
is assumed to be static across different frames for applying robust PCA [4]. Such assumption is too
restrictive in practice. A more realistic situation is that the background is changed gradually along
with the camera moving, corresponding to a gradually changing subspace. Unfortunately, traditional
batch RPCA methods may fail in this case.
In order to efficiently and robustly estimate the subspace of a large-scale or dynamic samples set,
we propose an Online Robust PCA (OR-PCA) method. OR-PCA processes only one sample per
time instance and thus is able to efficiently handle big data and dynamic sample sets, saving the
memory cost and dynamically estimating the subspace of evolutional samples. We briefly explain
our intuition here. The major difficulty of implementing the previous RPCA methods, such as PCP,
in an online fashion is that the adopted nuclear norm tightly couples the samples and thus the samples
have to be processed simultaneously. To tackle this, OR-PCA pursues the low-rank component in
a different manner: using an equivalent form of the nuclear norm, OR-PCA explicitly decomposes
the sample matrix into the multiplication of the subspace basis and coefficients plus a sparse noise
component. Through such decomposition, the samples are decoupled in the optimization and can be
processed separately. In particular, the optimization consists of two iterative updating components.
The first one is to project the sample onto the current basis and isolate the sparse noise (explaining
the outlier contamination), and the second one is to update the basis given the new sample.
Our main technical contribution is to show that the above mentioned iterative optimization scheme converges to the global optimal solution of the original PCP formulation, thus establishing the validity
of our online method. Our proof is inspired by recent results from [16], who proposed an online dictionary learning method and provided the convergence guarantee of the proposed online dictionary
learning method. However, [16] can only guarantee that the solution converges to a stationary point
of the optimization problem.
Besides the nice behavior on single subspace recovering, OR-PCA can also be applied for tracking
time-variant subspace naturally, since it updates the subspace estimation timely after revealing one
new sample. We conduct comprehensive simulations to demonstrate the advantages of OR-PCA for
both subspace recovering and tracking in this work.
2 Related Work
The robust PCA algorithms based on nuclear norm minimization to recover low-rank matrices are
now standard, since the seminal works [21, 6]. Recent works [4, 5] have taken the nuclear norm
minimization approach to the decomposition of a low-rank matrix and an overall sparse matrix.
Different from the setting of samples being corrupted by sparse noise, [25, 24] and [7] solve robust
PCA in the case that a few samples are completely corrupted. However, all of these RPCA methods
are implemented in batch manner and cannot be directly adapted to the online setup.
There are only a few pieces of work on online robust PCA [13, 20, 10], which we discuss below.
In [13], an incremental and robust subspace learning method is proposed. The method proposes
to integrate M-estimation into the standard incremental PCA calculation. Specifically, each
newly coming data point is re-weighted by a pre-defined influence function [11] of its residual
to the current estimated subspace. However, no performance guarantee is provided in this work.
In [20], a compressive sensing based recursive robust PCA algorithm is proposed. The proposed
method essentially solves compressive sensing optimization over a small batch of data to update the
principal components estimation instead of using a single sample, and it is not clear how to extend
the method to the latter case. Recently, He et al. propose an incremental gradient descent method
on Grassmannian manifold for solving the robust PCA problem, named GRASTA [10]. In each
iteration, GRASTA uses the gradient of the updated augmented Lagrangian function after revealing
a new sample to perform the gradient descent. However, no theoretic guarantee of the algorithmic
convergence for GRASTA is provided in this work. Moreover, in the experiments in this work, we
show that our proposed method is more robust than GRASTA to the sparse corruption and achieves
higher breakdown point.
The most closely related work to ours in technique is [16], which proposes an online learning method
for dictionary learning and sparse coding. Based on that work, [9] proposes an online nonnegative
matrix factorization method. Both works can be seen as solving online matrix factorization problems
with specific constraints (sparse or non-negative). Though OR-PCA can also be seen as a kind of
matrix factorization, it is essentially different from those two works. In OR-PCA, an additive sparse
noise matrix is considered along with the matrix factorization. Thus the optimization and analysis
are different from the ones in those works. In addition, benefitting from explicitly considering the
noise, OR-PCA is robust to sparse contamination, which is absent in either the dictionary learning or
nonnegative matrix factorization works. Most importantly, in sharp contrast to [16, 9], which show
their methods converge to a stationary point, our method is solving essentially a re-formulation of a
convex optimization, and hence we can prove that the method converges to the global optimum.
After this paper was accepted, we found that similar works were published earlier in [17, 18, 23], applying the same main idea of combining the online learning framework in [16] with the factorization formulation of the nuclear norm. However, our optimization differs from theirs: our proposed algorithm need not determine a step size or solve a Lasso subproblem.
3 Problem Formulation
3.1 Notation
We use bold letters to denote vectors. In particular, $x \in \mathbb{R}^p$ denotes an authentic sample without corruption, $e \in \mathbb{R}^p$ the noise, and $z \in \mathbb{R}^p$ the corrupted observation $z = x + e$. Here $p$ denotes the ambient dimension of the observed samples. Let $r$ denote the intrinsic dimension of the subspace underlying $\{x_i\}_{i=1}^n$, $n$ the number of observed samples, and $t$ the index of the sample/time instance. We use capital letters to denote matrices, e.g., $Z \in \mathbb{R}^{p \times n}$ is the matrix of observed samples, each column $z_i$ of which corresponds to one sample. For an arbitrary real matrix $E$, let $\|E\|_F$ denote its Frobenius norm, $\|E\|_{\ell_1} = \sum_{i,j} |E_{ij}|$ its $\ell_1$-norm seen as a long vector in $\mathbb{R}^{p \times n}$, and $\|E\|_* = \sum_i \sigma_i(E)$ its nuclear norm, i.e., the sum of its singular values.
3.2 Objective Function Formulation
Robust PCA (RPCA) aims to accurately estimate the subspace underlying the observed samples,
even though the samples are corrupted by gross but sparse noise. As one of the most popular RPCA
methods, the Principal Component Pursuit (PCP) method [4] proposes to solve RPCA by decomposing the observed sample matrix Z into a low-rank component X accounting for the low-dimensional
subspace plus an overall sparse component E incorporating the sparse corruption. Under mild conditions, PCP guarantees that the two components X and E can be exactly recovered through solving:
$$\min_{X,E}\; \frac{1}{2}\|Z - X - E\|_F^2 + \lambda_1 \|X\|_* + \lambda_2 \|E\|_1. \qquad (1)$$
To solve the problem in (1), iterative optimization methods such as Accelerated Proximal Gradient
(APG) [15] or Augmented Lagrangian Multiplier (ALM) [14] methods are often used. However,
these optimization methods are implemented in a batch manner. In each iteration of the optimization,
they need to access all samples to perform SVD. Hence a huge storage cost is incurred when solving
RPCA for big data (e.g., web data, large image set).
In this paper, we consider online implementation of PCP. The main difficulty is that the nuclear norm
couples all the samples tightly and thus the samples cannot be considered separately as in typical
online optimization problems. To overcome this difficulty, we use an equivalent form of the nuclear
norm for the matrix X whose rank is upper bounded by r, as follows [21],
$$\|X\|_* = \inf_{L \in \mathbb{R}^{p \times r},\, R \in \mathbb{R}^{n \times r}} \left\{ \frac{1}{2}\|L\|_F^2 + \frac{1}{2}\|R\|_F^2 \;:\; X = LR^T \right\}.$$
Namely, the nuclear norm is re-formulated as an explicit low-rank factorization of $X$. Such a nuclear norm factorization was developed in [3] and is well established in recent works [22, 21]. In this decomposition, $L \in \mathbb{R}^{p \times r}$ can be seen as the basis of the low-dimensional subspace and $R \in \mathbb{R}^{n \times r}$ denotes the coefficients of the samples w.r.t. the basis. Thus, the RPCA problem (1) can be re-formulated as
$$\min_{X,\, L \in \mathbb{R}^{p \times r},\, R \in \mathbb{R}^{n \times r},\, E}\; \frac{1}{2}\|Z - X - E\|_F^2 + \frac{\lambda_1}{2}\left(\|L\|_F^2 + \|R\|_F^2\right) + \lambda_2\|E\|_1, \quad \text{s.t. } X = LR^T.$$
Substituting $X$ by $LR^T$ and removing the constraint, the above problem is equivalent to:
$$\min_{L \in \mathbb{R}^{p \times r},\, R \in \mathbb{R}^{n \times r},\, E}\; \frac{1}{2}\|Z - LR^T - E\|_F^2 + \frac{\lambda_1}{2}\left(\|L\|_F^2 + \|R\|_F^2\right) + \lambda_2\|E\|_1. \qquad (2)$$
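The nuclear norm factorization used above is easy to check numerically: any factorization $X = LR^T$ upper-bounds $\|X\|_*$ by $\frac{1}{2}(\|L\|_F^2 + \|R\|_F^2)$, and the balanced SVD factorization $L = U\Sigma^{1/2}$, $R = V\Sigma^{1/2}$ attains the infimum. A minimal numpy sketch (dimensions chosen for illustration only):

```python
import numpy as np

rng = np.random.default_rng(0)
p, n, r = 40, 60, 5

# A rank-r matrix X = L0 R0^T from an arbitrary factorization.
L0 = rng.standard_normal((p, r))
R0 = rng.standard_normal((n, r))
X = L0 @ R0.T

nuc = np.linalg.norm(X, "nuc")  # nuclear norm: sum of singular values

# Any factorization upper-bounds the nuclear norm ...
ub = 0.5 * (np.linalg.norm(L0, "fro") ** 2 + np.linalg.norm(R0, "fro") ** 2)
assert nuc <= ub + 1e-8

# ... and the balanced factorization from the SVD attains it exactly.
U, s, Vt = np.linalg.svd(X, full_matrices=False)
L_bal = U[:, :r] * np.sqrt(s[:r])
R_bal = Vt[:r].T * np.sqrt(s[:r])
attained = 0.5 * (np.linalg.norm(L_bal, "fro") ** 2
                  + np.linalg.norm(R_bal, "fro") ** 2)
assert np.isclose(attained, nuc)
```

This is why the reformulation (2) can be processed one sample at a time: the coupling induced by the nuclear norm is replaced by two decoupled Frobenius terms.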
Though the reformulated objective function is not jointly convex w.r.t. the variables L and R, we
prove below that the local minima of (2) are global optimal solutions to the original problem (1). The
details are given in the next section.
Given a finite set of samples $Z = [z_1, \ldots, z_n] \in \mathbb{R}^{p \times n}$, solving problem (2) indeed minimizes the following empirical cost function,
$$f_n(L) \triangleq \frac{1}{n}\sum_{i=1}^{n} \ell(z_i, L) + \frac{\lambda_1}{2n}\|L\|_F^2, \qquad (3)$$
where the loss function for each sample is defined as
$$\ell(z_i, L) \triangleq \min_{r,\, e}\; \frac{1}{2}\|z_i - Lr - e\|_2^2 + \frac{\lambda_1}{2}\|r\|_2^2 + \lambda_2\|e\|_1. \qquad (4)$$
The loss function measures the representation error for the sample $z$ on a fixed basis $L$, where the coefficients $r$ on the basis and the sparse noise $e$ associated with each sample are optimized to minimize the loss. In stochastic optimization, one is usually interested in minimizing the expected cost over all the samples [16],
$$f(L) \triangleq \mathbb{E}_z[\ell(z, L)] = \lim_{n \to \infty} f_n(L), \qquad (5)$$
where the expectation is taken w.r.t. the distribution of the samples z. In this work, we first establish
a surrogate function for this expected cost and then optimize the surrogate function for obtaining the
subspace estimation in an online fashion.
4 Stochastic Optimization Algorithm for OR-PCA
We now present our Online Robust PCA (OR-PCA) algorithm. The main idea is to develop a
stochastic optimization algorithm to minimize the empirical cost function (3), which processes one
sample per time instance in an online manner. The coefficients $r$, noise $e$ and basis $L$ are optimized in an alternating manner. In the $t$-th time instance, we obtain the estimate of the basis $L_t$ through minimizing the cumulative loss w.r.t. the previously estimated coefficients $\{r_i\}_{i=1}^t$ and sparse noise $\{e_i\}_{i=1}^t$. The objective function for updating the basis $L_t$ is defined as
$$g_t(L) \triangleq \frac{1}{t}\sum_{i=1}^{t}\left(\frac{1}{2}\|z_i - Lr_i - e_i\|_2^2 + \frac{\lambda_1}{2}\|r_i\|_2^2 + \lambda_2\|e_i\|_1\right) + \frac{\lambda_1}{2t}\|L\|_F^2. \qquad (6)$$
This is a surrogate function of the empirical cost function $f_t(L)$ defined in (3), i.e., it provides an upper bound for $f_t(L)$: $g_t(L) \geq f_t(L)$.
The proposed algorithm is summarized in Algorithm 1. Here, the subproblem in (7) involves solving
a small-size convex optimization problem, which can be solved efficiently by the off-the-shelf solver
(see the supplementary material). To update the basis matrix L, we adopt the block-coordinate
descent with warm restarts [2]. In particular, each column of the basis L is updated individually
while fixing the other columns.
The following theorem is the main theoretic result of the paper, which states that the solution from
Algorithm 1 will converge to the optimal solution of the batch optimization. Thus, the proposed
OR-PCA converges to the correct low-dimensional subspace even in the presence of sparse noise,
as long as the batch version, PCP, works.
Theorem 1. Assume the observations are always bounded. Given that the rank of the optimal solution to (5) is provided as $r$, and that the solution $L_t \in \mathbb{R}^{p \times r}$ provided by Algorithm 1 is full rank, $L_t$ converges to the optimal solution of (5) asymptotically.
Note that the assumption that observations are bounded is quite natural for the realistic data (such as
images, videos). We find in the experiments that the final solution Lt is always full rank. A standard
stochastic gradient descent method may further enhance the computational efficiency, compared
with the used method here. We leave the investigation for future research.
Algorithm 1 Stochastic Optimization for OR-PCA
Input: $\{z_1, \ldots, z_T\}$ (observed data, revealed sequentially), $\lambda_1, \lambda_2 \in \mathbb{R}$ (regularization parameters), $L_0 \in \mathbb{R}^{p \times r}$, $r_0 \in \mathbb{R}^r$, $e_0 \in \mathbb{R}^p$ (initial solutions), $T$ (number of iterations).
for $t = 1$ to $T$ do
  1) Reveal the sample $z_t$.
  2) Project the new sample:
  $$\{r_t, e_t\} = \arg\min_{r,\, e}\; \frac{1}{2}\|z_t - L_{t-1} r - e\|_2^2 + \frac{\lambda_1}{2}\|r\|_2^2 + \lambda_2\|e\|_1. \qquad (7)$$
  3) $A_t \leftarrow A_{t-1} + r_t r_t^T$, $\quad B_t \leftarrow B_{t-1} + (z_t - e_t) r_t^T$.
  4) Compute $L_t$ with $L_{t-1}$ as warm restart using Algorithm 2:
  $$L_t \triangleq \arg\min_L\; \frac{1}{2}\operatorname{Tr}\!\left[L^T (A_t + \lambda_1 I) L\right] - \operatorname{Tr}(L^T B_t). \qquad (8)$$
end for
Return $X_T = L_T R_T^T$ (low-rank data matrix), $E_T$ (sparse noise matrix).
Algorithm 2 The Basis Update
Input: $L = [l_1, \ldots, l_r] \in \mathbb{R}^{p \times r}$, $A = [a_1, \ldots, a_r] \in \mathbb{R}^{r \times r}$, and $B = [b_1, \ldots, b_r] \in \mathbb{R}^{p \times r}$.
$\tilde{A} \leftarrow A + \lambda_1 I$.
for $j = 1$ to $r$ do
$$l_j \leftarrow \frac{1}{\tilde{A}_{j,j}}\left(b_j - L \tilde{a}_j\right) + l_j. \qquad (9)$$
end for
Return $L$.
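For concreteness, Algorithms 1 and 2 can be sketched in numpy as below. The projection step (7) is itself a small convex problem; here it is solved by alternating a closed-form ridge update in $r$ with a soft-thresholding update in $e$, a simple stand-in for the off-the-shelf solver mentioned above. The random initialization of $L$ and the iteration counts are illustrative assumptions, not part of the algorithm's specification:

```python
import numpy as np

def soft_threshold(x, tau):
    """Proximal operator of tau * ||.||_1 (elementwise shrinkage)."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def project_sample(z, L, lam1, lam2, n_iters=50):
    """Step 2) of Algorithm 1: solve (7) by alternating block minimization."""
    k = L.shape[1]
    G = np.linalg.inv(L.T @ L + lam1 * np.eye(k))  # ridge system, reused
    e = np.zeros_like(z)
    for _ in range(n_iters):
        r = G @ (L.T @ (z - e))              # exact minimizer over r
        e = soft_threshold(z - L @ r, lam2)  # exact minimizer over e
    return r, e

def update_basis(L, A, B, lam1):
    """Algorithm 2: block-coordinate descent over the columns of L."""
    A_tilde = A + lam1 * np.eye(A.shape[0])
    for j in range(L.shape[1]):
        L[:, j] += (B[:, j] - L @ A_tilde[:, j]) / A_tilde[j, j]
    return L

def or_pca(Z, r, lam1, lam2, seed=0):
    """Algorithm 1: one pass over the columns (samples) of Z."""
    p, n = Z.shape
    L = np.random.default_rng(seed).standard_normal((p, r))
    A, B = np.zeros((r, r)), np.zeros((p, r))
    R, E = np.zeros((r, n)), np.zeros((p, n))
    for t in range(n):
        rt, et = project_sample(Z[:, t], L, lam1, lam2)
        A += np.outer(rt, rt)                # accumulate sufficient statistics
        B += np.outer(Z[:, t] - et, rt)
        L = update_basis(L, A, B, lam1)      # warm-started basis update
        R[:, t], E[:, t] = rt, et
    return L, R, E
```

Per sample, the dominant costs are the $r \times r$ inverse in project_sample and the column sweep in update_basis, consistent with the $O(pr^2)$ per-iteration complexity discussed in Section 6.2.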
5 Proof Sketch
In this section we sketch the proof of Theorem 1. The details are deferred to the supplementary
material due to space limitations.
The proof of Theorem 1 proceeds in the following four steps: (I) we first prove that the surrogate function $g_t(L_t)$ converges almost surely; (II) we then prove that the solution difference behaves as $\|L_t - L_{t-1}\|_F = O(1/t)$; (III) based on (II) we show that $f(L_t) - g_t(L_t) \to 0$ almost surely, and that the gradient of $f$ vanishes at the solution $L_t$ as $t \to \infty$; (IV) finally we prove that $L_t$ actually converges to the optimum of problem (5).
Theorem 2 (Convergence of the surrogate function $g_t$). Let $g_t$ denote the surrogate function defined in (6). Then $g_t(L_t)$ converges almost surely when the solution $L_t$ is given by Algorithm 1.
We prove Theorem 2, i.e., the convergence of the stochastic positive process $g_t(L_t) > 0$, by showing that it is a quasi-martingale. We first show that the summation of the positive differences of $g_t(L_t)$ is bounded, utilizing the fact that $g_t(L_t)$ upper bounds the empirical cost $f_t(L_t)$ and that the loss function $\ell(z_t, L_t)$ is Lipschitz. These facts imply that $g_t(L_t)$ is a quasi-martingale. Applying the lemma from [8] about the convergence of quasi-martingales, we conclude that $g_t(L_t)$ converges.
Next, we show the difference of the two successive solutions converges to 0 as t goes to infinity.
Theorem 3 (Difference of the solutions $L_t$). For the two successive solutions obtained from Algorithm 1, we have $\|L_{t+1} - L_t\|_F = O(1/t)$ a.s.
To prove the above result, we first show that the function gt (L) is strictly convex. This holds since the
regularization component $\lambda_1 \|L\|_F^2$ naturally guarantees that the eigenvalues of the Hessian matrix
are bounded away from zero. Notice that this is essentially different from [16], where one has to
assume that the smallest eigenvalue of the Hessian matrix is lower bounded. Then we further show
that the variation of the function $g_t(L)$, namely $g_t(L_t) - g_{t+1}(L_t)$, is Lipschitz under the updating rule shown in Algorithm 2. Combining these two properties establishes Theorem 3.
In the third step, we show that the expected cost function $f(L_t)$ is smooth, and that the difference $f(L_t) - g_t(L_t)$ goes to zero as $t \to \infty$. To establish the regularity of the function $f$, we first provide the following optimality conditions of the loss function $\ell$.
Lemma 1 (Optimality conditions of Problem (4)). $r^\star \in \mathbb{R}^r$ and $e^\star \in \mathbb{R}^p$ form a solution of Problem (4) if and only if
$$C_\Lambda(z_\Lambda - e^\star_\Lambda) = \lambda_2 \operatorname{sign}(e^\star_\Lambda), \qquad |C_{\Lambda^c}(z_{\Lambda^c} - e^\star_{\Lambda^c})| \leq \lambda_2 \ \text{otherwise},$$
$$r^\star = (L^T L + \lambda_1 I)^{-1} L^T (z - e^\star),$$
where $C = I - L(L^T L + \lambda_1 I)^{-1} L^T$, $C_\Lambda$ denotes the columns of matrix $C$ indexed by $\Lambda = \{j \mid e^\star[j] \neq 0\}$, and $\Lambda^c$ denotes the complementary set of $\Lambda$. Moreover, the optimal solution is unique.
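These conditions are straightforward to verify numerically: solve (4) by alternating the two closed-form block updates (each is the exact minimizer of the convex objective in one block), then check the stated subgradient conditions at the fixed point. A small numpy sketch with illustrative dimensions and regularization values:

```python
import numpy as np

rng = np.random.default_rng(0)
p, k = 30, 3                         # ambient dimension, basis size
lam1, lam2 = 1.0, 0.2
L = rng.standard_normal((p, k))
z = L @ rng.standard_normal(k)
z[:3] += np.array([5.0, -4.0, 6.0])  # a few gross corruptions

# Alternating minimization of (4): ridge step in r, shrinkage step in e.
G = np.linalg.inv(L.T @ L + lam1 * np.eye(k))
e = np.zeros(p)
for _ in range(5000):
    r_star = G @ (L.T @ (z - e))
    e = np.sign(z - L @ r_star) * np.maximum(np.abs(z - L @ r_star) - lam2, 0.0)
r_star = G @ (L.T @ (z - e))         # the closed form of Lemma 1 at the fixed point

# Check Lemma 1: with C = I - L (L^T L + lam1 I)^{-1} L^T and g = C (z - e*),
# g equals lam2 * sign(e*) on the support of e* and |g| <= lam2 elsewhere.
C = np.eye(p) - L @ G @ L.T
g = C @ (z - e)
supp = e != 0
assert supp.any()
assert np.allclose(g[supp], lam2 * np.sign(e[supp]), atol=1e-5)
assert np.all(np.abs(g[~supp]) <= lam2 + 1e-5)
```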
Based on the above lemma, we can prove that the solutions $r^\star$ and $e^\star$ are Lipschitz w.r.t. the basis $L$. Then we can obtain the following results about the regularity of the expected cost function $f$.
Lemma 2. Assume the observations $z$ are always bounded. Define
$$\{r^\star, e^\star\} = \arg\min_{r,\, e}\; \frac{1}{2}\|z - Lr - e\|_2^2 + \frac{\lambda_1}{2}\|r\|_2^2 + \lambda_2\|e\|_1.$$
Then, 1) the function $\ell$ defined in (4) is continuously differentiable and $\nabla_L \ell(z, L) = (Lr^\star + e^\star - z){r^\star}^T$; 2) $\nabla f(L) = \mathbb{E}_z[\nabla_L \ell(z, L)]$; and 3) $\nabla f(L)$ is Lipschitz.
Equipped with the above regularities of the expected cost function f , we can prove the convergence
of f , as stated in the following theorem.
Theorem 4 (Convergence of $f$). Let $g_t$ denote the surrogate function defined in (6). Then, 1) $f(L_t) - g_t(L_t)$ converges almost surely to 0; and 2) $f(L_t)$ converges almost surely, when the solution $L_t$ is given by Algorithm 1.
Following the techniques developed in [16], we can show that the limit solution of Algorithm 1 satisfies the first-order optimality condition for minimizing the expected cost $f(L)$. Thus the
OR-PCA algorithm provides a solution converging to a stationary point of the expected loss.
Theorem 5. The first-order optimality condition for minimizing the objective function in (5) is satisfied by $L_t$, the solution provided by Algorithm 1, as $t$ tends to infinity.
Finally, to complete the proof, we establish the following result stating that any full-rank L that
satisfies the first order condition is the global optimal solution.
Theorem 6. When the solution L satisfies the first order condition for minimizing the objective
function in (5), the obtained solution $L$ is the optimal solution of problem (5) if $L$ is full rank.
Combining Theorem 5 and Theorem 6 directly yields Theorem 1: the solution from Algorithm 1 converges to the optimal solution of Problem (5) asymptotically.
6 Empirical Evaluation
We report some numerical results in this section. Due to space constraints, more results, including
those of subspace tracking, are deferred to the supplementary material.
6.1 Medium-scale Robust PCA
Here we evaluate the ability of the proposed OR-PCA to correctly recover the subspace of corrupted observations, under various settings of the intrinsic subspace dimension and error density. In
particular, we adopt the batch robust PCA method, Principal Component Pursuit [4], as the batch
[Figure 1: four panels; see caption.]
Figure 1: (a) and (b): subspace recovery performance under different corruption fractions $\rho_s$ (vertical axis) and rank/$n$ (horizontal axis); brighter color means better performance. (c) and (d): performance comparison of the OR-PCA, GRASTA, and online PCA methods against the number of revealed samples under two corruption levels ((c): $\rho_s = 0.1$; (d): $\rho_s = 0.3$), with batch RPCA (PCP) as reference.
counterpart of the proposed OR-PCA method for reference. PCP estimates the subspace in a batch
manner through solving the problem in (1) and outputs the low-rank data matrix. For fair comparison, we follow the data generation scheme of PCP as in [4]: we generate a set of n clean data points
as a product $X = UV^T$, where the sizes of $U$ and $V$ are $p \times r$ and $n \times r$ respectively. The
elements of both U and V are i.i.d. sampled from the N (0, 1/n) distribution. Here U is the basis of
the subspace and the intrinsic dimension of the subspace spanned by U is r. The observations are
generated through $Z = X + E$, where $E$ is a sparse matrix with a fraction $\rho_s$ of non-zero elements. The elements in $E$ are drawn from a uniform distribution over the interval $[-1000, 1000]$. Namely, the
matrix E contains gross but sparse errors.
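This generation scheme is straightforward to reproduce. The snippet below follows it directly; the subspace_similarity score (the fraction of the true basis's energy captured by the estimated subspace) is a simple stand-in for the Expressed Variance of [24], whose exact definition is omitted here:

```python
import numpy as np

def generate_data(p=400, n=1000, r=10, rho_s=0.1, seed=1):
    """Z = X + E with X = U V^T low-rank and E gross but sparse, as in [4]."""
    rng = np.random.default_rng(seed)
    U = rng.normal(0.0, np.sqrt(1.0 / n), size=(p, r))  # subspace basis
    V = rng.normal(0.0, np.sqrt(1.0 / n), size=(n, r))
    X = U @ V.T
    mask = rng.random((p, n)) < rho_s                   # fraction rho_s corrupted
    E = np.where(mask, rng.uniform(-1000.0, 1000.0, size=(p, n)), 0.0)
    return X + E, U

def subspace_similarity(U_true, L_est):
    """Energy of the true basis captured by span(L_est); 1 means perfect recovery."""
    Q, _ = np.linalg.qr(L_est)
    return np.linalg.norm(Q.T @ U_true, "fro") ** 2 / np.linalg.norm(U_true, "fro") ** 2

Z, U = generate_data()
```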
We run the OR-PCA and the PCP algorithms 10 times under the following settings: the ambient
dimension and number of samples are set as p = 400 and n = 1, 000; the intrinsic rank r of
the subspace varies from 4 to 200; the value of the error fraction $\rho_s$ varies from very sparse (0.01) to relatively dense (0.5). The trade-off parameters of OR-PCA are fixed as $\lambda_1 = \lambda_2 = 1/\sqrt{p}$. The
performance is evaluated by the similarity between the subspace obtained from the algorithms and
the groundtruth. In particular, the similarity is measured by the Expressed Variance (E.V.) (see
definition in [24]). A larger value of E.V. means better subspace recovery.
We plot the averaged E.V. values of PCP and OR-PCA under different settings in a matrix form, as
shown in Figure 1(a) and Figure 1(b) respectively. The results demonstrate that under relatively low
intrinsic dimension (small rank/$n$) and sparse corruption (small $\rho_s$), OR-PCA is able to recover
the subspace nearly perfectly (E.V.= 1). We also observe that the performance of OR-PCA is
close to that of the PCP. This demonstrates that the proposed OR-PCA method achieves comparable
performance with the batch method and verifies our convergence guarantee on the OR-PCA. In the
relatively difficult setting (high intrinsic dimension and dense error, shown in the top-right of the
matrix), OR-PCA performs slightly worse than the PCP, possibly because the number of streaming
samples is not enough to achieve convergence.
To better demonstrate the robustness of OR-PCA to corruptions and illustrate how the performance
of OR-PCA is improved when more samples are revealed, we plot the performance curve of OR-PCA against the number of samples in Figure 1(c), under the setting of p = 400, n = 1,000, $\rho_s = 0.1$, r = 80; the results are averaged over 10 repetitions. We also apply GRASTA [10] to
solve this RPCA problem in an online fashion as a baseline. The parameters of GRASTA are set as
the values provided in the implementation package provided by the authors. We observe that when
more samples are revealed, both OR-PCA and GRASTA steadily improve the subspace recovery.
However, our proposed OR-PCA converges much faster than GRASTA, possibly because in each
iteration OR-PCA obtains the optimal closed-form solution to the basis updating subproblem while
GRASTA only takes one gradient descent step. Observe from the figure that after 200 samples are
revealed, the performance of OR-PCA is already satisfactory (E.V.> 0.8). However, for GRASTA,
it needs about 400 samples to achieve the same performance. To show the robustness of the proposed OR-PCA, we also plot the performance of the standard online (or incremental) PCA [1] for
comparison. This work focuses on developing online robust PCA. The non-robustness of (online)
PCA is independent of the optimization method used. Thus, we only compare with the basic online
PCA method [1], which is enough for comparing robustness. The comparison results are given in
Figure 1(c). We observe that as expected, the online PCA cannot recover the subspace correctly
(E.V. ≈ 0.1), since standard PCA is fragile to gross corruptions. We then increase the corruption level to $\rho_s = 0.3$ and plot the performance curves of the above methods in Figure 1(d). From the plot, it can be observed that the performance of GRASTA decreases severely (E.V. ≈ 0.3) while OR-PCA still achieves E.V. ≈ 0.8; the performance of PCP is around 0.88. This result clearly demonstrates the robustness advantage of OR-PCA over GRASTA. In fact, from other simulation results under different settings of intrinsic rank and corruption level (see supplementary material), we observe that GRASTA breaks down at 25% corruption (the value of E.V. is zero), whereas OR-PCA achieves a performance of E.V. ≈ 0.5 even in the presence of 50% outlier corruption.
6.2 Large-scale Robust PCA
We now investigate the computational efficiency of OR-PCA and the performance for large scale
data. The samples are generated following the same model as explained in the above subsection.
The results are provided in Table 1. All of the experiments are implemented in a PC with 2.83GHz
Quad CPU and 8GB RAM. Note that batch RPCA cannot process these data due to out of memory.
Table 1: The comparison of OR-PCA and GRASTA under different settings of sample size (n) and ambient dimension (p). Here $\rho_s = 0.3$, $r = 0.1p$. For each method, the computational time (in $\times 10^3$ seconds) is shown in the top row and the E.V. value in the bottom row. Results are averaged over 5 repetitions, with variances in parentheses.

                 p = 1×10^3                                       p = 1×10^4
         n = 1×10^6     n = 1×10^8     n = 1×10^10        n = 1×10^6     n = 1×10^8
OR-PCA   0.013(0.0004)  1.312(0.082)   139.233(7.747)     0.633(0.047)   15.910(2.646)
         0.99(0.01)     0.99(0.00)     0.99(0.00)         0.82(0.09)     0.82(0.01)
GRASTA   0.023(0.0008)  2.137(0.016)   240.271(7.564)     2.514(0.011)   252.630(2.096)
         0.54(0.08)     0.55(0.02)     0.57(0.03)         0.45(0.02)     0.46(0.03)
From the above results, we observe that OR-PCA is much more efficient and performs better than
GRASTA. In fact, the computational time of OR-PCA is linear in the sample size and nearly linear
in the ambient dimension. When the ambient dimension is large ($p = 1 \times 10^4$), OR-PCA is more efficient than GRASTA by an order of magnitude. We then compare OR-PCA with batch PCP. In each iteration, batch PCP needs to perform an SVD plus a thresholding operation, whose complexity is $O(np^2)$. In contrast, for OR-PCA, the per-iteration computational cost is $O(pr^2)$, which is independent of the sample size and linear in the ambient dimension. To see this, note that in step 2) of Algorithm 1, the computational complexity is $O(r^2 + pr + r^3)$; here $O(r^3)$ is for computing $L^T L$. The complexity of step 3) is $O(r^2 + pr)$. For step 4) (i.e., Algorithm 2), the cost is $O(pr^2)$ (updating each column of $L$ requires $O(pr)$ and there are $r$ columns in total). Thus the total complexity is $O(r^2 + pr + r^3 + pr^2)$. Since $p \gg r$, the overall complexity is $O(pr^2)$.
The memory cost is significantly reduced too. The memory required for OR-PCA is $O(pr)$, which is independent of the sample size. This is much smaller than the memory cost of the batch PCP algorithm, $O(pn)$, where $n \gg p$ for large-scale datasets. This is quite important for processing big
data. The proposed OR-PCA algorithm can be easily parallelized to further enhance its efficiency.
7 Conclusions
In this work, we develop an online robust PCA (OR-PCA) method. Different from previous batch
based methods, OR-PCA need not "remember" all the past samples and achieves much higher
storage efficiency. The main idea of OR-PCA is to reformulate the objective function of PCP (a
widely applied batch RPCA algorithm) by decomposing the nuclear norm to an explicit product of
two low-rank matrices, which can be solved by a stochastic optimization algorithm. We provide the
convergence analysis of the OR-PCA method and show that OR-PCA converges to the solution of
batch RPCA asymptotically. Comprehensive simulations demonstrate the effectiveness of OR-PCA.
Acknowledgments
J. Feng and S. Yan are supported by the Singapore National Research Foundation under its International Research Centre @Singapore Funding Initiative and administered by the IDM Programme
Office. H. Xu is partially supported by the Ministry of Education of Singapore through AcRF Tier
Two grant R-265-000-443-112 and NUS startup grant R-265-000-384-133.
References
[1] M. Artac, M. Jogan, and A. Leonardis. Incremental pca for on-line visual learning and recognition. In Pattern Recognition, 2002. Proceedings. 16th International Conference on, volume 3,
pages 781?784. IEEE, 2002.
[2] D.P. Bertsekas. Nonlinear programming. Athena Scientific, 1999.
[3] Samuel Burer and Renato Monteiro. A nonlinear programming algorithm for solving semidefinite programs via low-rank factorization. Math. Program., 2003.
[4] E.J. Candes, X. Li, Y. Ma, and J. Wright.
Robust principal component analysis?
ArXiv:0912.3599, 2009.
[5] V. Chandrasekaran, S. Sanghavi, P.A. Parrilo, and A.S. Willsky. Rank-sparsity incoherence for
matrix decomposition. SIAM Journal on Optimization, 21(2):572?596, 2011.
[6] M. Fazel. Matrix rank minimization with applications. PhD thesis, PhD thesis, Stanford
University, 2002.
[7] J. Feng, H. Xu, and S. Yan. Robust PCA in high-dimension: A deterministic approach. In
ICML, 2012.
[8] D.L. Fisk. Quasi-martingales. Transactions of the American Mathematical Society, 1965.
[9] N. Guan, D. Tao, Z. Luo, and B. Yuan. Online nonnegative matrix factorization with robust
stochastic approximation. Neural Networks and Learning Systems, IEEE Transactions on,
23(7):1087?1099, 2012.
[10] Jun He, Laura Balzano, and John Lui. Online robust subspace tracking from partial information. arXiv preprint arXiv:1109.3827, 2011.
[11] P.J. Huber, E. Ronchetti, and MyiLibrary. Robust statistics. John Wiley & Sons, New York,
1981.
[12] M. Hubert, P.J. Rousseeuw, and K.V. Branden. Robpca: a new approach to robust principal
component analysis. Technometrics, 2005.
[13] Y. Li. On incremental and robust subspace learning. Pattern recognition, 2004.
[14] Z. Lin, M. Chen, and Y. Ma. The augmented lagrange multiplier method for exact recovery of
corrupted low-rank matrices. arXiv preprint arXiv:1009.5055, 2010.
[15] Z. Lin, A. Ganesh, J. Wright, L. Wu, M. Chen, and Y. Ma. Fast convex optimization algorithms
for exact recovery of a corrupted low-rank matrix. Computational Advances in Multi-Sensor
Adaptive Processing (CAMSAP), 2009.
[16] J. Mairal, F. Bach, J. Ponce, and G. Sapiro. Online learning for matrix factorization and sparse
coding. JMLR, 2010.
[17] Morteza Mardani, Gonzalo Mateos, and G Giannakis. Dynamic anomalography: Tracking
network anomalies via sparsity and low rank. 2012.
[18] Morteza Mardani, Gonzalo Mateos, and Georgios B Giannakis. Rank minimization for subspace tracking from incomplete data. In ICASSP, 2013.
[19] K. Pearson. On lines and planes of closest fit to systems of points in space. Philosophical
Magazine, 1901.
[20] C. Qiu, N. Vaswani, and L. Hogben. Recursive robust pca or recursive sparse recovery in large
but structured noise. arXiv preprint arXiv:1211.3754, 2012.
[21] B. Recht, M. Fazel, and P.A. Parrilo. Guaranteed minimum-rank solutions of linear matrix
equations via nuclear norm minimization. SIAM review, 52(3):471?501, 2010.
[22] Jasson Rennie and Nathan Srebro. Fast maximum margin matrix factorization for collaborative
prediction. In ICML, 2005.
[23] Pablo Sprechmann, Alex M Bronstein, and Guillermo Sapiro. Learning efficient sparse and
low rank models. arXiv preprint arXiv:1212.3631, 2012.
[24] H. Xu, C. Caramanis, and S. Mannor. Principal component analysis with contaminated data:
The high dimensional case. In COLT, 2010.
[25] H. Xu, C. Caramanis, and S. Sanghavi. Robust pca via outlier pursuit. Information Theory,
IEEE Transactions on, 58(5):3047?3064, 2012.
9
The Fast Convergence of Incremental PCA
Akshay Balsubramani
UC San Diego
[email protected]
Sanjoy Dasgupta
UC San Diego
[email protected]
Yoav Freund
UC San Diego
[email protected]
Abstract
We consider a situation in which we see samples $X_n \in \mathbb{R}^d$ drawn i.i.d. from some
distribution with mean zero and unknown covariance $A$. We wish to compute the
top eigenvector of $A$ in an incremental fashion, with an algorithm that maintains
an estimate of the top eigenvector in $O(d)$ space, and incrementally adjusts the
estimate with each new data point that arrives. Two classical such schemes are
due to Krasulina (1969) and Oja (1983). We give finite-sample convergence rates
for both.
1 Introduction
Principal component analysis (PCA) is a popular form of dimensionality reduction that projects a
data set on the top eigenvector(s) of its covariance matrix. The default method for computing these
eigenvectors uses $O(d^2)$ space for data in $\mathbb{R}^d$, which can be prohibitive in practice. It is therefore
of interest to study incremental schemes that take one data point at a time, updating their estimates
of the desired eigenvectors with each new point. For computing one eigenvector, such methods use
$O(d)$ space.
For the case of the top eigenvector, this problem has long been studied, and two elegant solutions
were obtained by Krasulina [7] and Oja [9]. Their methods are closely related. At time $n-1$, they
have some estimate $V_{n-1} \in \mathbb{R}^d$ of the top eigenvector. Upon seeing the next data point, $X_n$, they
update this estimate as follows:
$$V_n = V_{n-1} + \gamma_n \left( X_n X_n^T - \frac{V_{n-1}^T X_n X_n^T V_{n-1}}{\|V_{n-1}\|^2}\, I_d \right) V_{n-1} \qquad \text{(Krasulina)}$$
$$V_n = \frac{V_{n-1} + \gamma_n X_n X_n^T V_{n-1}}{\|V_{n-1} + \gamma_n X_n X_n^T V_{n-1}\|} \qquad \text{(Oja)}$$
Here $\gamma_n$ is a "learning rate" that is typically proportional to $1/n$.
Suppose the points $X_1, X_2, \ldots$ are drawn i.i.d. from a distribution on $\mathbb{R}^d$ with mean zero and covariance matrix $A$. The original papers proved that these estimators converge almost surely to the
top eigenvector of $A$ (call it $v^*$) under mild conditions:
- $\sum_n \gamma_n = \infty$ while $\sum_n \gamma_n^2 < \infty$.
- If $\lambda_1, \lambda_2$ denote the top two eigenvalues of $A$, then $\lambda_1 > \lambda_2$.
- $\mathbb{E}\|X_n\|^k < \infty$ for some suitable $k$ (for instance, $k = 8$ works).
There are also other incremental estimators for which convergence has not been established; see, for
instance, [12] and [16].
In this paper, we analyze the rate of convergence of the Krasulina and Oja estimators. They can
be treated in a common framework, as stochastic approximation algorithms for maximizing the
Rayleigh quotient
$$G(v) = \frac{v^T A v}{v^T v}.$$
The maximum value of this function is $\lambda_1$, and is achieved at $v^*$ (or any nonzero multiple thereof).
The gradient is
$$\nabla G(v) = \frac{2}{\|v\|^2}\left(A - \frac{v^T A v}{v^T v}\, I_d\right) v.$$
Since $\mathbb{E}\, X_n X_n^T = A$, we see that Krasulina's method is stochastic gradient descent. The Oja procedure is closely related: as pointed out in [10], the two are identical to within second-order terms.
Recently, there has been a lot of work on rates of convergence for stochastic gradient descent (for instance, [11]), but this has typically been limited to convex cost functions. These results do not apply
to the non-convex Rayleigh quotient, except at the very end, when the system is near convergence.
Most of our analysis focuses on the buildup to this finale.
We measure the quality of the solution $V_n$ at time $n$ using the potential function
$$\Psi_n = 1 - \frac{(V_n \cdot v^*)^2}{\|V_n\|^2},$$
where $v^*$ is taken to have unit norm. This quantity lies in the range $[0, 1]$, and we are interested in
the rate at which it approaches zero. The result, in brief, is that $\mathbb{E}[\Psi_n] = O(1/n)$, under conditions
that are similar to those above, but stronger. In particular, we require that $\gamma_n$ be proportional to $1/n$
and that $\|X_n\|$ be bounded.
1.1 The algorithm
We analyze the following procedure.
1. Set starting time. Set the clock to time $n_o$.
2. Initialization. Initialize $V_{n_o}$ uniformly at random from the unit sphere in $\mathbb{R}^d$.
3. For time $n = n_o + 1, n_o + 2, \ldots$:
   (a) Receive the next data point, $X_n$.
   (b) Update step. Perform either the Krasulina or Oja update, with $\gamma_n = c/n$.

The first step is similar to using a learning rate of the form $\gamma_n = c/(n + n_o)$, as is often done in
stochastic gradient descent implementations [1]. We have adopted it because the initial sequence of
updates is highly noisy: during this phase $V_n$ moves around wildly, and cannot be shown to make
progress. It becomes better behaved when the step size $\gamma_n$ becomes smaller, that is to say when $n$
gets larger than some suitable $n_o$. By setting the start time to $n_o$, we can simply fast-forward the
analysis to this moment.
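As a concrete reference, the procedure above can be sketched in a few lines. This is an illustrative implementation of ours, not the authors' code; the data stream, the choice of $c$, and the random seed are placeholders.

```python
import numpy as np

def oja_top_eigenvector(data_stream, d, c, n0=10, seed=0):
    """Estimate the top eigenvector of the data covariance with Oja's update.

    V is initialized uniformly at random on the unit sphere in R^d; each
    arriving point X_n moves V toward X_n X_n^T V with step size c/n.
    """
    rng = np.random.default_rng(seed)
    V = rng.standard_normal(d)
    V /= np.linalg.norm(V)           # uniform random direction on the sphere
    n = n0
    for X in data_stream:
        n += 1
        gamma = c / n                # learning rate gamma_n = c/n
        V = V + gamma * X * (X @ V)  # V_{n-1} + gamma_n X_n X_n^T V_{n-1}
        V /= np.linalg.norm(V)       # Oja's normalization
    return V
```

With a pronounced eigenvalue gap, the estimate aligns quickly with the top eigenvector, at which point the potential $\Psi_n$ defined earlier is simply $1 - (V \cdot v^*)^2$.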
1.2 Initialization
One possible initialization is to set $V_{n_o}$ to the first data point that arrives, or to the average of a few
data points. This seems sensible enough, but can fail dramatically in some situations.
Here is an example. Suppose $X$ can take on just $2d$ possible values: $\pm e_1, \pm\sigma e_2, \ldots, \pm\sigma e_d$, where
the $e_i$ are coordinate directions and $0 < \sigma < 1$ is a small constant. Suppose further that the
distribution of $X$ is specified by a single positive number $p < 1$:
$$\Pr(X = e_1) = \Pr(X = -e_1) = \frac{p}{2}$$
$$\Pr(X = \sigma e_i) = \Pr(X = -\sigma e_i) = \frac{1-p}{2(d-1)} \quad \text{for } i > 1$$
Then $X$ has mean zero and covariance $\operatorname{diag}(p, \sigma^2(1-p)/(d-1), \ldots, \sigma^2(1-p)/(d-1))$. We will
assume that $p$ and $\sigma$ are chosen so that $p > \sigma^2(1-p)/(d-1)$; in our notation, the top eigenvalues
are then $\lambda_1 = p$ and $\lambda_2 = \sigma^2(1-p)/(d-1)$, and the target vector is $v^* = e_1$.
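For readers who want to experiment with this construction, here is a simple sampler (an illustrative sketch; the function name and arguments are ours, not from the paper), which also makes it easy to verify the covariance claim empirically.

```python
import numpy as np

def sample_example(rng, d, p, sigma, n):
    """Draw n points from the 2d-valued distribution described above."""
    X = np.zeros((n, d))
    signs = rng.choice([-1.0, 1.0], size=n)
    on_first = rng.random(n) < p          # coordinate 1 with probability p
    other = rng.integers(1, d, size=n)    # otherwise a uniform coordinate i > 1
    X[on_first, 0] = signs[on_first]
    rest = ~on_first
    X[rest, other[rest]] = sigma * signs[rest]
    return X
```

The empirical second-moment matrix of a large sample should be close to the stated diagonal covariance, with the top entry $p$ and the remaining entries $\sigma^2(1-p)/(d-1)$.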
If $V_n$ is ever orthogonal to some $e_i$, it will remain so forever. This is because both the Krasulina and
Oja updates have the following properties:
$$V_{n-1} \cdot X_n = 0 \implies V_n = V_{n-1}$$
$$V_{n-1} \cdot X_n \neq 0 \implies V_n \in \operatorname{span}(V_{n-1}, X_n).$$
If $V_{n_o}$ is initialized to a random data point, then with probability $1 - p$, it will be assigned to some
$e_i$ with $i > 1$, and will converge to a multiple of that same $e_i$ rather than to $e_1$. Likewise, if it is
initialized to the average of $\le 1/p$ data points, then with constant probability it will be orthogonal
to $e_1$ and remain so always.

Setting $V_{n_o}$ to a random unit vector avoids this problem. However, there are doubtless cases, for
instance when the data has intrinsic dimension $\ll d$, in which a better initializer is possible.
1.3 The setting of the learning rate
In order to get a sense of what rates of convergence we might expect, let's return to the example of a
random vector $X$ with $2d$ possible values. In the Oja update $V_n = V_{n-1} + \gamma_n X_n X_n^T V_{n-1}$, we can
ignore normalization if we are merely interested in the progress of the potential function $\Psi_n$. Since
the $X_n$ correspond to coordinate directions, each update changes just one coordinate of $V$:
$$X_n = \pm e_1 \implies V_{n,1} = V_{n-1,1}(1 + \gamma_n)$$
$$X_n = \pm\sigma e_i \implies V_{n,i} = V_{n-1,i}(1 + \sigma^2\gamma_n)$$
Recall that we initialize $V_{n_o}$ to a random vector from the unit sphere. For simplicity, let's just
suppose that $n_o = 0$ and that this initial value is the all-ones vector (again, we don't have to worry
about normalization). On each iteration the first coordinate is updated with probability exactly
$p = \lambda_1$, and thus
$$\mathbb{E}[V_{n,1}] = (1 + \lambda_1\gamma_1)(1 + \lambda_1\gamma_2)\cdots(1 + \lambda_1\gamma_n) \approx \exp(\lambda_1(\gamma_1 + \cdots + \gamma_n)) \approx n^{c\lambda_1}$$
since $\gamma_n = c/n$. Likewise, for $i > 1$,
$$\mathbb{E}[V_{n,i}] = (1 + \lambda_2\gamma_1)(1 + \lambda_2\gamma_2)\cdots(1 + \lambda_2\gamma_n) \approx n^{c\lambda_2}.$$
If all goes according to expectation, then at time $n$,
$$\Psi_n = 1 - \frac{V_{n,1}^2}{\|V_n\|^2} \approx 1 - \frac{n^{2c\lambda_1}}{n^{2c\lambda_1} + (d-1)n^{2c\lambda_2}} \approx \frac{d-1}{n^{2c(\lambda_1 - \lambda_2)}}.$$
(This is all very rough, but can be made precise by obtaining concentration bounds for $\ln V_{n,i}$.)
From this, we can see that it is not possible to achieve a $O(1/n)$ rate unless $c \ge 1/(2(\lambda_1 - \lambda_2))$.
Therefore, we will assume this when stating our final results, although most of our analysis is in
terms of general $\gamma_n$. An interesting practical question, to which we do not have an answer, is how
one would empirically set $c$ without prior knowledge of the eigenvalue gap.
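The key heuristic here, that a product of the form $\prod_k (1 + \lambda c / k)$ grows like $n^{c\lambda}$, is easy to confirm numerically; the parameter values below are arbitrary.

```python
import math

def log_growth(lam, c, n0, n):
    # log of prod_{k=n0+1}^{n} (1 + lam * c / k); the heuristic above
    # predicts this is approximately (lam * c) * log(n / n0).
    return sum(math.log1p(lam * c / k) for k in range(n0 + 1, n + 1))
```

For example, with `lam = 0.8` and `c = 2.0`, the empirical exponent `log_growth(0.8, 2.0, 10, 10_000) / math.log(1000)` comes out close to `lam * c = 1.6`, with the small shortfall coming from the second-order terms dropped in the approximation $\ln(1+x) \approx x$.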
1.4 Nested sample spaces
For $n \ge n_o$, let $\mathcal{F}_n$ denote the sigma-field of all outcomes up to and including time $n$: $\mathcal{F}_n = \sigma(V_{n_o}, X_{n_o+1}, \ldots, X_n)$. We start by showing that
$$\mathbb{E}[\Psi_n \mid \mathcal{F}_{n-1}] \le \Psi_{n-1}\left(1 - 2\gamma_n(\lambda_1 - \lambda_2)(1 - \Psi_{n-1})\right) + O(\gamma_n^2).$$
Initially $\Psi_n$ is likely to be close to 1. For instance, if the initial $V_{n_o}$ is picked uniformly at random
from the surface of the unit sphere in $\mathbb{R}^d$, then we'd expect $\Psi_{n_o} \approx 1 - 1/d$. This means that the
initial rate of decrease is very small, because of the $(1 - \Psi_{n-1})$ term.

To deal with this, we divide the analysis into epochs: the first takes $\Psi_n$ from $1 - 1/d$ to $1 - 2/d$, the
second from $1 - 2/d$ to $1 - 4/d$, and so on until $\Psi_n$ finally drops below $1/2$. We use martingale large
deviation bounds to bound the length of each epoch, and also to argue that $\Psi_n$ does not regress. In
particular, we establish a sequence of times $n_j$ such that (with high probability)
$$\sup_{n \ge n_j} \Psi_n \le 1 - \frac{2^j}{d}. \qquad (1)$$
The analysis of each epoch uses martingale arguments, but at the same time, assumes that $\Psi_n$ remains bounded above. Combining the two requires a careful specification of the sample space at
each step. Let $\Omega$ denote the sample space of all realizations $(v_{n_o}, x_{n_o+1}, x_{n_o+2}, \ldots)$, and $P$ the
probability distribution on these sequences. For any $\delta > 0$, we define a nested sequence of spaces
$\Omega \supseteq \Omega'_{n_o} \supseteq \Omega'_{n_o+1} \supseteq \cdots$ such that each $\Omega'_n$ is $\mathcal{F}_{n-1}$-measurable, has probability $P(\Omega'_n) \ge 1 - \delta$,
and moreover consists exclusively of realizations $\omega \in \Omega$ that satisfy the constraints (1) up to and
including time $n - 1$. We can then build martingale arguments by restricting attention to $\Omega'_n$ when
computing the conditional expectations of quantities at time $n$.
1.5 Main result
We make the following assumptions:
(A1) The $X_n \in \mathbb{R}^d$ are i.i.d. with mean zero and covariance $A$.
(A2) There is a constant $B$ such that $\|X_n\|^2 \le B$.
(A3) The eigenvalues $\lambda_1 \ge \lambda_2 \ge \cdots \ge \lambda_d$ of $A$ satisfy $\lambda_1 > \lambda_2$.
(A4) The step sizes are of the form $\gamma_n = c/n$.
Under these conditions, we get the following rate of convergence for the Krasulina update.

Theorem 1.1. There are absolute constants $A_o, A_1 > 0$ and $1 < a < 4$ for which the following
holds. Pick any $0 < \delta < 1$, and any $c_o > 2$. Set the step sizes to $\gamma_n = c/n$, where $c = c_o/(2(\lambda_1 - \lambda_2))$, and set the starting time to $n_o \ge (A_o B^2 c^2 d^2/\delta^4) \ln(1/\delta)$. Then there is a nested sequence of
subsets of the sample space $\Omega \supseteq \Omega'_{n_o} \supseteq \Omega'_{n_o+1} \supseteq \cdots$ such that for any $n \ge n_o$, we have $P(\Omega'_n) \ge 1 - \delta$ and
$$\mathbb{E}_n\left[1 - \frac{(V_n \cdot v^*)^2}{\|V_n\|^2}\right] \le A_1\left(\frac{c^2 B^2 e^{c_o/n_o}}{2(c_o - 2)}\left(\frac{d}{\delta^2}\right)^{c_o/2}\frac{1}{n+1} + \left(\frac{n_o+1}{n+1}\right)^{a}\right),$$
where $\mathbb{E}_n$ denotes expectation restricted to $\Omega'_n$.

Since $c_o > 2$, this bound is of the form $\mathbb{E}_n[\Psi_n] = O(1/n)$.
The result above also holds for the Oja update up to absolute constants.
We also remark that a small modification to the final step in the proof of the above yields a rate
of $\mathbb{E}_n[\Psi_n] = O(n^{-a})$, with an identical definition of $\mathbb{E}_n[\Psi_n]$. The details are in the proof, in
Appendix D.2.
1.6 Related work
There is an extensive line of work analyzing PCA from the statistical perspective, in which the convergence of various estimators is characterized under certain conditions, including generative models
of the data [5] and various assumptions on the covariance matrix spectrum [14, 4] and eigenvalue
spacing [17]. Such works do provide finite-sample guarantees, but they apply only to the batch case
and/or are computationally intensive, rather than considering an efficient incremental algorithm.
Among incremental algorithms, the work of Warmuth and Kuzmin [15] describes and analyzes
worst-case online PCA, using an experts-setting algorithm with a super-quadratic per-iteration cost.
More efficient general-purpose incremental PCA algorithms have lacked finite-sample analyses [2].
There have been recent attempts to remedy this situation by relaxing the nonconvexity inherent in
the problem [3] or making generative assumptions [8]. The present paper directly analyzes the oldest
known incremental PCA algorithms under relatively mild assumptions.
2 Outline of proof
We now sketch the proof of Theorem 1.1; almost all the details are relegated to the appendix.
Recall that for $n \ge n_o$, we take $\mathcal{F}_n$ to be the sigma-field of all outcomes up to and including time $n$,
that is, $\mathcal{F}_n = \sigma(V_{n_o}, X_{n_o+1}, \ldots, X_n)$.

An additional piece of notation: we will use $\hat{u}$ to denote $u/\|u\|$, the unit vector in the direction of
$u \in \mathbb{R}^d$. Thus, for instance, the Rayleigh quotient can be written $G(v) = \hat{v}^T A \hat{v}$.
2.1 Expected per-step change in potential
We first bound the expected improvement in $\Psi_n$ in each step of the Krasulina or Oja algorithms.

Theorem 2.1. For any $n > n_o$, we can write $\Psi_n \le \Psi_{n-1} + \beta_n - Z_n$, where
$$\beta_n = \begin{cases} \gamma_n^2 B^2/4 & \text{(Krasulina)} \\ 5\gamma_n^2 B^2 + 2\gamma_n^3 B^3 & \text{(Oja)} \end{cases}$$
and where $Z_n$ is an $\mathcal{F}_n$-measurable random variable with the following properties:
- $\mathbb{E}[Z_n \mid \mathcal{F}_{n-1}] = 2\gamma_n (\hat{V}_{n-1} \cdot v^*)^2 (\lambda_1 - G(V_{n-1})) \ge 2\gamma_n(\lambda_1 - \lambda_2)\Psi_{n-1}(1 - \Psi_{n-1}) \ge 0$.
- $|Z_n| \le 4\gamma_n B$.

The theorem follows from Lemmas ?? and ?? in the appendix. Its characterization of the two
estimators is almost identical, and for simplicity we will henceforth deal only with Krasulina's
estimator. All the subsequent results hold also for Oja's method, up to constants.
2.2 A large deviation bound for $\Psi_n$
We know from Theorem 2.1 that $\Psi_n \le \Psi_{n-1} + \beta_n - Z_n$, where $\beta_n$ is non-stochastic and $Z_n$ is
a quantity of positive expected value. Thus, in expectation, and modulo a small additive term, $\Psi_n$
decreases monotonically. However, the amount of decrease at the $n$th time step can be arbitrarily
small when $\Psi_n$ is close to 1. Thus, we need to show that $\Psi_n$ is eventually bounded away from 1,
i.e. there exists some $\epsilon_o > 0$ and some time $n_o$ such that for any $n \ge n_o$, we have $\Psi_n \le 1 - \epsilon_o$.
Recall from the algorithm specification that we advance the clock so as to skip the pre-$n_o$ phase.
Given this, what can we expect $\epsilon_o$ to be? If the initial estimate $V_{n_o}$ is a random unit vector, then
$\mathbb{E}[\Psi_{n_o}] = 1 - 1/d$ and, roughly speaking, $\Pr(\Psi_{n_o} > 1 - \epsilon/d) = O(\sqrt{\epsilon})$. If $n_o$ is sufficiently large,
then $\Psi_n$ may subsequently increase a little bit, but not by very much. In this section, we establish
the following bound.
Theorem 2.2. Suppose the initial estimate $V_{n_o}$ is chosen uniformly at random from the surface of
the unit sphere in $\mathbb{R}^d$. Assume also that the step sizes are of the form $\gamma_n = c/n$, for some constant
$c > 0$. Then for any $0 < \epsilon < 1$, if $n_o \ge 2B^2c^2d^2/\epsilon^2$, we have
$$\Pr\left(\sup_{n \ge n_o} \Psi_n \ge 1 - \frac{\epsilon}{d}\right) \le 2e\sqrt{\epsilon}.$$
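The $O(\sqrt{\epsilon})$ scaling for a random initializer is easy to check by simulation. The sketch below is an illustration with arbitrary choices of $d$, $\epsilon$, and trial count, not part of the proof.

```python
import numpy as np

def prob_bad_start(d, eps, trials=100_000, seed=0):
    """Estimate Pr(Psi > 1 - eps/d) for V uniform on the unit sphere in R^d."""
    rng = np.random.default_rng(seed)
    V = rng.standard_normal((trials, d))  # Gaussian draws; the ratio below
    psi = 1.0 - V[:, 0] ** 2 / (V ** 2).sum(axis=1)  # is scale-invariant
    return float(np.mean(psi > 1.0 - eps / d))
```

Quadrupling $\epsilon$ should roughly double the estimated probability, which is exactly the square-root scaling used above.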
To prove this, we start with a simple recurrence for the moment-generating function of $\Psi_n$.

Lemma 2.3. Consider a filtration $(\mathcal{F}_n)$ and random variables $Y_n, Z_n \in \mathcal{F}_n$ such that there are two
sequences of nonnegative constants, $(\beta_n)$ and $(\nu_n)$, for which:
- $Y_n \le Y_{n-1} + \beta_n - Z_n$.
- Each $Z_n$ takes values in an interval of length $\nu_n$.
Then for any $t > 0$, we have $\mathbb{E}[e^{tY_n} \mid \mathcal{F}_{n-1}] \le \exp\left(t\left(Y_{n-1} - \mathbb{E}[Z_n \mid \mathcal{F}_{n-1}] + \beta_n + t\nu_n^2/8\right)\right)$.

This relation shows how to define a supermartingale based on $e^{tY_n}$, from which we can derive a
large deviation bound on $Y_n$.

Lemma 2.4. Assume the conditions of Lemma 2.3, and also that $\mathbb{E}[Z_n \mid \mathcal{F}_{n-1}] \ge 0$. Then, for any
integer $m$ and any $\Lambda, t > 0$,
$$\Pr\left(\sup_{n \ge m} Y_n \ge \Lambda\right) \le \mathbb{E}\left[e^{tY_m}\right]\exp\left(-t\left(\Lambda - \sum_{\ell > m}\left(\beta_\ell + t\nu_\ell^2/8\right)\right)\right).$$
In order to apply this to the sequence $(\Psi_n)$, we need to first calculate the moment-generating function of its starting value $\Psi_{n_o}$.

Lemma 2.5. Suppose a vector $V$ is picked uniformly at random from the surface of the unit sphere
in $\mathbb{R}^d$, where $d \ge 3$. Define $Y = 1 - V_1^2/\|V\|^2$. Then, for any $t > 0$,
$$\mathbb{E}\, e^{tY} \le e^t \sqrt{\frac{d-1}{2t}}.$$

Putting these pieces together yields Theorem 2.2.
2.3 Intermediate epochs of improvement
We have seen that, for suitable $\epsilon$ and $n_o$, it is likely that $\Psi_n \le 1 - \epsilon/d$ for all $n \ge n_o$. We now
define a series of epochs in which $1 - \Psi_n$ successively doubles, until $\Psi_n$ finally drops below $1/2$.
To do this, we specify intermediate goals $(n_o, \epsilon_o), (n_1, \epsilon_1), (n_2, \epsilon_2), \ldots, (n_J, \epsilon_J)$, where $n_o < n_1 < \cdots < n_J$ and $\epsilon_o < \epsilon_1 < \cdots < \epsilon_J = 1/2$, with the intention that:
$$\text{For all } 0 \le j \le J, \text{ we have } \sup_{n \ge n_j} \Psi_n \le 1 - \epsilon_j. \qquad (2)$$
Of course, this can only hold with a certain probability.

Let $\Omega$ denote the sample space of all realizations $(v_{n_o}, x_{n_o+1}, x_{n_o+2}, \ldots)$, and $P$ the probability
distribution on these sequences. We will show that, for a certain choice of $\{(n_j, \epsilon_j)\}$, all $J + 1$
constraints (2) can be met by excluding just a small portion of $\Omega$.

We consider a specific realization $\omega \in \Omega$ to be good if it satisfies (2). Call this set $\Omega'$:
$$\Omega' = \{\omega \in \Omega : \sup_{n \ge n_j} \Psi_n(\omega) \le 1 - \epsilon_j \text{ for all } 0 \le j \le J\}.$$
For technical reasons, we also need to look at realizations that are good up to time $n-1$. Specifically,
for each $n$, define
$$\Omega'_n = \{\omega \in \Omega : \sup_{n_j \le \ell < n} \Psi_\ell(\omega) \le 1 - \epsilon_j \text{ for all } 0 \le j \le J\}.$$
Crucially, this is $\mathcal{F}_{n-1}$-measurable. Also note that $\Omega' = \bigcap_{n > n_o} \Omega'_n$.

We can talk about expectations under the distribution $P$ restricted to subsets of $\Omega$. In particular, let
$P_n$ be the restriction of $P$ to $\Omega'_n$; that is, for any $A \subseteq \Omega$, we have $P_n(A) = P(A \cap \Omega'_n)/P(\Omega'_n)$. As
for expectations with respect to $P_n$, for any function $f : \Omega \to \mathbb{R}$, we define
$$\mathbb{E}_n f = \frac{1}{P(\Omega'_n)} \int_{\Omega'_n} f(\omega)\, P(d\omega).$$
Here is the main result of this section.

Theorem 2.6. Assume that $\gamma_n = c/n$, where $c = c_o/(2(\lambda_1 - \lambda_2))$ and $c_o > 0$. Pick any $0 < \delta < 1$
and select a schedule $(n_o, \epsilon_o), \ldots, (n_J, \epsilon_J)$ that satisfies the conditions
$$\epsilon_o = \frac{\delta^2}{8ed}, \quad \tfrac{3}{2}\epsilon_j \le \epsilon_{j+1} \le 2\epsilon_j \text{ for } 0 \le j < J, \quad \epsilon_{J-1} \le \tfrac{1}{4}, \qquad (3)$$
$$(n_{j+1} + 1) \ge e^{5/c_o}(n_j + 1) \text{ for } 0 \le j < J,$$
as well as $n_o \ge (20c^2B^2/\epsilon_o^2)\ln(4/\delta)$. Then $\Pr(\Omega') \ge 1 - \delta$.
The first step towards proving this theorem is bounding the moment-generating function of $\Psi_n$ in
terms of that of $\Psi_{n-1}$.

Lemma 2.7. Suppose $n > n_j$. Suppose also that $\gamma_n = c/n$, where $c = c_o/(2(\lambda_1 - \lambda_2))$. Then for
any $t > 0$,
$$\mathbb{E}_n\left[e^{t\Psi_n}\right] \le \mathbb{E}_n\left[\exp\left(t\Psi_{n-1}\left(1 - \frac{c_o \epsilon_j}{n}\right)\right)\right]\exp\left(\frac{c^2 B^2 t(1 + 32t)}{4n^2}\right).$$
We would like to use this result to bound $\mathbb{E}_n[\Psi_n]$ in terms of $\mathbb{E}_m[\Psi_m]$ for $m < n$. The shift in
sample spaces is easily handled using the following observation.

Lemma 2.8. If $g : \mathbb{R} \to \mathbb{R}$ is nondecreasing, then $\mathbb{E}_n[g(\Psi_{n-1})] \le \mathbb{E}_{n-1}[g(\Psi_{n-1})]$ for any $n > n_o$.

A repeated application of Lemmas 2.7 and 2.8 yields the following.

Lemma 2.9. Suppose that conditions (3) hold. Then for $0 \le j < J$ and any $t > 0$,
$$\mathbb{E}_{n_{j+1}}\left[e^{t\Psi_{n_{j+1}}}\right] \le \exp\left(t(1 - \epsilon_{j+1}) - t\epsilon_j + \frac{tc^2B^2(1 + 32t)}{4}\left(\frac{1}{n_j} - \frac{1}{n_{j+1}}\right)\right).$$

Now that we have bounds on the moment-generating functions of intermediate $\Psi_n$, we can apply
martingale deviation bounds, as in Lemma 2.4, to obtain the following, from which Theorem 2.6
ensues.
Lemma 2.10. Assume conditions (3) hold. Pick any $0 < \delta < 1$, and set $n_o \ge (20c^2B^2/\epsilon_o^2)\ln(4/\delta)$.
Then
$$\sum_{j=1}^{J} P_{n_j}\left(\sup_{n \ge n_j} \Psi_n > 1 - \epsilon_j\right) \le \frac{\delta}{2}.$$
2.4 The final epoch
Recall the definition of the intermediate goals $(n_j, \epsilon_j)$ in (2), (3). The final epoch is the period
$n \ge n_J$, at which point $\Psi_n \le 1/2$. The following consequence of Lemmas ?? and 2.8 captures the
rate at which $\Psi$ decreases during this phase.

Lemma 2.11. For all $n > n_J$,
$$\mathbb{E}_n[\Psi_n] \le (1 - \alpha_n)\mathbb{E}_{n-1}[\Psi_{n-1}] + \beta_n,$$
where $\alpha_n = (\lambda_1 - \lambda_2)\gamma_n$ and $\beta_n = (B^2/4)\gamma_n^2$.

By solving this recurrence relation, and piecing together the various epochs, we get the overall
convergence result of Theorem 1.1.

Note that Lemma 2.11 closely resembles the recurrence relation followed by the squared $L_2$ distance
from the optimum of stochastic gradient descent (SGD) on a strongly convex function [11]. As
$\Psi_n \to 0$, the incremental PCA algorithms we study have convergence rates of the same form as
SGD in this scenario.
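To see the $O(1/n)$ behavior emerge from this recurrence, one can simply iterate it numerically. In the sketch below, $a$ stands in for the coefficient $(\lambda_1 - \lambda_2)c$ and $b$ for $B^2c^2/4$; the particular values in the usage example are arbitrary illustrations.

```python
def iterate_recurrence(a, b, psi0, n0, n_max):
    # psi_n = (1 - a/n) * psi_{n-1} + b / n^2, i.e. the recurrence of
    # Lemma 2.11 with alpha_n = a/n and beta_n = b/n^2.  For a > 1 the
    # iterates settle onto roughly (b / (a - 1)) / n.
    psi = psi0
    for n in range(n0 + 1, n_max + 1):
        psi = (1.0 - a / n) * psi + b / n ** 2
    return psi
```

For instance, with `a = 2.0` and `b = 1.0`, the product `n * iterate_recurrence(2.0, 1.0, 0.5, 10, n)` approaches `b / (a - 1) = 1`, while the contribution of the initial condition decays like $(n_0/n)^a$.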
3 Experiments
When performing PCA in practice with massive $d$ and a large/growing dataset, an incremental
method like that of Krasulina or Oja remains practically viable, even as quadratic-time and -memory
algorithms become increasingly impractical. Arora et al. [2] have a more complete discussion of
the empirical necessity of incremental PCA algorithms, including a version of Oja's method which
is shown to be extremely competitive in practice.

Since the efficiency benefits of these types of algorithms are well understood, we now instead focus
on the effect of the learning rate on the performance of Oja's algorithm (results for Krasulina's are
extremely similar). We use the CMU PIE faces [13], consisting of 11554 images of size $32 \times 32$,
as a prototypical example of a dataset with most of its variance captured by a few PCs, as shown in
Fig. 1. We set $n_o = 0$.
We expect from Theorem 1.1 and the discussion in the introduction that varying c (the constant in
the learning rate) will influence the overall rate of convergence. In particular, if c is low, then halving
it can be expected to halve the exponent of n, and the slope of the log-log convergence graph (ref.
the remark after Thm. 1.1). This is exactly what occurs in practice, as illustrated in Fig. 2. The
dotted line in that figure is a convergence rate of 1/n, drawn as a guide.
Figures 1 and 2: the covariance spectrum of the PIE dataset (eigenvalue versus component number), and the dependence of the Oja subspace rule on $c$ (reconstruction error versus iteration number, on log-log axes, for $c = 6, 3, 1.5, 1, 0.666, 0.444, 0.296$).
4 Open problems
Several fundamental questions remain unanswered. First, the convergence rates of the two incremental schemes depend on the multiplier c in the learning rate ?n . If it is too low, convergence will
be slower than O(1/n). If it is too high, the constant in the rate of convergence will be large. Is
there a simple and practical scheme for setting c?
Second, what can be said about incrementally estimating the top p eigenvectors, for p > 1? Both
methods we consider extend easily to this case [10]; the estimate at time n is a $d \times p$ matrix $V_n$
whose columns correspond to the eigenvectors, with the invariant $V_n^T V_n = I_p$ always maintained.
In Oja's algorithm, for instance, when a new data point $X_n \in \mathbb{R}^d$ arrives, the following update is
performed:

$W_n = V_{n-1} + \gamma_n X_n X_n^T V_{n-1}$
$V_n = \mathrm{orth}(W_n)$

where the second step orthonormalizes the columns, for instance by Gram-Schmidt. It would be
interesting to characterize the rate of convergence of this scheme.
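The update above can be sketched in a few lines of NumPy. This is a minimal illustrative sketch, not the authors' implementation: the function name `oja_subspace`, the QR-based orthonormalization, and the learning rate schedule $\gamma_n = c/(n_0 + n)$ are assumptions made for the example.

```python
import numpy as np

def oja_subspace(X, p, c, n0=0, seed=None):
    """Incremental estimation of the top-p principal subspace (Oja's rule).

    X  : (n, d) array of data points, streamed one row at a time.
    p  : number of eigenvectors to estimate.
    c  : constant in the learning rate gamma_n = c / (n0 + n).
    The invariant V^T V = I_p is restored after every update by
    orthonormalizing the columns (here via QR, i.e. Gram-Schmidt).
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    V, _ = np.linalg.qr(rng.standard_normal((d, p)))  # random orthonormal start
    for t in range(n):
        gamma = c / (n0 + t + 1)
        x = X[t]
        W = V + gamma * np.outer(x, x @ V)  # W_n = V_{n-1} + gamma_n x x^T V_{n-1}
        V, _ = np.linalg.qr(W)              # V_n = orth(W_n)
    return V

# Toy check: data whose variance is dominated by the first two coordinates,
# so the estimated subspace should (nearly) contain e1 and e2.
rng = np.random.default_rng(0)
X = rng.standard_normal((5000, 10)) * np.array([10.0, 5.0] + [0.1] * 8)
V = oja_subspace(X, p=2, c=3.0, seed=1)
print(np.abs(V[:2, :]))
```

Varying `c` in this sketch reproduces the qualitative behavior discussed above: small `c` flattens the slope of the log-log convergence curve.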
Finally, our analysis applies to a modified procedure in which the starting time $n_0$ is artificially set
to a large constant. This seems unnecessary in practice, and it would be useful to extend the analysis
to the case where $n_0 = 0$.
Acknowledgments
The authors are grateful to the National Science Foundation for support under grant IIS-1162581.
References
[1] A. Agarwal, O. Chapelle, M. Dudík, and J. Langford. A reliable effective terascale linear learning system. CoRR, abs/1110.4198, 2011.
[2] R. Arora, A. Cotter, K. Livescu, and N. Srebro. Stochastic optimization for PCA and PLS. In 50th Annual Allerton Conference on Communication, Control, and Computing, pages 861-868, 2012.
[3] R. Arora, A. Cotter, and N. Srebro. Stochastic optimization of PCA with capped MSG. In Advances in Neural Information Processing Systems, 2013.
[4] G. Blanchard, O. Bousquet, and L. Zwald. Statistical properties of kernel principal component analysis. Machine Learning, 66(2-3):259-294, 2007.
[5] T. T. Cai, Z. Ma, and Y. Wu. Sparse PCA: Optimal rates and adaptive estimation. CoRR, abs/1211.1309, 2012.
[6] R. Durrett. Probability: Theory and Examples. Duxbury, second edition, 1995.
[7] T. P. Krasulina. A method of stochastic approximation for the determination of the least eigenvalue of a symmetrical matrix. USSR Computational Mathematics and Mathematical Physics, 9(6):189-195, 1969.
[8] I. Mitliagkas, C. Caramanis, and P. Jain. Memory limited, streaming PCA. In Advances in Neural Information Processing Systems, 2013.
[9] E. Oja. Subspace Methods of Pattern Recognition. Research Studies Press, 1983.
[10] E. Oja and J. Karhunen. On stochastic approximation of the eigenvectors and eigenvalues of the expectation of a random matrix. Journal of Math. Analysis and Applications, 106:69-84, 1985.
[11] A. Rakhlin, O. Shamir, and K. Sridharan. Making gradient descent optimal for strongly convex stochastic optimization. In International Conference on Machine Learning, 2012.
[12] S. Roweis. EM algorithms for PCA and SPCA. In Advances in Neural Information Processing Systems, 1997.
[13] T. Sim, S. Baker, and M. Bsat. The CMU pose, illumination, and expression database. IEEE Transactions on Pattern Analysis and Machine Intelligence, 25(12):1615-1618, 2003.
[14] V. Q. Vu and J. Lei. Minimax rates of estimation for sparse PCA in high dimensions. Journal of Machine Learning Research - Proceedings Track, 22:1278-1286, 2012.
[15] M. K. Warmuth and D. Kuzmin. Randomized PCA algorithms with regret bounds that are logarithmic in the dimension. In Advances in Neural Information Processing Systems, 2007.
[16] J. Weng, Y. Zhang, and W.-S. Hwang. Candid covariance-free incremental principal component analysis. IEEE Transactions on Pattern Analysis and Machine Intelligence, 25(8):1034-1040, 2003.
[17] L. Zwald and G. Blanchard. On the convergence of eigenspaces in kernel principal component analysis. In Advances in Neural Information Processing Systems, 2005.
Probabilistic Principal Geodesic Analysis
P. Thomas Fletcher
School of Computing
University of Utah
Salt Lake City, UT
[email protected]
Miaomiao Zhang
School of Computing
University of Utah
Salt Lake City, UT
[email protected]
Abstract
Principal geodesic analysis (PGA) is a generalization of principal component analysis (PCA) for dimensionality reduction of data on a Riemannian manifold. Currently PGA is defined as a geometric fit to the data, rather than as a probabilistic
model. Inspired by probabilistic PCA, we present a latent variable model for PGA
that provides a probabilistic framework for factor analysis on manifolds. To compute maximum likelihood estimates of the parameters in our model, we develop
a Monte Carlo Expectation Maximization algorithm, where the expectation is approximated by Hamiltonian Monte Carlo sampling of the latent variables. We
demonstrate the ability of our method to recover the ground truth parameters in
simulated sphere data, as well as its effectiveness in analyzing shape variability of
a corpus callosum data set from human brain images.
1 Introduction
Principal component analysis (PCA) [12] has been widely used to analyze high-dimensional data.
Tipping and Bishop proposed probabilistic PCA (PPCA) [22], which is a latent variable model for
PCA. A similar formulation was proposed by Roweis [18]. Their work opened up the possibility
for probabilistic interpretations for different kinds of factor analyses. For instance, Bayesian PCA
[3] extended PPCA by adding a prior on the factors, resulting in automatic selection of model dimensionality. Other examples of latent variable models include probabilistic canonical correlation
analysis (CCA) [1] and Gaussian process latent variable models [15]. Such latent variable models
have not, however, been extended to handle data from a Riemannian manifold.
Manifolds arise naturally as the appropriate representations for data that have smooth constraints.
For example, when analyzing directional data [16], i.e., vectors of unit length in $\mathbb{R}^n$, the correct representation is the sphere, $S^{n-1}$. Another important example of manifold data is in shape analysis,
where the definition of the shape of an object should not depend on its position, orientation, or scale.
Kendall [14] was the first to formulate a mathematically precise definition of shape as equivalence
classes of all translations, rotations, and scalings of point sets. The result is a manifold representation of shape, or shape space. Linear operations violate the natural constraints of manifold data,
e.g., a linear average of data on a sphere results in a vector that does not have unit length. As shown
recently [5], using the kernel trick with a Gaussian kernel maps data onto a Hilbert sphere, and
utilizing Riemannian distances on this sphere rather than Euclidean distances improves clustering
and classification performance. Other examples of manifold data include geometric transformations,
such as rotations and affine transforms, symmetric positive-definite tensors [9, 24], Grassmannian
manifolds (the set of $m$-dimensional linear subspaces of $\mathbb{R}^n$), and Stiefel manifolds (the set of orthonormal $m$-frames in $\mathbb{R}^n$) [23]. There has been some work on density estimation on Riemannian
manifolds. For example, there is a wealth of literature on parametric density estimation for directional data [16], e.g., spheres, projective spaces, etc. Nonparametric density estimation based on
kernel mixture models [2] was proposed for compact Riemannian manifolds. Methods for sampling from manifold-valued distributions have also been proposed [4, 25]. It's important to note
the distinction between manifold data, where the manifold representation is known a priori, versus
manifold learning and nonlinear component analysis [15, 20], where the data lies in Euclidean space
on some unknown, lower-dimensional manifold that must be learned.
Principal geodesic analysis (PGA) [10] generalizes PCA to nonlinear manifolds. It describes the
geometric variability of manifold data by finding lower-dimensional geodesic subspaces that minimize the residual sum-of-squared geodesic distances to the data. While [10] originally proposed an
approximate estimation procedure for PGA, recent contributions [19, 21] have developed algorithms
for exact solutions to PGA. Related work on manifold component analysis has introduced variants of
PGA. This includes relaxing the constraint that geodesics pass through the mean of the data [11] and,
for spherical data, replacing geodesic subspaces with nested spheres of arbitrary radius [13]. All of
these methods are based on geometric, least-squares estimation procedures, i.e., they find subspaces
that minimize the sum-of-squared geodesic distances to the data. Much like the original formulation
of PCA, current component analysis methods on manifolds lack a probabilistic interpretation. In this
paper, we propose a latent variable model for PGA, called probabilistic PGA (PPGA). The model
definition applies to generic manifolds. However, due to the lack of an explicit formulation for the
normalizing constant, our estimation is limited to symmetric spaces, which include many common
manifolds such as Euclidean space, spheres, Kendall shape spaces, Grassman/Stiefel manifolds, and
more. Analogous to PPCA, our method recovers low-dimensional factors as maximum likelihood.
2 Riemannian Geometry Background
In this section we briefly review some necessary facts about Riemannian geometry (see [6] for more
details). Recall that a Riemannian manifold is a differentiable manifold M equipped with a metric
g, which is a smoothly varying inner product on the tangent spaces of M . Given two vector fields
v, w on M , the covariant derivative ?v w gives the change of the vector field w in the v direction.
The covariant derivative is a generalization of the Euclidean directional derivative to the manifold
setting. Consider a curve ? : [0, 1] ? M and let ?? = d?/dt be its velocity. Given a vector field
V (t) defined along ?, we can define the covariant derivative of V to be DV
dt = ??? V . A vector field
is called parallel if the covariant derivative along the curve ? is zero. A curve ? is geodesic if it
satisfies the equation ??? ?? = 0. In other words, geodesics are curves with zero acceleration.
Recall that for any point p ? M and tangent vector v ? Tp M , the tangent space of M at p, there
is a unique geodesic curve ?, with initial conditions ?(0) = p and ?(0)
?
= v. This geodesic is only
guaranteed to exist locally. When ? is defined over the interval [0, 1], the Riemannian exponential
map at p is defined as Expp (v) = ?(1). In other words, the exponential map takes a position and
velocity as input and returns the point at time 1 along the geodesic with these initial conditions.
The exponential map is locally diffeomorphic onto a neighbourhood of p. Let V (p) be the largest
such neighbourhood. Then within V (p) the exponential map has an inverse, the Riemannian log
map, Logp : V (p) ? Tp M . For any point q ? V (p), the Riemannian distance function is given by
d(p, q) = k Logp (q)k. It will be convenient to include the point p as a parameter in the exponential
and log maps, i.e., define Exp(p, v) = Expp (v) and Log(p, q) = Logp (q). The gradient of the
squared distance function is ?p d(p, q)2 = ?2 Log(p, q).
3 Probabilistic Principal Geodesic Analysis
Before introducing our PPGA model for manifold data, we first review PPCA. The main idea of
PPCA is to model an $n$-dimensional Euclidean random variable $y$ as

$y = \mu + Bx + \epsilon, \qquad (1)$

where $\mu$ is the mean of $y$, $x$ is a $q$-dimensional latent variable, with $x \sim N(0, I)$, $B$ is an $n \times q$ factor
matrix that relates $x$ and $y$, and $\epsilon \sim N(0, \sigma^2 I)$ represents error. We will find it convenient to model
the factors as $B = W\Lambda$, where the columns of $W$ are mutually orthogonal, and $\Lambda$ is a diagonal
matrix of scale factors. This removes the rotation ambiguity of the latent factors and makes them
analogous to the eigenvectors and eigenvalues of standard PCA (there is still of course an ambiguity
in the ordering of the factors). We now generalize this model to random variables on Riemannian
manifolds.
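The PPCA generative process in (1) is easy to simulate. Below is a minimal NumPy sketch; the dimensions, scale factors, and noise level are illustrative assumptions, and the check at the end verifies the implied marginal covariance $W\Lambda^2 W^T + \sigma^2 I$.

```python
import numpy as np

rng = np.random.default_rng(0)
n, q = 5, 2                    # observed and latent dimensions (illustrative)
mu = np.zeros(n)
W, _ = np.linalg.qr(rng.standard_normal((n, q)))  # mutually orthogonal columns
Lam = np.diag([3.0, 1.5])                         # diagonal scale factors Lambda
sigma = 0.1

def sample_ppca(num):
    """Draw num samples from y = mu + W Lam x + eps, x ~ N(0,I), eps ~ N(0, sigma^2 I)."""
    x = rng.standard_normal((num, q))
    eps = sigma * rng.standard_normal((num, n))
    return mu + x @ (W @ Lam).T + eps

Y = sample_ppca(10000)
# The marginal covariance of y is W Lam^2 W^T + sigma^2 I.
C_emp = np.cov(Y, rowvar=False)
C_model = W @ Lam**2 @ W.T + sigma**2 * np.eye(n)
print(np.abs(C_emp - C_model).max())
```

The manifold model below replaces the addition $\mu + W\Lambda x$ with the exponential map, which is why no such closed-form marginal covariance is available there.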
3.1 Probability Model
Following [8, 17], we use a generalization of the normal distribution for a Riemannian manifold as
our noise model. Consider a random variable $y$ taking values on a Riemannian manifold $M$, defined
by the probability density function (pdf)

$p(y \mid \mu, \tau) = \frac{1}{C(\mu, \tau)} \exp\left(-\frac{\tau}{2} d(\mu, y)^2\right), \qquad C(\mu, \tau) = \int_M \exp\left(-\frac{\tau}{2} d(\mu, y)^2\right) dy. \qquad (2)$

We term this distribution a Riemannian normal distribution, and use the notation $y \sim N_M(\mu, \tau^{-1})$
to denote it. The parameter $\mu \in M$ acts as a location parameter on the manifold, and the parameter
$\tau \in \mathbb{R}^+$ acts as a dispersion parameter, similar to the precision of a Gaussian. This distribution has
the advantages that (1) it is applicable to any Riemannian manifold, (2) it reduces to a multivariate
normal distribution (with isotropic covariance) when $M = \mathbb{R}^n$, and (3) much like the Euclidean normal distribution, maximum-likelihood estimation of parameters gives rise to least-squares methods
(see [8] for details). We note that this noise model could be replaced with a different distribution,
perhaps specific to the type of manifold or application, and the inference procedure presented in the
next section could be modified accordingly.

The PPGA model for a random variable $y$ on a smooth Riemannian manifold $M$ is

$y \mid x \sim N_M\big(\mathrm{Exp}(\mu, z), \tau^{-1}\big), \qquad z = W \Lambda x, \qquad (3)$

where $x \sim N(0, 1)$ are again latent random variables in $\mathbb{R}^q$, $\mu$ here is a base point on $M$, $W$ is
a matrix with $q$ columns of mutually orthogonal tangent vectors in $T_\mu M$, $\Lambda$ is a $q \times q$ diagonal
matrix of scale factors for the columns of $W$, and $\tau$ is a scale parameter for the noise. In this
model, a linear combination of $W\Lambda$ and the latent variables $x$ forms a new tangent vector $z \in T_\mu M$.
Next, the exponential map shoots the base point $\mu$ by $z$ to generate the location parameter of a
Riemannian normal distribution, from which the data point $y$ is drawn. Note that in Euclidean
space, the exponential map is an addition operation, $\mathrm{Exp}(\mu, z) = \mu + z$. Thus, our model coincides
with (1), the standard PPCA model, when $M = \mathbb{R}^n$.
3.2 Inference
We develop a maximum likelihood procedure to estimate the parameters $\theta = (\mu, W, \Lambda, \tau)$ of the
PPGA model defined in (3). Given observed data $y_i \in \{y_1, \ldots, y_N\}$ on $M$, with associated latent
variables $x_i \in \mathbb{R}^q$, and $z_i = W \Lambda x_i$, we formulate an expectation maximization (EM) algorithm.
Since the expectation step over the latent variables does not yield a closed-form solution, we develop
a Hamiltonian Monte Carlo (HMC) method to sample $x_i$ from the posterior $p(x \mid y; \theta)$, the log of
which is given by

$\log \prod_{i=1}^N p(x_i \mid y_i; \theta) \propto -N \log C - \sum_{i=1}^N \left( \frac{\tau}{2} d\big(\mathrm{Exp}(\mu, z_i), y_i\big)^2 + \frac{\|x_i\|^2}{2} \right), \qquad (4)$

and use this in a Monte Carlo Expectation Maximization (MCEM) scheme to estimate $\theta$. The
procedure contains two main steps:
3.2.1 E-step: HMC
For each $x_i$, we draw a sample of size $S$ from the posterior distribution (4) using HMC with the current estimated parameters $\theta^k$. Denoting $x_{ij}$ as the $j$th sample for $x_i$, the Monte Carlo approximation
of the $Q$ function is given by

$Q(\theta \mid \theta^k) = E_{x_i \mid y_i; \theta^k}\left[\log \prod_{i=1}^N p(x_i \mid y_i; \theta)\right] \approx \frac{1}{S} \sum_{j=1}^S \sum_{i=1}^N \log p(x_{ij} \mid y_i; \theta). \qquad (5)$

In our HMC sampling procedure, the potential energy of the Hamiltonian $H(x_i, m) = U(x_i) +
V(m)$ is defined as $U(x_i) = -\log p(x_i \mid y_i; \theta)$, and the kinetic energy $V(m)$ is a typical isotropic
Gaussian distribution on a $q$-dimensional auxiliary momentum variable, $m$. This gives us a Hamiltonian system to integrate:

$\frac{dx_i}{dt} = \frac{\partial H}{\partial m} = m, \qquad \frac{dm}{dt} = -\frac{\partial H}{\partial x_i} = -\nabla_{x_i} U.$

Due to the fact that $x_i$ is a Euclidean variable, we use a standard "leap-frog" numerical integration scheme, which approximately conserves the Hamiltonian and results in high acceptance rates.
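The leap-frog scheme can be sketched as follows. This is a generic HMC sketch, not the authors' code: it is exercised here on a standard Gaussian target rather than the posterior (4), and the step size, trajectory length, and function names are illustrative assumptions.

```python
import numpy as np

def leapfrog(x, m, grad_U, eps, L):
    """Leap-frog integration of dx/dt = m, dm/dt = -grad U(x)."""
    m = m - 0.5 * eps * grad_U(x)        # initial half step for momentum
    for _ in range(L - 1):
        x = x + eps * m                  # full step for position
        m = m - eps * grad_U(x)          # full step for momentum
    x = x + eps * m
    m = m - 0.5 * eps * grad_U(x)        # final half step for momentum
    return x, m

def hmc_step(x, U, grad_U, eps=0.1, L=20, rng=None):
    if rng is None:
        rng = np.random.default_rng()
    m0 = rng.standard_normal(x.shape)    # resample auxiliary momentum
    x_new, m_new = leapfrog(x, m0, grad_U, eps, L)
    # Metropolis accept/reject on the (approximately conserved) Hamiltonian.
    dH = (U(x_new) + 0.5 * m_new @ m_new) - (U(x) + 0.5 * m0 @ m0)
    return x_new if np.log(rng.uniform()) < -dH else x

# Sanity check on a standard normal target, U(x) = ||x||^2 / 2.
rng = np.random.default_rng(0)
U = lambda x: 0.5 * x @ x
grad_U = lambda x: x
x = np.zeros(2)
samples = []
for _ in range(2000):
    x = hmc_step(x, U, grad_U, rng=rng)
    samples.append(x)
samples = np.array(samples)
print(samples.mean(axis=0), samples.var(axis=0))
```

In the model above, `U` would be the negative log-posterior from (4) and `grad_U` the gradient in (7).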
The computation of the gradient term $\nabla_{x_i} U(x_i)$ requires that we compute $d_v \mathrm{Exp}(p, v)$, i.e., the
derivative operator (Jacobian matrix) of the exponential map with respect to the initial velocity $v$. To derive
this, consider a variation of geodesics $c(s, t) = \mathrm{Exp}(p, su + tv)$, where $u \in T_p M$. The variation
$c$ produces a "fan" of geodesics; this is illustrated for a sphere on the left side of Figure 1. Taking
the derivative of this variation results in a Jacobi field: $J_v(t) = dc/ds(0, t)$. Finally, this gives an
expression for the exponential map derivative as

$d_v \mathrm{Exp}(p, v) u = J_v(1). \qquad (6)$

For a general manifold, computing the Jacobi field $J_v$ requires solving a second-order ordinary differential equation. However, Jacobi fields can be evaluated in closed form for the class of manifolds
known as symmetric spaces. For the sphere and Kendall shape space examples, we provide explicit
formulas for these computations in Section 4. For more details on the derivation of the Jacobi field
equation and symmetric spaces, see for instance [6].

Now, the gradient with respect to each $x_i$ is

$\nabla_{x_i} U = x_i - \tau \Lambda W^T \big\{ d_{z_i} \mathrm{Exp}(\mu, z_i)^\dagger \, \mathrm{Log}(\mathrm{Exp}(\mu, z_i), y_i) \big\}, \qquad (7)$

where $\dagger$ represents the adjoint of a linear operator, i.e.,

$\big\langle d_{z_i} \mathrm{Exp}(\mu, z_i)^\dagger \hat{u}, \hat{v} \big\rangle = \big\langle \hat{u}, d_{z_i} \mathrm{Exp}(\mu, z_i) \hat{v} \big\rangle.$
3.2.2 M-step: Gradient Ascent
In this section, we derive the maximization step for updating the parameters $\theta = (\mu, W, \Lambda, \tau)$ by
maximizing the HMC approximation of the $Q$ function in (5). This turns out to be a gradient ascent
scheme for all the parameters since there are no closed-form solutions.

Gradient for $\tau$: The gradient of the $Q$ function with respect to $\tau$ requires evaluation of the derivative of the normalizing constant in the Riemannian normal distribution (2). When $M$ is a symmetric
space, this constant does not depend on the mean parameter, $\mu$, because the distribution is invariant
to isometries (see [8] for details). Thus, the normalizing constant can be written as

$C(\tau) = \int_M \exp\left(-\frac{\tau}{2} d(\mu, y)^2\right) dy.$

We can rewrite this integral in normal coordinates, which can be thought of as a polar coordinate system in the tangent space, $T_\mu M$. The radial coordinate is defined as $r = d(\mu, y)$, and the remaining
$n - 1$ coordinates are parametrized by a unit vector $v$, i.e., a point on the unit sphere $S^{n-1} \subset T_\mu M$.
Thus we have the change of variables $\phi(rv) = \mathrm{Exp}(\mu, rv)$. Now the integral for the normalizing
constant becomes

$C(\tau) = \int_{S^{n-1}} \int_0^{R(v)} \exp\left(-\frac{\tau}{2} r^2\right) \big| \det\big(d\phi(rv)\big) \big| \, dr \, dv, \qquad (8)$

where $R(v)$ is the maximum distance for which $\phi(rv)$ is defined. Note that this formula is only valid if
$M$ is a complete manifold, which guarantees that normal coordinates are defined everywhere except
possibly a set of measure zero on $M$.
The integral in (8) is difficult to compute for general manifolds, due to the presence of the determinant of the Jacobian of $\phi$. However, for symmetric spaces this change-of-variables term has a simple
form. If $M$ is a symmetric space, there exists an orthonormal basis $u_1, \ldots, u_n$, with $u_1 = v$, such
that

$\big| \det\big(d\phi(rv)\big) \big| = \prod_{k=2}^n \frac{1}{\sqrt{\kappa_k}} f_k\big(\sqrt{\kappa_k}\, r\big), \qquad (9)$

where $\kappa_k = K(u_1, u_k)$ denotes the sectional curvature, and $f_k$ is defined as

$f_k(x) = \begin{cases} \sin(x) & \text{if } \kappa_k > 0, \\ \sinh(x) & \text{if } \kappa_k < 0, \\ x & \text{if } \kappa_k = 0. \end{cases}$

Notice that with this expression for the Jacobian determinant there is no longer a dependence on $v$
inside the integral in (8). Also, if $M$ is simply connected, then $R(v) = R$ does not depend on the
direction $v$, and we can write the normalizing constant as

$C(\tau) = A_{n-1} \int_0^R \exp\left(-\frac{\tau}{2} r^2\right) \prod_{k=2}^n \kappa_k^{-1/2} f_k\big(\sqrt{\kappa_k}\, r\big) \, dr,$

where $A_{n-1}$ is the surface area of the $n - 1$ hypersphere, $S^{n-1}$. The remaining integral is one-dimensional, and can be quickly and accurately approximated by numerical integration. While
this formula works only for simply connected symmetric spaces, other symmetric spaces could be
handled by lifting to the universal cover, which is simply connected, or by restricting the definition
of the Riemannian normal pdf in (2) to have support only up to the injectivity radius, i.e., $R =
\min_v R(v)$.
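For a concrete instance of this one-dimensional integral, consider $M = S^n$, where every sectional curvature is 1, so $f_k = \sin$ and $R = \pi$. The sketch below is illustrative (the function name and midpoint-rule discretization are assumptions); it checks that for large $\tau$, where the mass concentrates near $\mu$, the constant on $S^2$ approaches the planar Gaussian value $2\pi/\tau$.

```python
import numpy as np
from math import gamma, pi

def log_C_sphere(tau, n=2, num=200000):
    """log C(tau) for the Riemannian normal on S^n.

    On S^n every sectional curvature is 1, so f_k(r) = sin(r) and the
    injectivity radius is R = pi, giving
        C(tau) = A_{n-1} * int_0^pi exp(-tau r^2 / 2) sin(r)^(n-1) dr,
    where A_{n-1} is the surface area of the unit (n-1)-sphere.
    The integral is approximated with the midpoint rule.
    """
    A = 2 * pi ** (n / 2) / gamma(n / 2)       # surface area of S^{n-1}
    r = (np.arange(num) + 0.5) * pi / num      # midpoints on (0, pi)
    integrand = np.exp(-tau * r**2 / 2) * np.sin(r) ** (n - 1)
    return np.log(A * integrand.sum() * pi / num)

# Sanity check: for large tau the distribution is nearly a planar Gaussian,
# so on S^2 we expect C(tau) to approach 2*pi/tau.
for tau in [10.0, 100.0]:
    print(tau, np.exp(log_C_sphere(tau)), 2 * np.pi / tau)
```

The same quadrature evaluates the integral appearing in the $\tau$ gradient below, with the integrand multiplied by $r^2/2$.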
The gradient term for estimating $\tau$ is

$\nabla_\tau Q = \frac{1}{S} \sum_{i=1}^N \sum_{j=1}^S \left[ \frac{A_{n-1}}{C(\tau)} \int_0^R \frac{r^2}{2} \exp\left(-\frac{\tau}{2} r^2\right) \prod_{k=2}^n \kappa_k^{-1/2} f_k\big(\sqrt{\kappa_k}\, r\big) \, dr - \frac{1}{2} d\big(\mathrm{Exp}(\mu, z_{ij}), y_i\big)^2 \right].$
Gradient for $\mu$: From (4) and (5), the gradient term for updating $\mu$ is

$\nabla_\mu Q = \frac{1}{S} \sum_{i=1}^N \sum_{j=1}^S \tau \, d_\mu \mathrm{Exp}(\mu, z_{ij})^\dagger \, \mathrm{Log}\big(\mathrm{Exp}(\mu, z_{ij}), y_i\big).$

Here the derivative $d_\mu \mathrm{Exp}(\mu, v)$ is with respect to the base point, $\mu$. Similar to before (6),
this derivative can be derived from a variation of geodesics: $c(s, t) = \mathrm{Exp}(\mathrm{Exp}(\mu, su), t v(s))$,
where $v(s)$ comes from parallel translating $v$ along the geodesic $\mathrm{Exp}(\mu, su)$. Again, the derivative of the exponential map is given by a Jacobi field satisfying $J_\mu(t) = dc/ds(0, t)$, and we have
$d_\mu \mathrm{Exp}(\mu, v) = J_\mu(1)$.
Gradient for $\Lambda$: For updating $\Lambda$, we take the derivative with respect to each $a$th diagonal element $\Lambda^a$ as

$\frac{\partial Q}{\partial \Lambda^a} = \frac{1}{S} \sum_{i=1}^N \sum_{j=1}^S \tau \, (W^a x^a_{ij})^T \big\{ d_{z_{ij}} \mathrm{Exp}(\mu, z_{ij})^\dagger \, \mathrm{Log}\big(\mathrm{Exp}(\mu, z_{ij}), y_i\big) \big\},$

where $W^a$ denotes the $a$th column of $W$, and $x^a_{ij}$ is the $a$th component of $x_{ij}$.
Gradient for $W$: The gradient with respect to $W$ is

$\nabla_W Q = \frac{1}{S} \sum_{i=1}^N \sum_{j=1}^S \tau \, d_{z_{ij}} \mathrm{Exp}(\mu, z_{ij})^\dagger \, \mathrm{Log}\big(\mathrm{Exp}(\mu, z_{ij}), y_i\big) \, x_{ij}^T \Lambda. \qquad (10)$

To preserve the mutual orthogonality constraint on the columns of $W$, we represent $W$ as a point
on the Stiefel manifold $V_q(T_\mu M)$, i.e., the space of orthonormal $q$-frames in $T_\mu M$. We project the
gradient in (10) onto the tangent space $T_W V_q(T_\mu M)$, and then update $W$ by taking a small step
along the geodesic in the projected gradient direction. For details on the geodesic computations for
Stiefel manifolds, see [7].
The MCEM algorithm for PPGA is an iterative procedure for finding the subspace spanned by $q$
principal components, shown in Algorithm 1. The computation time per iteration depends on the
complexity of the exponential map, log map, and Jacobi field, which may vary for different manifolds.
Note that the cost of the gradient ascent algorithm also depends linearly on the data size, dimensionality,
and the number of samples drawn. An advantage of MCEM is that it can run in parallel for each
data point.
Algorithm 1 Monte Carlo Expectation Maximization for Probabilistic Principal Geodesic Analysis
Input: Data set $Y$, reduced dimension $q$.
Initialize $\mu, W, \Lambda, \tau$.
repeat
  Sample $X$ by HMC using the gradient in (7).
  Update $\mu, W, \Lambda, \tau$ by gradient ascent as in Section 3.2.2.
until convergence
4 Experiments
In this section, we demonstrate the effectiveness of PPGA and our ML estimation using both simulated data on the 2D sphere and a real corpus callosum data set. Before presenting the experiments
with PPGA, we briefly review the necessary computations for the specific types of manifolds used,
including the Riemannian exponential map, log map, and Jacobi fields.
4.1 Simulated Sphere Data
Sphere geometry overview: Let $p$ be a point on an $n$-dimensional sphere embedded in $\mathbb{R}^{n+1}$, and
let $v$ be a tangent at $p$. The inner product between tangents at a base point $p$ is the usual Euclidean
inner product. The exponential map is given by a 2D rotation of $p$ by an angle given by the norm of
the tangent, i.e.,

$\mathrm{Exp}(p, v) = \cos\theta \cdot p + \frac{\sin\theta}{\theta} \cdot v, \qquad \theta = \|v\|. \qquad (11)$

The log map between two points $p, q$ on the sphere can be computed by finding the initial velocity
of the rotation between the two points. Let $\pi_p(q) = p \cdot \langle p, q \rangle$ denote the projection of the vector $q$
onto $p$. Then,

$\mathrm{Log}(p, q) = \frac{\theta \cdot (q - \pi_p(q))}{\|q - \pi_p(q)\|}, \qquad \theta = \arccos(\langle p, q \rangle). \qquad (12)$

All sectional curvatures for $S^n$ are equal to one. The adjoint derivatives of the exponential map are
given by

$d_p \mathrm{Exp}(p, v)^\dagger w = \cos(\|v\|)\, w^\perp + w^\top, \qquad d_v \mathrm{Exp}(p, v)^\dagger w = \frac{\sin(\|v\|)}{\|v\|}\, w^\perp + w^\top,$

where $w^\perp, w^\top$ denote the components of $w$ that are orthogonal and tangent to $v$, respectively. An
illustration of geodesics and the Jacobi fields that give rise to the exponential map derivatives is
shown in Figure 1.
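Equations (11) and (12) translate directly into code. Below is a minimal NumPy sketch (the function names and numerical tolerances are illustrative), with a round-trip check that $\mathrm{Log}(p, \mathrm{Exp}(p, v))$ recovers $v$ for $\|v\| < \pi$.

```python
import numpy as np

def exp_map(p, v):
    """Riemannian exponential map on S^n (eq. 11): rotate p by theta = ||v||."""
    theta = np.linalg.norm(v)
    if theta < 1e-12:
        return p
    return np.cos(theta) * p + np.sin(theta) * v / theta

def log_map(p, q):
    """Riemannian log map on S^n (eq. 12): initial velocity of the rotation p -> q."""
    proj = p * np.dot(p, q)              # projection of q onto p
    u = q - proj
    nu = np.linalg.norm(u)
    if nu < 1e-12:
        return np.zeros_like(p)
    theta = np.arccos(np.clip(np.dot(p, q), -1.0, 1.0))
    return theta * u / nu

# Round trip at the north pole with a tangent vector (orthogonal to p).
p = np.array([0.0, 0.0, 1.0])
v = np.array([0.3, -0.4, 0.0])
q = exp_map(p, v)
v_rec = log_map(p, q)
print(np.linalg.norm(q), np.max(np.abs(v - v_rec)))
```

The `np.clip` guards against floating-point values of $\langle p, q \rangle$ falling slightly outside $[-1, 1]$ before `arccos`.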
Parameter estimation on the sphere: Using our generative model for PGA (3), we forward
simulated a random sample of 100 data points on the unit sphere $S^2$, with known parameters $\theta =
(\mu, W, \Lambda, \tau)$, shown in Table 1. Next, we ran our maximum likelihood estimation procedure to test
whether we could recover those parameters. We initialized ? from a random uniform point on the
sphere. We initialized W as a random Gaussian matrix, to which we then applied the Gram-Schmidt
algorithm to ensure its columns were orthonormal. Figure 1 compares the ground truth principal
geodesics and MLE principal geodesic analysis using our algorithm. A good overlap between the
first principal geodesic shows that PPGA recovers the model parameters.
One advantage that our PPGA model has over the least-squares PGA formulation is that the mean
point is estimated jointly with the principal geodesics. In the standard PGA algorithm, the mean
is estimated first (using geodesic least-squares), then the principal geodesics are estimated second.
This does not make a difference in the Euclidean case (principal components must pass through the
mean), but it does in the nonlinear case. We compared our model with PGA and standard PCA (in
the Euclidean embedding space). The estimation error of principal geodesics turned to be larger in
PGA compared to our model. Furthermore, the standard PCA converges to an incorrect solution due
to its inappropriate use of a Euclidean metric on Riemannian data. A comparison of the ground truth
parameters and these methods is given in Table 1. Note that the noise precision $\tau$ is not a part of
either the PGA or PCA models.
               mu                      w                       lambda  tau
Ground truth   (-0.78, 0.48, -0.37)    (-0.59, -0.42, 0.68)    0.40    100
PPGA           (-0.78, 0.48, -0.40)    (-0.59, -0.43, 0.69)    0.41    102
PGA            (-0.79, 0.46, -0.41)    (-0.59, -0.38, 0.70)    0.41    N/A
PCA            (-0.70, 0.41, -0.46)    (-0.62, -0.37, 0.69)    0.38    N/A

Table 1: Comparison between ground truth parameters for the simulated data and the MLE of PPGA,
non-probabilistic PGA, and standard PCA.
[Figure 1. Left: Jacobi fields on the sphere $M$ (base point $p$, initial velocity $v$, Jacobi field $J$). Right: the principal geodesic of randomly generated data on the unit sphere. Blue dots: randomly generated sphere data set. Yellow line: ground-truth principal geodesic. Red line: estimated principal geodesic using PPGA.]
4.2 Shape Analysis of the Corpus Callosum
Shape space geometry: A configuration of $k$ points in the 2D plane is considered as a complex
$k$-vector, $z \in \mathbb{C}^k$. Removing translation, by requiring the centroid to be zero, projects this point to
the linear complex subspace $V = \{z \in \mathbb{C}^k : \sum z_i = 0\}$, which is equivalent to the space $\mathbb{C}^{k-1}$.
Next, points in this subspace are deemed equivalent if they are a rotation and scaling of each other,
which can be represented as multiplication by a complex number, $\rho e^{i\theta}$, where $\rho$ is the scaling factor
and $\theta$ is the rotation angle. The set of such equivalence classes forms the complex projective space,
$\mathbb{CP}^{k-2}$.

We think of a centered shape $p \in V$ as representing the complex line $L_p = \{z \cdot p : z \in \mathbb{C} \setminus \{0\}\}$,
i.e., $L_p$ consists of all point configurations with the same shape as $p$. A tangent vector at $L_p \in V$ is
a complex vector, $v \in V$, such that $\langle p, v \rangle = 0$. The exponential map is given by rotating (within $V$)
the complex line $L_p$ by the initial velocity $v$, that is,

$\mathrm{Exp}(p, v) = \cos\theta \cdot p + \frac{\|p\| \sin\theta}{\theta} \cdot v, \qquad \theta = \|v\|. \qquad (13)$

Likewise, the log map between two shapes $p, q \in V$ is given by finding the initial velocity of the
rotation between the two complex lines $L_p$ and $L_q$. Let $\pi_p(q) = p \, \langle p, q \rangle / \|p\|^2$ denote the projection
of the vector $q$ onto $p$. Then the log map is given by

$\mathrm{Log}(p, q) = \frac{\theta \cdot (q - \pi_p(q))}{\|q - \pi_p(q)\|}, \qquad \theta = \arccos\frac{|\langle p, q \rangle|}{\|p\| \|q\|}. \qquad (14)$

The sectional curvatures of $\mathbb{CP}^{k-2}$, $\kappa_i = K(u_i, v)$, used in (9), can be computed as follows. Let
$u_1 = i \cdot v$, where we treat $v$ as a complex vector and $i = \sqrt{-1}$. The remaining $u_2, \ldots, u_n$ can be
chosen arbitrarily to construct an orthonormal frame with $v$ and $u_1$. Then we have $K(u_1, v) = 4$
and $K(u_i, v) = 1$ for $i > 1$. The adjoint derivatives of the exponential map are given by

$d_p \mathrm{Exp}(p, v)^\dagger w = \cos(\|v\|)\, w_1^\perp + \cos(2\|v\|)\, w_2^\perp + w^\top,$
$d_v \mathrm{Exp}(p, v)^\dagger w = \frac{\sin(\|v\|)}{\|v\|}\, w_1^\perp + \frac{\sin(2\|v\|)}{2\|v\|}\, w_2^\perp + w^\top,$

where $w_1^\perp$ denotes the component of $w$ parallel to $u_1$, i.e., $w_1^\perp = \langle w, u_1 \rangle u_1$, $w_2^\perp$ denotes the remaining orthogonal component of $w$, and $w^\top$ denotes the component tangent to $v$.
7
Shape variability of corpus callosum data: As a demonstration of PPGA on Kendall shape
space, we applied it to corpus callosum shape data derived from the OASIS database (www.
oasis-brains.org). The data consisted of magnetic resonance images (MRI) from 32 healthy
adult subjects. The corpus callosum was segmented in a midsagittal slice using the ITK SNAP
program (www.itksnap.org). An example of a segmented corpus callosum in an MRI is
shown in Figure 2. The boundaries of these segmentations were sampled with 64 points using ShapeWorks (www.sci.utah.edu/software.html). This algorithm generates a sampling of a set of shape boundaries while enforcing correspondences between different point models within the population. Figure 2 displays the first two modes of corpus callosum shape variation, generated from the as points along the estimated principal geodesics: Exp(?, ?i wi ), where
?i = ?3?i , ?1.5?i , 0, 1.5?i , 3?i , for i = 1, 2.
? 3?1
? 1.5?1
0
1.5?1
3?1
? 3?2
? 1.5?2
0
1.5?2
3?2
Figure 2: Left: example corpus callosum segmentation from an MRI slice. Middle to right: first and
second PGA mode of shape variation with ?3, ?1.5, 1.5, and 3 ? ?.
5
Conclusion
We presented a latent variable model of PGA on Riemannian manifolds. We developed a Monte
Carlo Expectation Maximization for maximum likelihood estimation of parameters that uses Hamiltonian Monte Carlo to integrate over the posterior distribution of latent variables. This work takes the
first step to bring latent variable models to Riemannian manifolds. This opens up several possibilities for new factor analyses on Riemannian manifolds, including a rigorous formulation for mixture
models of PGA and automatic dimensionality selection with a Bayesian formulation of PGA.
Acknowledgments This work was supported in part by NSF CAREER Grant 1054057.
References
[1] F. R. Bach and M. I. Jordan. A probabilistic interpretation of canonical correlation analysis.
Technical Report 608, Department of Statistics, University of California, Berkeley, 2005.
[2] A. Bhattacharya and D. B. Dunson. Nonparametric bayesian density estimation on manifolds
with applications to planar shapes. Biometrika, 97(4):851?865, 2010.
[3] C. M. Bishop. Bayesian PCA. Advances in neural information processing systems, pages
382?388, 1999.
[4] S. Byrne and M. Girolami. Geodesic Monte Carlo on embedded manifolds. arXiv preprint
arXiv:1301.6064, 2013.
[5] N. Courty, T. Burger, and P. F. Marteau. Geodesic analysis on the Gaussian RKHS hypersphere.
In Machine Learning and Knowledge Discovery in Databases, pages 299?313, 2012.
[6] M. do Carmo. Riemannian Geometry. Birkh?auser, 1992.
[7] A. Edelman, T. A Arias, and S. T Smith. The geometry of algorithms with orthogonality
constraints. SIAM journal on Matrix Analysis and Applications, 20(2):303?353, 1998.
[8] P. T. Fletcher. Geodesic regression and the theory of least squares on Riemannian manifolds.
International Journal of Computer Vision, pages 1?15, 2012.
[9] P. T. Fletcher and S. Joshi. Principal geodesic analysis on symmetric spaces: statistics of
diffusion tensors. In Workshop on Computer Vision Approaches to Medical Image Analysis
(CVAMIA), 2004.
8
[10] P. T. Fletcher, C. Lu, and S. Joshi. Statistics of shape via principal geodesic analysis on Lie
groups. In Computer Vision and Pattern Recognition, pages 95?101, 2003.
[11] S. Huckemann and H. Ziezold. Principal component analysis for Riemannian manifolds, with
an application to triangular shape spaces. Advances in Applied Probability, 38(2):299?319,
2006.
[12] I. T. Jolliffe. Principal Component Analysis, volume 487. Springer-Verlag New York, 1986.
[13] S. Jung, I. L. Dryden, and J. S. Marron. Analysis of principal nested spheres. Biometrika,
99(3):551?568, 2012.
[14] D. G. Kendall. Shape manifolds, Procrustean metrics, and complex projective spaces. Bulletin
of the London Mathematical Society, 16:18?121, 1984.
[15] N. D. Lawrence. Gaussian process latent variable models for visualisation of high dimensional
data. Advances in neural information processing systems, 16:329?336, 2004.
[16] K. V. Mardia. Directional Statistics. John Wiley and Sons, 1999.
[17] X. Pennec. Intrinsic statistics on Riemannian manifolds: basic tools for geometric measurements. Journal of Mathematical Imaging and Vision, 25(1), 2006.
[18] S. Roweis. EM algorithms for PCA and SPCA. Advances in neural information processing
systems, pages 626?632, 1998.
[19] S. Said, N. Courty, N. Le Bihan, and S. J. Sangwine. Exact principal geodesic analysis for
data on SO(3). In Proceedings of the 15th European Signal Processing Conference, pages
1700?1705, 2007.
[20] B. Sch?olkopf, A. Smola, and K. R. M?uller. Nonlinear component analysis as a kernel eigenvalue problem. Neural Computation, 10(5):1299?1319, 1998.
[21] S. Sommer, F. Lauze, S. Hauberg, and M. Nielsen. Manifold valued statistics, exact principal
geodesic analysis and the effect of linear approximations. In Proceedings of the European
Conference on Computer Vision, pages 43?56, 2010.
[22] M. E. Tipping and C. M. Bishop. Probabilistic principal component analysis. Journal of the
Royal Statistical Society: Series B (Statistical Methodology), 61(3):611?622, 1999.
[23] P. Turaga, A. Veeraraghavan, A. Srivastava, and R. Chellappa. Statistical computations on
Grassmann and Stiefel manifolds for image and video-based recognition. IEEE Trans. Pattern
Analysis and Machine Intelligence, 33(11):2273?2286, 2011.
[24] O. Tuzel, F. Porikli, and P. Meer. Pedestrian detection via classification on Riemannian manifolds. IEEE Trans. Pattern Analysis and Machine Intelligence, 30(10):1713?1727, 2008.
[25] M. Zhang, N. Singh, and P. T. Fletcher. Bayesian estimation of regularization and atlas building
in diffeomorphic image registration. In Information Processing in Medical Imaging, pages 37?
48. Springer, 2013.
9
| 5133 |@word determinant:2 mri:3 briefly:2 middle:1 norm:1 open:1 covariance:1 reduction:1 initial:6 configuration:2 contains:1 series:1 zij:7 rkhs:1 current:2 dx:1 must:2 written:1 john:1 numerical:2 shape:23 remove:1 atlas:1 update:2 generative:1 intelligence:2 accordingly:1 plane:1 isotropic:2 hamiltonian:5 smith:1 hypersphere:2 provides:1 location:2 lauze:1 org:2 zhang:2 mathematical:2 along:6 differential:1 incorrect:1 consists:1 edelman:1 inside:1 brain:2 inspired:1 spherical:1 equipped:1 inappropriate:1 becomes:1 project:2 xx:4 notation:1 estimating:1 burger:1 kind:1 developed:2 finding:4 transformation:1 porikli:1 guarantee:1 berkeley:1 act:2 biometrika:2 k2:1 uk:1 unit:6 grant:1 medical:2 yn:1 positive:1 before:3 treat:1 analyzing:2 approximately:1 frog:1 equivalence:2 relaxing:1 co:5 limited:1 projective:3 unique:1 acknowledgment:1 minv:1 definite:1 procedure:8 tuzel:1 area:1 universal:1 thought:1 convenient:2 projection:2 word:2 radial:1 onto:5 selection:2 operator:2 www:3 equivalent:2 map:24 maximizing:1 formulate:2 utilizing:1 orthonormal:5 spanned:1 embedding:1 handle:1 population:1 variation:6 coordinate:5 analogous:2 meer:1 exact:3 us:1 iu1:1 trick:1 velocity:6 element:1 approximated:2 conserve:1 updating:3 satisfying:1 recognition:2 database:2 observed:1 preprint:1 connected:3 ordering:1 ran:1 rq:2 complexity:1 ui:2 geodesic:40 depend:3 mcem:3 solving:1 rewrite:1 singh:1 basis:1 exi:1 represented:1 derivation:1 london:1 monte:9 birkh:1 chellappa:1 widely:1 valued:2 larger:1 snap:1 triangular:1 ability:1 statistic:6 think:1 jointly:1 advantage:3 differentiable:1 eigenvalue:2 propose:1 product:3 turned:1 ath:3 roweis:2 adjoint:3 olkopf:1 convergence:1 produce:1 converges:1 object:1 derive:2 develop:3 school:2 auxiliary:1 dzi:2 come:1 girolami:1 direction:3 radius:2 correct:1 opened:1 centered:1 human:1 translating:1 generalization:3 mathematically:1 considered:1 ground:6 normal:9 exp:37 fletcher:6 lawrence:1 vary:1 estimation:14 polar:1 applicable:1 
leap:1 currently:1 healthy:1 largest:1 city:2 callosum:9 tool:1 uller:1 gaussian:7 modified:1 rather:2 ck:3 varying:1 derived:2 likelihood:6 rigorous:1 centroid:1 diffeomorphic:2 hauberg:1 inference:2 visualisation:1 classification:2 orientation:1 html:1 priori:1 arccos:2 resonance:1 integration:2 initialize:1 mutual:1 auser:1 field:13 equal:1 construct:1 sampling:4 represents:2 report:1 preserve:1 replaced:1 geometry:6 detection:1 acceptance:1 possibility:2 mixture:2 kvk:11 integral:5 necessary:2 orthogonal:4 euclidean:12 initialized:2 rotating:1 instance:2 column:6 cover:1 tp:3 logp:3 maximization:6 ordinary:1 cost:1 introducing:1 kq:2 uniform:1 marron:1 kxi:1 density:5 international:1 siam:1 probabilistic:14 quickly:1 w1:4 squared:3 ambiguity:2 nm:2 again:2 possibly:1 dr:4 derivative:16 return:1 bx:1 potential:1 includes:1 pedestrian:1 depends:2 vi:1 closed:3 kendall:5 analyze:1 red:1 recover:2 parallel:4 contribution:1 minimize:2 square:5 likewise:1 yield:1 directional:4 yellow:1 generalize:1 bayesian:5 accurately:1 lu:1 carlo:9 kpk:1 definition:4 energy:2 dm:1 naturally:1 associated:1 riemannian:30 recovers:2 jacobi:9 sampled:1 ppca:6 recall:2 knowledge:1 ut:2 dimensionality:4 improves:1 hilbert:1 segmentation:2 nielsen:1 veeraraghavan:1 originally:1 tipping:2 dt:4 isometric:1 planar:1 methodology:1 formulation:6 evaluated:1 furthermore:1 smola:1 correlation:2 d:2 until:1 bihan:1 replacing:1 su:3 nonlinear:4 ei:1 lack:2 mode:2 perhaps:1 utah:5 building:1 effect:1 requiring:1 consisted:1 hamil:1 byrne:1 regularization:1 symmetric:10 illustrated:1 sin:6 coincides:1 procrustean:1 pdf:2 presenting:1 complete:1 demonstrate:2 cp:2 bring:1 stiefel:5 image:5 shoot:1 recently:1 common:1 rotation:8 overview:1 salt:2 volume:1 interpretation:3 onedimensional:1 measurement:1 automatic:2 fk:5 hp:5 dot:1 longer:1 surface:1 etc:1 base:4 curvature:3 multivariate:1 posterior:3 recent:1 verlag:1 carmo:1 pennec:1 arbitrarily:1 yi:12 injectivity:1 signal:1 relates:1 violate:1 rv:5 
reduces:1 smooth:2 segmented:2 technical:1 bach:1 sphere:22 mle:2 grassmann:1 qi:4 variant:1 regression:1 basic:1 vision:5 expectation:7 metric:3 arxiv:2 iteration:1 kernel:4 represent:1 itk:1 background:1 addition:1 interval:1 wealth:1 sch:1 w2:2 ascent:4 midsagittal:1 subject:1 effectiveness:2 jordan:1 joshi:2 presence:1 spca:1 fit:1 zi:7 inner:3 idea:1 det:2 whether:1 expression:2 pca:16 handled:1 york:1 eigenvectors:1 transforms:1 nonparametric:2 locally:2 reduced:1 generate:1 exist:1 xij:3 canonical:2 nsf:1 notice:1 estimated:6 per:1 tonian:1 blue:1 write:1 group:1 drawn:2 jv:3 registration:1 diffusion:1 imaging:2 sum:2 run:1 inverse:1 everywhere:1 angle:2 lake:2 draw:1 dy:2 scaling:3 pga:21 cca:1 guaranteed:1 sinh:1 display:1 correspondence:1 fan:1 constraint:5 orthogonality:2 kpk2:1 software:1 generates:1 u1:8 department:1 tv:2 according:1 turaga:1 combination:1 describes:1 em:2 son:1 wi:1 lp:5 tw:1 dv:6 invariant:1 equation:3 mutually:2 vq:2 turn:1 jolliffe:1 generalizes:1 operation:2 appropriate:1 generic:1 magnetic:1 neighbourhood:2 bhattacharya:1 schmidt:1 original:1 thomas:1 denotes:5 clustering:1 include:4 remaining:4 ensure:1 sommer:1 xtij:1 society:2 tensor:2 miaomiao:2 parametric:1 dependence:1 usual:1 diagonal:3 said:1 gradient:17 dp:2 subspace:7 distance:7 grassmannian:1 sci:3 simulated:5 parametrized:1 manifold:55 enforcing:1 length:2 illustration:1 demonstration:1 difficult:1 hmc:5 dunson:1 rise:2 unknown:1 dispersion:1 extended:2 variability:3 precise:1 frame:3 rn:6 y1:1 dc:2 arbitrary:1 introduced:1 california:1 distinction:1 learned:1 trans:2 adult:1 pattern:3 program:1 including:2 royal:1 video:1 overlap:1 natural:1 residual:1 representing:1 scheme:3 deemed:1 prior:1 geometric:5 literature:1 review:3 tangent:13 expp:2 multiplication:1 discovery:1 embedded:2 versus:1 integrate:2 affine:1 translation:2 course:1 jung:1 repeat:1 supported:1 jth:1 side:1 taking:3 bulletin:1 slice:2 curve:5 dimension:1 boundary:2 valid:1 gram:1 forward:1 
projected:1 approximate:1 compact:1 ml:1 corpus:9 xi:19 un:2 latent:17 iterative:1 table:3 career:1 complex:10 european:2 main:2 linearly:1 noise:4 arise:1 courty:2 wiley:1 precision:2 position:2 momentum:1 explicit:2 exponential:17 lq:1 lie:2 mardia:1 jacobian:3 hw:1 formula:3 removing:1 bishop:3 specific:2 r2:2 normalizing:5 exists:1 workshop:1 intrinsic:1 restricting:1 adding:1 aria:1 lifting:1 smoothly:1 simply:3 sectional:3 u2:1 applies:1 springer:2 covariant:4 nested:2 truth:6 satisfies:1 oasis:2 kinetic:1 acceleration:1 change:3 typical:1 except:1 principal:27 called:2 pas:2 grassman:1 support:1 dryden:1 srivastava:1 |
4,569 | 5,134 | Fast Algorithms for Gaussian Noise Invariant
Independent Component Analysis
Luis Rademacher
James Voss
Ohio State University
Ohio State University
Computer Science and Engineering,
Computer Science and Engineering,
2015 Neil Avenue, Dreese Labs 495.
2015 Neil Avenue, Dreese Labs 586.
Columbus, OH 43210
Columbus, OH 43210
[email protected]
[email protected]
Mikhail Belkin
Ohio State University
Computer Science and Engineering,
2015 Neil Avenue, Dreese Labs 597.
Columbus, OH 43210
[email protected]
Abstract
The performance of standard algorithms for Independent Component Analysis
quickly deteriorates under the addition of Gaussian noise. This is partially due
to a common first step that typically consists of whitening, i.e., applying Principal Component Analysis (PCA) and rescaling the components to have identity
covariance, which is not invariant under Gaussian noise.
In our paper we develop the first practical algorithm for Independent Component
Analysis that is provably invariant under Gaussian noise. The two main contributions of this work are as follows:
1. We develop and implement an efficient, Gaussian noise invariant decorrelation
(quasi-orthogonalization) algorithm using Hessians of the cumulant functions.
2. We propose a very simple and efficient fixed-point GI-ICA (Gradient Iteration
ICA) algorithm, which is compatible with quasi-orthogonalization, as well as with
the usual PCA-based whitening in the noiseless case. The algorithm is based on
a special form of gradient iteration (different from gradient descent). We provide
an analysis of our algorithm demonstrating fast convergence following from the
basic properties of cumulants. We also present a number of experimental comparisons with the existing methods, showing superior results on noisy data and very
competitive performance in the noiseless case.
1
Introduction and Related Works
In the Blind Signal Separation setting, it is assumed that observed data is drawn from an unknown
distribution. The goal is to recover the latent signals under some appropriate structural assumption.
A prototypical setting is the so-called cocktail party problem: in a room, there are d people speaking
simultaneously and d microphones, with each microphone capturing a superposition of the voices.
The objective is to recover the speech of each individual speaker. The simplest modeling assumption
is to consider each speaker as producing a signal that is a random variable independent of the others,
and to take the superposition to be a linear transformation independent of time. This leads to the
following formalization: We observe samples from a random vector x distributed according to the
equation x = As + b + ? where A is a linear mixing matrix, b ? Rd is a constant vector, s is a
latent random vector with independent coordinates, and ? is an unknown random noise independent
1
of s. For simplicity, we assume A ? Rd?d is square and of full rank. The latent components of s
are viewed as containing the information describing the makeup of the observed signal (voices of
individual speakers in the cocktail party setting). The goal of Independent Component Analysis is
to approximate the matrix A in order to recover the latent signal s. In practice, most methods ignore
the noise term, leaving the simpler problem of recovering the mixing matrix A when x = As is
observed.
Arguably the two most widely used ICA algorithms are FastICA [13] and JADE [6]. Both of these
algorithms are based on a two step process:
(1) The data is centered and whitened, that is, made to have identity covariance matrix. This is
typically done using principal component analysis (PCA) and rescaling the appropriate components.
In the noiseless case this procedure orthogonalizes and rescales the independent components and
thus recovers A up to an unknown orthogonal matrix R.
(2) Recover the orthogonal matrix R.
Most practical ICA algorithms differ only in the second step. In FastICA, various objective functions
are used to perform a projection pursuit style algorithm which recovers the columns of R one at a
time. JADE uses a fourth-cumulant based technique to simultaneously recover all columns of R.
Step 1 of ICA is affected by the addition of a Gaussian noise. Even if the noise is white (has a scalar
times identity covariance matrix) the PCA-based whitening procedure can no longer guarantee the
whitening of the underlying independent components. Hence, the second step of the process is no
longer justified. This failure may be even more significant if the noise is not white, which is likely to
be the case in many practical situations. Recent theoretical developments (see, [2] and [3]) consider
the case where the noise ? is an arbitrary (not necessarily white) additive Gaussian variable drawn
independently from s.
In [2], it was observed that certain cumulant-based techniques for ICA can still be applied for the
second step if the underlying signals can be orthogonalized.1 Orthogonalization of the latent signals (quasi-orthogonalization) is a significantly less restrictive condition as it does not force the
underlying signal to have identity covariance (as in whitening in the noiseless case). In the noisy
setting, the usual PCA cannot achieve quasi-orthogonalization as it will whiten the mixed signal, but
not the underlying components. In [3], we show how quasi-orthogonalization can be achieved in a
noise-invariant way through a method based on the fourth-order cumulant tensor. However, a direct
implementation of that method requires estimating the full fourth-order cumulant tensor, which is
computationally challenging even in relatively low dimensions. In this paper we derive a practical
version of that algorithm based on directional Hessians of the fourth univariate cumulant, thus reducing the complexity dependence on the data dimensionality from d4 to d3 , and also allowing for
a fully vectorized implementation.
We also develop a fast and very simple gradient iteration (not to be confused with gradient descent)
algorithm, GI-ICA, which is compatible with the quasi-orthogonalization step and can be shown to
have convergence of order r ? 1, when implemented using a univariate cumulant of order r. For the
cumulant of order four, commonly used in practical applications, we obtain cubic convergence. We
show how these convergence rates follow directly from the properties of the cumulants, which sheds
some light on the somewhat surprising cubic convergence seen in fourth-order based ICA methods
[13, 18, 22]. The update step has complexity O(N d) where N is the number of samples, giving a
total algorithmic complexity of O(N d3 ) for step 1 and O(N d2 t) for step 2, where t is the number
of iterations for convergence in the gradient iteration.
Interestingly, while the techniques are quite different, our gradient iteration algorithm turns out to
be closely related to Fast ICA in the noiseless setting, in the case when the data is whitened and the
cumulants of order three or four are used. Thus, GI-ICA can be viewed as a generalization (and a
conceptual simplification) of Fast ICA for more general quasi-orthogonalized data.
We present experimental results showing superior performance in the case of data contaminated
by Gaussian noise and very competitive performance for clean data. We also note that the GIICA algorithms are fast in practice, allowing us to process (decorrelate and detect the independent
1
This process of orthogonalizing the latent signals was called quasi-whitening in [2] and later in [3]. However, this conflicts with the definition of quasi-whitening given in [12] which requires the latent signals to be
whitened. To avoid the confusion we will use the term quasi-orthogonalization for the process of orthogonalizing the latent signals.
2
components) 100 000 points in dimension 5 in well under a second on a standard desktop computer.
Our Matlab implementation of GI-ICA is available for download at http://sourceforge.
net/projects/giica/.
Finally, we observe that our method is partially compatible with the robust cumulants introduced
in [20]. We briefly discuss how GI-ICA can be extended using these noise-robust techniques for
ICA to reduce the impact of sparse noise.
The paper is organized as follows. In section 2, we discuss the relevant properties of cumulants,
and discuss results from prior work which allows for the quasi-orthogonalization of signals with
non-zero fourth cumulant. In section 3, we discuss the connection between the fourth-order cumulant tensor method for quasi-orthogonalization discussed in section 2 with Hessian-based techniques
seen in [2] and [11]. We use this connection to create a more computationally efficient and practically implementable version of the quasi-orthogonalization algorithm discussed in section 2. In
section 4, we discuss new, fast, projection-pursuit style algorithms for the second step of ICA which
are compatible with quasi-orthogonalization. In order to simplify the presentation, all algorithms
are stated in an abstract form as if we have exact knowledge of required distribution parameters.
Section 5 discusses the estimators of required distribution parameters to be used in practice. Section
6 discusses numerical experiments demonstrating the applicability of our techniques.
Related Work. The name Independent Component Analysis refers to a broad range of algorithms
addressing the blind signal separation problem as well as its variants and extensions. There is an
extensive literature on ICA in the signal processing and machine learning communities due to its
applicability to a variety of important practical situations. For a comprehensive introduction see
the books [8, 14]. In this paper we develop techniques for dealing with noisy data by introducing
new and more efficient techniques for quasi-orthogonalization and subsequent component recovery.
The quasi-orthogonalization step was introduced in [2], where the authors proposed an algorithm
for the case when the fourth cumulants of all independent components are of the same sign. A
general algorithm with complete theoretical analysis was provided in [3]. That algorithm required
estimating the full fourth-order cumulant tensor.
We note that Hessian based techniques for ICA were used in [21, 2, 11], with [11] and [2] using the
Hessian of the fourth-order cumulant. The papers [21] and [11] proposed interesting randomized
one step noise-robust ICA algorithms based on the cumulant generating function and the fourth
cumulant respectively in primarily theoretical settings. The gradient iteration algorithm proposed is
closely related to the work [18], which provides a gradient-based algorithm derived from the fourth
moment with cubic convergence to learn an unknown parallelepiped in a cryptographic setting. For
the special case of the fourth cumulant, the idea of gradient iteration has appeared in the context
of FastICA with a different justification, see e.g. [16, Equation 11 and Theorem 2]. We also note
the work [12], which develops methods for Gaussian noise-invariant ICA under the assumption that
the noise parameters are known. Finally, there are several papers that considered the problem of
performing PCA in a noisy framework. [5] gives a provably robust algorithm for PCA under a
sparse noise model. [4] performs PCA robust to white Gaussian noise, and [9] performs PCA robust
to white Gaussian noise and sparse noise.
2
Using Cumulants to Orthogonalize the Independent Components
Properties of Cumulants: Cumulants are similar to moments and can be expressed in terms of
certain polynomials of the moments. However, cumulants have additional properties which allow
independent random variables to be algebraically separated. We will be interested in the fourth order
multi-variate cumulants, and univariate cumulants of arbitrary order. Denote by Qx the fourth order
cumulant tensor for the random vector x. So, (Qx )ijkl is the cross-cumulant between the random
variables xi , xj , xk , and xl , which we alternatively denote as Cum(xi , xj , xk , xl ). Cumulant tensors
are symmetric, i.e. (Qx )ijkl is invariant under permutations of indices. Multivariate cumulants have
the following properties (written in the case of fourth order cumulants):
1. (Multilinearity) Cum(?xi , xj , xk , xl ) = ? Cum(xi , xj , xk , xl ) for random vector x and scalar ?.
If y is a random variable, then Cum(xi +y, xj , xk , xl ) = Cum(xi , xj , xk , xl )+Cum(y, xj , xk , xl ).
2. (Independence) If xi and xj are independent random variables, then Cum(xi , xj , xk , xl ) = 0.
When x and y are independent, Qx+y = Qx + Qy .
3. (Vanishing Gaussian) Cumulants of order 3 and above are zero for Gaussian random variables.
3
The first order cumulant is the mean, and the second order multivariate cumulant is the covariance
matrix. We will denote by ?r (x) the order-r univariate cumulant, which is equivalent to the crosscumulant of x with itself r times: ?r (x) := Cum(x, x, . . . , x) (where x appears r times). Univariate
r-cumulants are additive for independent random variables, i.e. ?r (x + y) = ?r (x) + ?r (y), and
homogeneous of degree r, i.e. ?r (?x) = ?r ?r (x).
Quasi-Orthogonalization Using Cumulant Tensors. Recalling our original notation, x = As +
b + ? gives the generative ICA model. We define an operation of fourth-order tensors on matrices:
For Q ? Rd?d?d?d and M ? Rd?d , Q(M ) is the matrix such that
d X
d
X
Q(M )ij :=
Qijkl mlk .
(1)
k=1 l=1
We can use this operation to orthogonalize the latent random signals.
Definition 2.1. A matrix W is called a quasi-orthogonalization matrix if there exists an orthogonal
matrix R and a nonsingular diagonal matrix D such that W A = RD.
We will need the following results from [3]. Here we use Aq to denote the q th column of A.
Lemma 2.2. Let M ? Rd?d be an arbitrary matrix. Then, Qx (M ) = ADAT where D is a
diagonal matrix with entries dqq = ?4 (sq )ATq M Aq .
Theorem 2.3. Suppose that each component of s has non-zero fourth cumulant. Let M = Qx (I),
and let C = Qx (M ?1 ). Then C = ADAT where D is a diagonal matrix with entries dqq =
1/kAq k22 . In particular, C is positive definite, and for any factorization BB T of C, B ?1 is a quasiorthogonalization matrix.
3
Quasi-Orthogonalization using Cumulant Hessians
We have seen in Theorem 2.3 a tensor-based method which can be used to quasi-orthogonalize
observed data. However, this method na??vely requires the estimation of O(d4 ) terms from data.
There is a connection between the cumulant Hessian-based techniques used in ICA [2, 11] and
the tensor-based technique for quasi-orthogonalization described in Theorem 2.3 that allows the
tensor-method to be rewritten using a series of Hessian operations. We make this connection precise
below. The Hessian version requires only O(d3 ) terms to be estimated from data and simplifies the
computation to consist of matrix and vector operations.
Let Hu denote the Hessian operator with respect to a vector u ? Rd . The following lemma connects
Hessian methods with our tensor-matrix operation (a special case is discussed in [2, Section 2.1]).
Lemma 3.1. Hu (?4 (uT x)) = ADAT where dqq = 12(uT Aq )2 ?4 (sq ).
In Lemma 3.1, the diagonal entries can be rewritten as dqq = 12?4 (sq )(ATq (uuT )Aq ). By comparing with Lemma 2.2, we see that applying Qx against a symmetric, rank one matrix uuT can be
1
rewritten in terms of the Hessian operations: Qx (uuT ) = 12
Hu (?4 (uT x)). This formula extends
to arbitrary symmetric matrices by the following Lemma.
Lemma 3.2. Let M be a symmetric matrix with eigen decomposition U ?U T such that U =
Pd
1
T
(u1 , u2 , . . . , ud ) and ? = diag(?1 , ?2 , . . . , ?d ). Then, Qx (M ) = 12
i=1 ?i Hui ?4 (ui x).
The matrices I and M ?1 in Theorem 2.3 are symmetric. As such, the tensor-based method for
quasi-orthogonalization can be rewritten using Hessian operations. This is done in Algorithm 1.
4
Gradient Iteration ICA
In the preceding sections, we discussed techniques to quasi-orthogonalize data. For this section, we will assume that quasi-orthogonalization is accomplished, and discuss deflationary approaches that can quickly recover the directions of the independent components. Let W be a quasiorthogonalization matrix. Then, define y := W x = W As + W ?. Note that since ? is Gaussian
noise, so is W ?. There exists a rotation matrix R and a diagonal matrix D such that W A = RD.
Let ?s := Ds. The coordinates of ?s are still independent random variables. Gaussian noise makes
recovering the scaling matrix D impossible. We aim to recover the rotation matrix R.
4
Algorithm 1 Hessian-based algorithm to generate a quasi-orthogonalization matrix.
1: function F IND Q UASI O RTHOGONALIZATION M ATRIX(x)
Pd
1
T
2:
Let M = 12
i=1 Hu ?4 (u x)|u=ei . See Equation (4) for the estimator.
T
3:
Let U ?U give the eigendecomposition of M ?1
Pd
4:
Let C = i=1 ?i Hu ?4 (uT x)|u=Ui . See Equation (4) for the estimator.
5:
Factorize C as BB T .
6:
return B ?1
7: end function
To see why recovery of D is impossible, we note that a white Gaussian random variable ? 1 has
independent components. It is impossible to distinguish between the case where ? 1 is part of the
signal, i.e. W A(s + ? 1 ) + W ?, and the case where A? 1 is part of the additive Gaussian noise, i.e.
W As + W (A? 1 + ?), when s, ? 1 , and ? are drawn independently. In the noise-free ICA setting, the
latent signal is typically assumed to have identity covariance, placing the scaling information in the
columns of A. The presence of additive Gaussian noise makes recovery of the scaling information
impossible since the latent signals become ill-defined. Following the idea popularized in FastICA,
we will discuss a deflationary technique to recover the columns of R one at a time.
Fast Recovery of a Single Independent Component. In the deflationary approach, a function f is
fixed that acts upon a directional vector u ? Rd . Based on some criterion (typically maximization
or minimization of f ), an iterative optimization step is performed until convergence. This technique
was popularized in FastICA, which is considered fast for the following reasons:
1. As an approximate Newton method, FastICA requires computation of ?u f and a quick-tocompute estimate of (Hu (f ))?1 at each iterative step. Due to the estimate, the computation runs in
O(N d) time, where N is the number of samples.
2. The iterative step in FastICA has local quadratic order convergence using arbitrary functions, and global cubic-order convergence when using the fourth cumulant [13].
We note that cubic convergence rates are not unique to FastICA and have been seen using gradient descent (with the correct step size) when choosing f as the fourth moment [18]. Our proposed deflationary algorithm will be comparable with FastICA in terms of computational complexity, and the iterative step will take on a conceptually simpler form as it only relies on ∇_u κ_r. We provide a derivation of fast convergence rates that relies entirely on the properties of cumulants. As cumulants are invariant with respect to the additive Gaussian noise, the proposed methods will be admissible for both standard and noisy ICA.
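The noise invariance just mentioned is easy to check numerically. The sketch below estimates the (unnormalized) fourth cumulant κ4 = m4 − 3m2² of a unit-variance uniform source before and after adding independent Gaussian noise; the true value −1.2 is unchanged by the noise. The particular source, noise level, and sample size are illustrative choices, and the estimator is the simple biased moment version rather than the paper's unbiased k-statistic.

```python
import numpy as np

def kappa4(z):
    # Sample fourth cumulant via central moments (biased version; the unbiased
    # k-statistic of Section 5 differs by O(1/N) correction factors).
    z = z - z.mean()
    m2 = np.mean(z ** 2)
    m4 = np.mean(z ** 4)
    return m4 - 3.0 * m2 ** 2

rng = np.random.default_rng(0)
n = 1_000_000
s = rng.uniform(-np.sqrt(3.0), np.sqrt(3.0), size=n)  # unit variance, kappa4 = -1.2
g = rng.normal(0.0, 1.0, size=n)                      # Gaussian: kappa4 = 0

k_clean = kappa4(s)
k_noisy = kappa4(s + g)  # cumulants are additive and Gaussians vanish beyond order 2
```

Note that the variance of s + g is 2, yet κ4 is unchanged: cumulants are additive without any variance normalization, which is exactly why they are admissible contrast functions for noisy ICA.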
While cumulants are essentially unique with the additivity and homogeneity properties [17] when no restrictions are made on the probability space, the preprocessing step of ICA gives additional structure (like orthogonality and centering), providing additional admissible functions. In particular, [20] designs "robust cumulants" which are only minimally affected by sparse noise. Welling's robust cumulants have versions of the additivity and homogeneity properties, and are consistent with our update step. For this reason, we will state our results in greater generality.
Let G be a function of univariate random variables that satisfies the additivity, degree-r (r ≥ 3) homogeneity, and (for the noisy case) the vanishing Gaussians properties of cumulants. Then for a generic choice of input vector v, Algorithm 2 will demonstrate order r − 1 convergence. In particular, if G is κ3, then we obtain quadratic convergence; and if G is κ4, we obtain cubic convergence. Lemma 4.1 helps explain why this is true.
Lemma 4.1. ∇_v G(v · y) = r Σ_{i=1}^d (v · R_i)^{r-1} G(s̃_i) R_i.
If we consider what is happening in the basis of the columns of R, then up to some multiplicative constant, each coordinate is raised to the (r − 1)-th power and then renormalized during each step of Algorithm 2. This ultimately leads to the order r − 1 convergence.
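This coordinate-wise picture can be simulated directly. In the sketch below we take R to be the identity, so the update of Algorithm 2 with r = 4 reduces to cubing each coordinate (weighted by an assumed value G_i = G(s̃_i)) and renormalizing; the iterate collapses onto the coordinate h = argmax_i |v_i² G_i| predicted by Theorem 4.2, and the error shrinks cubically. The values of G and the starting vector are arbitrary illustrative choices.

```python
import numpy as np

# Assumed values of G(s~_i) for four hypothetical sources (r = 4 case).
G = np.array([2.0, -1.5, 3.0, 0.8])
v = np.array([0.6, 0.5, 0.4, 0.48])
v = v / np.linalg.norm(v)

# Theorem 4.2: the winning coordinate maximizes |(v . R_i)^{r-2} G(s~_i)|.
h = int(np.argmax(v ** 2 * np.abs(G)))

errs = []
for _ in range(8):
    v = G * v ** 3              # gradient update in the basis of R (here R = I)
    v = v / np.linalg.norm(v)   # renormalize
    errs.append(1.0 - abs(v[h]))
```

Printing `errs` shows the characteristic cubic collapse: each iteration roughly cubes the remaining error until machine precision.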
Theorem 4.2. If, for a unit vector input v to Algorithm 2, h = argmax_i |(v · R_i)^{r-2} G(s̃_i)| has a unique answer, then v has order r − 1 convergence to R_h up to sign. In particular, if the following conditions are met: (1) there exists a coordinate random variable s_i of s such that G(s_i) ≠ 0; and (2) the v inputted into Algorithm 2 is chosen uniformly at random from the unit sphere S^{d-1}; then Algorithm 2 converges to a column of R (up to sign) almost surely, and convergence is of order r − 1.
Algorithm 2 A fast algorithm to recover a single column of R when v is drawn generically from the unit sphere. Equations (2) and (3) provide k-statistic based estimates of ∇_v κ3 and ∇_v κ4, which can be used as practical choices of ∇_v G on real data.
1: function GI-ICA(v, y)
2:    repeat
3:       v ← ∇_v G(v^T y)
4:       v ← v / ||v||_2
5:    until convergence
6:    return v
7: end function
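A minimal sketch of this fixed-point iteration, using the population form of the gradient from Lemma 4.1 with G = κ4 (so ∇_v κ4(v^T y) = 4 Σ_i (v · R_i)³ κ4(s_i) R_i) instead of the sample estimator of Equation (3). The orthogonal matrix R and the source kurtoses are assumptions of the demo; the assertion checks convergence to a column of R up to sign.

```python
import numpy as np

rng = np.random.default_rng(1)
d = 3
R, _ = np.linalg.qr(rng.normal(size=(d, d)))  # assumed orthogonal mixing of whitened data
kappa4 = np.array([3.0, 2.0, 1.5])            # assumed source kurtoses

def grad_G(v):
    # Population gradient of kappa4(v^T y) for y = R s (Lemma 4.1 with r = 4).
    coeffs = 4.0 * (R.T @ v) ** 3 * kappa4
    return R @ coeffs

def gi_ica(v, grad, n_iter=30):
    for _ in range(n_iter):
        v = grad(v)
        v = v / np.linalg.norm(v)
    return v

v0 = rng.normal(size=d)
v = gi_ica(v0 / np.linalg.norm(v0), grad_G)
# distance to the nearest column of R, up to sign
dist = min(min(np.linalg.norm(v - R[:, i]), np.linalg.norm(v + R[:, i]))
           for i in range(d))
```

On real data one would substitute the k-statistic gradient estimator (2) or (3) for `grad_G`; the iteration itself is unchanged.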
Algorithm 3 Algorithm for ICA in the presence of Gaussian noise. Ã recovers A up to column order and scaling. R^T W is the demixing matrix for the observed random vector x.
function GaussianRobustICA(G, x)
   W = FindQuasiOrthogonalizationMatrix(x)
   y = W x
   R_columns = ∅
   for i = 1 to d do
      Draw v from S^{d-1} ∩ span(R_columns)^⊥ uniformly at random.
      R_columns = R_columns ∪ {GI-ICA(v, y)}
   end for
   Construct a matrix R using the elements of R_columns as columns.
   s̃ = R^T y
   Ã = (R^T W)^{-1}
   return Ã, s̃
end function
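The deflation loop can be sketched in the same population setting as above: after each component is found, the next starting vector is drawn from the orthogonal complement of the columns recovered so far, and a zero coordinate along an already-found column stays zero under the cubing update. As before, the orthogonal matrix R and the source kurtoses are assumed known only to build an exact gradient for illustration; the test checks that the recovered matrix matches R up to column order and signs.

```python
import numpy as np

rng = np.random.default_rng(2)
d = 4
R, _ = np.linalg.qr(rng.normal(size=(d, d)))  # assumed orthogonal mixing (whitened setting)
kappa4 = np.array([3.0, 2.5, 2.0, 1.5])       # assumed source kurtoses

def grad_G(v):
    # Population gradient of kappa4(v^T y) for y = R s.
    return R @ (4.0 * (R.T @ v) ** 3 * kappa4)

def gi_ica(v, n_iter=40):
    for _ in range(n_iter):
        v = grad_G(v)
        v = v / np.linalg.norm(v)
    return v

cols = []
for _ in range(d):
    v = rng.normal(size=d)
    for c in cols:                 # restrict the start to span(cols)^perp
        v -= (c @ v) * c
    v = gi_ica(v / np.linalg.norm(v))
    cols.append(v)

R_hat = np.column_stack(cols)
# |R_hat^T R| should be (numerically) a permutation matrix:
# columns of R are recovered up to order and sign.
P = np.abs(R_hat.T @ R)
```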
By convergence up to sign, we include the possibility that v oscillates between R_h and −R_h on alternating steps. This can occur if G(s̃_h) < 0 and r is even, since then the sign of the dominant coordinate flips on every update. Due to space limitations, the proof is omitted.
Recovering all Independent Components. As a corollary to Theorem 4.2 we get:
Corollary 4.3. Suppose R_1, R_2, . . . , R_k are known for some k < d. Suppose there exists i > k such that G(s_i) ≠ 0. If v is drawn uniformly at random from S^{d-1} ∩ span(R_1, . . . , R_k)^⊥, where S^{d-1} denotes the unit sphere in R^d, then Algorithm 2 with input v converges to a new column of R almost surely.
Since the indexing of R is arbitrary, Corollary 4.3 gives a solution to noisy ICA, in Algorithm
3. In practice (not required by the theory), it may be better to enforce orthogonality between the
columns of R, by orthogonalizing v against previously found columns of R at the end of each step
in Algorithm 2. We expect the fourth or third cumulant function will typically be chosen for G.
5  Time Complexity Analysis and Estimation of Cumulants
Implementing Algorithms 1 and 2 requires the estimation of functions from data. We will limit
our discussion to estimation of the third and fourth cumulants, as lower order cumulants are more
statistically stable to estimate than higher order cumulants. κ3 is useful in Algorithm 2 for non-symmetric distributions. However, since κ3(s_i) = 0 whenever s_i has a symmetric distribution, it is plausible that κ3 would not recover all columns of R. When s is suspected of being symmetric, it is prudent to use κ4 for G. Alternatively, one can fall back to κ4 from κ3 when κ3 is detected to be near 0.
Denote by z^{(1)}, z^{(2)}, . . . , z^{(N)} the observed samples of a random variable z. Given a sample, each cumulant can be estimated in an unbiased fashion by its k-statistic. Denote by k_r(z^{(i)}) the k-statistic sample estimate of κ_r(z). Letting m_r(z^{(i)}) := (1/N) Σ_{i=1}^N (z^{(i)} − z̄)^r give the r-th sample central moment, then

k_3(z^{(i)}) := N^2 m_3(z^{(i)}) / ((N−1)(N−2)),    k_4(z^{(i)}) := N^2 [(N+1) m_4(z^{(i)}) − 3(N−1) m_2(z^{(i)})^2] / ((N−1)(N−2)(N−3))
gives the third and fourth k-statistics [15]. However, we are interested in estimating the gradients (for Algorithm 2) and Hessians (for Algorithm 1) of the cumulants rather than the cumulants themselves. The following lemma shows how to obtain unbiased estimates:
Lemma 5.1. Let z be a d-dimensional random vector with finite moments up to order r. Let z^{(i)} be an iid sample of z. Let α ∈ N^d be a multi-index. Then ∂_u^α k_r(u · z^{(i)}) is an unbiased estimate for ∂_u^α κ_r(u · z).
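As a quick numerical sanity check of the k-statistics above: for an Exp(1) source the true cumulants are κ3 = 2 and κ4 = 6 (κ_r = (r−1)! for the exponential), and the estimators below recover them from a large sample. The sample size and tolerances are arbitrary demo choices.

```python
import numpy as np

def k_stats(z):
    # Unbiased k-statistic estimates of kappa3 and kappa4 from a 1-d sample,
    # following the formulas in the text.
    N = len(z)
    zc = z - z.mean()
    m2 = np.mean(zc ** 2)
    m3 = np.mean(zc ** 3)
    m4 = np.mean(zc ** 4)
    k3 = N ** 2 * m3 / ((N - 1) * (N - 2))
    k4 = N ** 2 * ((N + 1) * m4 - 3 * (N - 1) * m2 ** 2) / ((N - 1) * (N - 2) * (N - 3))
    return k3, k4

rng = np.random.default_rng(0)
z = rng.exponential(1.0, size=1_000_000)  # Exp(1): kappa_r = (r-1)!
k3, k4 = k_stats(z)
```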
If we mean-subtract (via the sample mean) all observed random variables, then the resulting estimates are:

∇_u k_3(u · y) = [3N / ((N−1)(N−2))] Σ_{i=1}^N (u · y^{(i)})^2 y^{(i)}    (2)

∇_u k_4(u · y) = [N^2 / ((N−1)(N−2)(N−3))] { (4(N+1)/N) Σ_{i=1}^N (u · y^{(i)})^3 y^{(i)} − (12(N−1)/N^2) ( Σ_{i=1}^N (u · y^{(i)}) y^{(i)} ) ( Σ_{i=1}^N (u · y^{(i)})^2 ) }    (3)

H_u k_4(u · x) = [12N^2 / ((N−1)(N−2)(N−3))] { ((N+1)/N) Σ_{i=1}^N (u · x^{(i)})^2 (x x^T)^{(i)} − ((N−1)/N^2) ( Σ_{i=1}^N (u · x^{(i)})^2 ) ( Σ_{i=1}^N (x x^T)^{(i)} ) − (2(N−1)/N^2) ( Σ_{i=1}^N (u · x^{(i)}) x^{(i)} ) ( Σ_{i=1}^N (u · x^{(i)}) x^{(i)} )^T }    (4)
Using (4) to estimate H_u κ4(u^T x) from data when implementing Algorithm 1, the resulting quasi-orthogonalization algorithm runs in O(N d^3) time. Using (2) or (3) to estimate ∇_u G(v^T y) (with G chosen to be κ3 or κ4 respectively) when implementing Algorithm 2 gives an update step that runs in O(N d) time. If t bounds the number of iterations to convergence in Algorithm 2, then O(N d^2 t) steps are required to recover all columns of R once quasi-orthogonalization has been achieved.
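The reconstructed estimator (2) can be verified mechanically: for mean-centered data, ∇_u k_3(u · y) computed in closed form must agree with a finite-difference gradient of the scalar k-statistic k_3(u · y). The sketch below performs this check on synthetic data (the data itself is arbitrary).

```python
import numpy as np

rng = np.random.default_rng(3)
N, d = 500, 4
y = rng.normal(size=(N, d))
y = y - y.mean(axis=0)          # mean-subtract, as assumed by Equation (2)

def k3_scalar(u):
    # Scalar k-statistic k3 of the projection u . y.
    p = y @ u
    m3 = np.mean((p - p.mean()) ** 3)
    return N ** 2 * m3 / ((N - 1) * (N - 2))

def grad_k3(u):
    # Equation (2): [3N / ((N-1)(N-2))] * sum_i (u . y_i)^2 y_i
    p = y @ u
    return 3.0 * N / ((N - 1) * (N - 2)) * (y.T @ p ** 2)

u = rng.normal(size=d)
g_analytic = grad_k3(u)
h = 1e-5
g_fd = np.array([(k3_scalar(u + h * e) - k3_scalar(u - h * e)) / (2 * h)
                 for e in np.eye(d)])
```

The same pattern (analytic formula vs. central differences) can be used to spot-check (3) and (4).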
6  Simulation Results
In Figure 1, we compare our algorithms to the baselines JADE [7] and versions of FastICA [10], using the code made available by the authors. Except for the choice of the contrast function for FastICA, the baselines were run using default settings. All tests were done using artificially generated data. In implementing our algorithms (available at [19]), we opted to enforce orthogonality during the update step of Algorithm 2 with previously found columns of R. In Figure 1, "comparison on 5 distributions" indicates that each of the independent coordinates was generated from a distinct distribution among the Laplace distribution, the Bernoulli distribution with parameter 0.5, the t-distribution with 5 degrees of freedom, the exponential distribution, and the continuous uniform distribution. Most of these distributions are symmetric, making GI-κ3 inadmissible.
When generating data for the ICA algorithm, we generate a random mixing matrix A with condition
number 10 (minimum singular value 1 and maximum singular value 10), and intermediate singular
values chosen uniformly at random. The noise magnitude indicates the strength of an additive white
Gaussian noise. We define 100% noise magnitude to mean variance 10, with 25% noise and 50%
noise indicating variances 2.5 and 5 respectively. Performance was measured using the Amari index introduced in [1]. Let B̃ denote the approximate demixing matrix returned by an ICA algorithm, and let M = B̃A. Then, the Amari index is given by:

E := Σ_{i=1}^n ( Σ_{j=1}^n |m_{ij}| / max_k |m_{ik}| − 1 ) + Σ_{j=1}^n ( Σ_{i=1}^n |m_{ij}| / max_k |m_{kj}| − 1 ).

The Amari index takes on values between 0 and the dimensionality d. It can be roughly viewed as the distance of M from the nearest scaled permutation matrix PD (where P is a permutation matrix and D is a diagonal matrix).
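The index is straightforward to implement; the sketch below follows the formula above and checks the two boundary behaviors mentioned in the text: the index is 0 exactly when M is a scaled permutation matrix, and positive otherwise. The example matrices are arbitrary.

```python
import numpy as np

def amari_index(M):
    # E = sum_i (sum_j |m_ij| / max_k |m_ik| - 1)
    #   + sum_j (sum_i |m_ij| / max_k |m_kj| - 1)
    A = np.abs(M)
    rows = np.sum(A.sum(axis=1) / A.max(axis=1) - 1.0)
    cols = np.sum(A.sum(axis=0) / A.max(axis=0) - 1.0)
    return rows + cols

P = np.array([[0.0, 2.0, 0.0],
              [0.0, 0.0, -0.5],
              [3.0, 0.0, 0.0]])   # a scaled permutation matrix: index 0
e_perm = amari_index(P)
e_mixed = amari_index(np.array([[1.0, 0.3, 0.0],
                                [0.0, 1.0, 0.2],
                                [0.1, 0.0, 1.0]]))  # residual mixing: index > 0
```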
From the noiseless data, we see that quasi-orthogonalization requires more data than whitening in order to provide accurate results. Once sufficient data is provided, all fourth order methods (GI-κ4, JADE, and κ4-FastICA) perform comparably. The difference between GI-κ4 and κ4-FastICA is not
[Figure 1: six panels of Amari index versus number of samples, for d = 5 and d = 10 under noiseless, 25%, and 50% noise magnitude conditions, comparing GI-κ4 (white), GI-κ4 (quasi-orthogonal), κ4-FastICA, log cosh-FastICA, and JADE.]
Figure 1: Comparison of ICA algorithms under various levels of noise. White and quasi-orthogonal refer to the choice of the first step of ICA. All baseline algorithms use whitening. Reported Amari indices denote the mean Amari index over 50 runs on different draws of both A and the data. d gives the data dimensionality, with two copies of each distribution used when d = 10.
statistically significant over 50 runs with 100 000 samples. We note that GI-κ4 under whitening and κ4-FastICA have the same update step (up to a slightly different choice of estimators), with GI-κ4 differing to allow for quasi-orthogonalization. Where provided, the error bars give a 2σ confidence interval on the mean Amari index. In all cases, error bars for our algorithms are provided, and error bars for the baseline algorithms are provided when they do not hinder readability.
It is clear that all algorithms degrade with the addition of Gaussian noise. However, GI-κ4 under quasi-orthogonalization degrades far less when given sufficient samples. For this reason, the quasi-orthogonalized GI-κ4 outperforms all other algorithms (given sufficient samples), including log cosh-FastICA, which performs best in the noiseless case. Contrasting the performance of GI-κ4 under whitening with itself under quasi-orthogonalization, it is clear that quasi-orthogonalization is necessary to be robust to Gaussian noise.
Run times were indeed reasonably fast. For 100 000 samples on the varied distributions (d = 5) with 50% Gaussian noise magnitude, GI-κ4 (including the orthogonalization step) had an average running time² of 0.19 seconds using PCA whitening, and 0.23 seconds under quasi-orthogonalization. The corresponding average numbers of iterations to convergence per independent component (at 0.0001 error) were 4.16 and 4.08. In the following table, we report the mean number of steps to convergence (per independent component) over the 50 runs for the 50% noise distribution (d = 5), and note that once sufficiently many samples were taken, the number of steps to convergence becomes remarkably small.
Number of data pts                     500      1000    5000   10000   50000   100000
whitening+GI-κ4: mean num steps        11.76    5.92    4.99   4.59    4.35    4.16
quasi-orth.+GI-κ4: mean num steps      213.92   65.95   4.48   4.36    4.06    4.08

7  Acknowledgments

This work was supported by NSF grant IIS 1117707.

² Using a standard desktop with an i7-2600 3.4 GHz CPU and 16 GB RAM.
References
[1] S. Amari, A. Cichocki, H. H. Yang, et al. A new learning algorithm for blind signal separation. Advances in Neural Information Processing Systems, pages 757–763, 1996.
[2] S. Arora, R. Ge, A. Moitra, and S. Sachdeva. Provable ICA with unknown Gaussian noise, with implications for Gaussian mixtures and autoencoders. In NIPS, pages 2384–2392, 2012.
[3] M. Belkin, L. Rademacher, and J. Voss. Blind signal separation in the presence of Gaussian noise. In JMLR W&CP, volume 30: COLT, pages 270–287, 2013.
[4] C. M. Bishop. Variational principal components. Proc. Ninth Int. Conf. on Artificial Neural Networks (ICANN), 1:509–514, 1999.
[5] E. J. Candès, X. Li, Y. Ma, and J. Wright. Robust principal component analysis? CoRR, abs/0912.3599, 2009.
[6] J. Cardoso and A. Souloumiac. Blind beamforming for non-Gaussian signals. In Radar and Signal Processing, IEE Proceedings F, volume 140, pages 362–370. IET, 1993.
[7] J.-F. Cardoso and A. Souloumiac. Matlab JADE for real-valued data v 1.8. http://perso.telecom-paristech.fr/~cardoso/Algo/Jade/jadeR.m, 2005. [Online; accessed 8-May-2013].
[8] P. Comon and C. Jutten, editors. Handbook of Blind Source Separation. Academic Press, 2010.
[9] X. Ding, L. He, and L. Carin. Bayesian robust principal component analysis. IEEE Transactions on Image Processing, 20(12):3419–3430, 2011.
[10] H. Gävert, J. Hurri, J. Särelä, and A. Hyvärinen. Matlab FastICA v 2.5. http://research.ics.aalto.fi/ica/fastica/code/dlcode.shtml, 2005. [Online; accessed 1-May-2013].
[11] D. Hsu and S. M. Kakade. Learning mixtures of spherical Gaussians: Moment methods and spectral decompositions. In ITCS, pages 11–20, 2013.
[12] A. Hyvärinen. Independent component analysis in the presence of Gaussian noise by maximizing joint likelihood. Neurocomputing, 22(1-3):49–67, 1998.
[13] A. Hyvärinen. Fast and robust fixed-point algorithms for independent component analysis. IEEE Transactions on Neural Networks, 10(3):626–634, 1999.
[14] A. Hyvärinen and E. Oja. Independent component analysis: Algorithms and applications. Neural Networks, 13(4-5):411–430, 2000.
[15] J. F. Kenney and E. S. Keeping. Mathematics of Statistics, Part 2. Van Nostrand, 1962.
[16] H. Li and T. Adali. A class of complex ICA algorithms based on the kurtosis cost function. IEEE Transactions on Neural Networks, 19(3):408–420, 2008.
[17] L. Mattner. What are cumulants? Documenta Mathematica, 4:601–622, 1999.
[18] P. Q. Nguyen and O. Regev. Learning a parallelepiped: Cryptanalysis of GGH and NTRU signatures. J. Cryptology, 22(2):139–160, 2009.
[19] J. Voss, L. Rademacher, and M. Belkin. Matlab GI-ICA implementation. http://sourceforge.net/projects/giica/, 2013. [Online].
[20] M. Welling. Robust higher order statistics. In Tenth International Workshop on Artificial Intelligence and Statistics, pages 405–412, 2005.
[21] A. Yeredor. Blind source separation via the second characteristic function. Signal Processing, 80(5):897–902, 2000.
[22] V. Zarzoso and P. Comon. How fast is FastICA. EUSIPCO, 2006.
Jiashi Feng
ECE Department
National University of Singapore
[email protected]
Huan Xu
ME Department
National University of Singapore
[email protected]
Shie Mannor
EE Department
Technion
[email protected]
Shuicheng Yan
ECE Department
National University of Singapore
[email protected]
Abstract
We consider the online Principal Component Analysis (PCA) where contaminated
samples (containing outliers) are revealed sequentially to the Principal Components (PCs) estimator. Due to their sensitivity to outliers, previous online PCA algorithms fail in this case, and their results can be arbitrarily skewed by the outliers. Here we propose the online robust PCA algorithm, which is able to steadily improve the PCs estimation upon an initial one, even when faced with a
constant fraction of outliers. We show that the final result of the proposed online
RPCA has an acceptable degradation from the optimum. Actually, under mild
conditions, online RPCA achieves the maximal robustness with a 50% breakdown
point. Moreover, online RPCA is shown to be efficient for both storage and computation, since it need not re-explore the previous samples as in traditional robust
PCA algorithms. This endows online RPCA with scalability for large scale data.
1
Introduction
In this paper, we investigate the problem of robust Principal Component Analysis (PCA) in an online
fashion. PCA aims to construct a low-dimensional subspace based on a set of principal components
(PCs) to approximate all the observed samples in the least-square sense [19]. Conventionally, it
computes PCs as the eigenvectors of the sample covariance matrix in batch mode, which is both computationally expensive and, in particular, memory exhausting when dealing with large scale data.
To address this problem, several online PCA algorithms have been developed in literature [15, 23,
10]. For online PCA, at each time instance, a new sample is revealed, and the PCs estimation is
updated accordingly without having to re-explore all previous samples. Significant advantages of
online PCA algorithms include independence of their storage space requirement of the number of
samples, and handling newly revealed samples quite efficiently.
Due to the quadratic loss used, PCA is notoriously sensitive to corrupted observations (outliers),
and the quality of its output can suffer severely in the face of even a few outliers. Therefore, much
work has been dedicated to robustifying PCA [12, 2, 24, 6]. However, all of these methods work
in batch mode and cannot handle sequentially revealed samples in the online learning framework.
For instance, [24] proposed a high-dimensional robust PCA (HR-PCA) algorithm that is based on
iterative performing PCA and randomized removal. Notice that the random removal process involves
calculating the order statistics over all the samples to obtain the removal probability. Therefore, all
samples must be stored in memory throughout the process. This hinders its application to large scale
data, for which storing all data is impractical.
1
In this work, we propose a novel online Robust PCA algorithm to handle contaminated sample set,
i.e., sample set that comprises both authentic samples (non-corrupted samples) and outliers (corrupted samples), which are revealed sequentially to the algorithm. Previous online PCA algorithms
generally fail in this case, since they update the PCs estimation through minimizing the quadratic
error w.r.t. every new sample and are thus sensitive to outliers. The outliers may manipulate the PCs
estimation severely and the result can be arbitrarily bad. In contrast, the proposed online RPCA is
shown to be robust to the outliers. This is achieved by a probabilistic admission/rejection procedure when a new sample comes. This is different from previous online PCA methods, where each and every new sample is admitted. The probabilistic admission/rejection procedure endows online RPCA with the ability to reject more outliers than authentic samples and thus alleviates the effect of outliers and robustifies the PCs estimation. Indeed, we show that given a proper initial estimation, online RPCA is able to steadily improve its output until convergence. We further bound the deviation of the final output from the optimal solution. In fact, under mild conditions, online RPCA can be resistant
to 50% outliers, namely having a 50% breakdown point. This is the maximal robustness that can be
achieved by any method.
Compared with previous robust PCA methods (typically works in batch mode), online RPCA only
needs to maintain a covariance matrix whose size is independent of the number of data points. Upon
accepting a newly revealed sample, online RPCA updates the PCs estimation accordingly without
re-exploring the previous samples. Thus, online RPCA can deal with large amounts of data with
low storage expense. This is in stark contrast with previous robust PCA methods which typically
requires to remember all samples. To the best of our knowledge, this is the first attempt to make
online PCA work for outlier-corrupted data, with theoretical performance guarantees.
2
Related Work
Standard PCA is performed in batch mode, and its high computational complexity may become
cumbersome for the large datasets. To address this issue, different online learning techniques have
been proposed, for example [1, 8], and many others.
Most of current online PCA methods perform the PCs estimation in an incremental manner [8, 18,
25]. They maintain a covariance matrix or current PCs estimation, and update it according to the
new sample incrementally. Those methods provide similar PCs estimation accuracy. Recently, a
randomized online PCA algorithm was proposed by [23], whose objective is to minimize the total
expected quadratic error minus the total error of the batch algorithm (i.e., the regret). However, none
of these online PCA algorithms is robust to the outliers.
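The covariance-maintaining scheme described above can be sketched in a few lines: the running covariance is updated with each new sample and the PCs are re-extracted as its top eigenvectors. This is the non-robust baseline that the contaminated-data setting breaks; the model, dimensions, and noise level below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
p, d, T = 10, 2, 5000
A = rng.normal(size=(p, d))        # assumed true loading matrix

C = np.zeros((p, p))
for t in range(1, T + 1):
    # new (authentic) sample z_t = A x_t + noise, revealed at time t
    y = A @ rng.normal(size=d) + 0.01 * rng.normal(size=p)
    C += (np.outer(y, y) - C) / t  # incremental update of the running covariance
# PCs = top-d eigenvectors of the running covariance
eigvals, eigvecs = np.linalg.eigh(C)
W = eigvecs[:, -d:]                # estimated PC basis (p x d)

# compare estimated subspace with the true column span of A
U, _ = np.linalg.qr(A)
subspace_err = np.linalg.norm(U - W @ (W.T @ U))
```

With outliers mixed into the stream, every `np.outer(y, y)` term enters the covariance with equal weight, which is exactly why this estimator can be skewed arbitrarily; online RPCA replaces the unconditional update by a probabilistic admission/rejection step.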
To overcome the sensitivity of PCA to outliers, many robust PCA algorithms have been proposed [21, 4, 12], which can be roughly categorized into two groups. They either pursue robust
estimation of the covariance matrix, e.g., the M-estimator [17], S-estimator [22], and Minimum Covariance Determinant (MCD) estimator [21], or directly maximize certain robust estimation of univariate variance for the projected observations [14, 3, 4, 13]. These algorithms inherit the robustness
characteristics of the adopted estimators and are qualitatively robust. However, none of them can
be directly applied in online learning setting. Recently, [24] and the following work [6] propose
high-dimensional robust PCA, which can achieve maximum 50% breakdown point. However, these
methods iteratively remove the observations or tunes the observations weights based on statistics
obtained from the whole data set. Thus, when a new data point is revealed, these methods need to
re-explore all of the data and become quite expensive in computation and in storage.
The two most closely related works to ours are the following. In [15], an incremental and robust subspace learning method is proposed. The method integrates M-estimation into the standard incremental PCA calculation. Specifically, each newly arriving data point is re-weighted by a pre-defined influence function [11] of its residual to the currently estimated subspace. However, no performance guarantee is provided in this work. Moreover, the performance of the proposed algorithm relies on the accuracy of the previously obtained PCs, and the error will inevitably accumulate.
Recently, a compressive sensing based recursive robust PCA algorithm was proposed in [20]. In this
work, the authors focused on the case where the outliers can be modeled as sparse vectors. In contrast, we do not impose any structural assumption on the outliers. Moreover, the proposed method
in [20] essentially solves compressive sensing optimization over a small batch of data to update the
PCs estimation instead of using a single sample, and it is not clear how to extend the method to the
2
latter case. Recently, He et al. propose an incremental gradient descent method on Grassmannian
manifold for solving the robust PCA problem, named GRASTA [9]. However, they also focus on a
different case from ours where the outliers are sparse vectors.
3  The Algorithm
3.1  Problem Setup
Given a set of observations {y_1, . . . , y_T} (here T can be finite or infinite) which are revealed sequentially, the goal of online PCA is to estimate and update the principal components (PCs) based on the newly revealed sample y_t at time instance t. The observations are a mixture of authentic samples (non-corrupted samples) and outliers (corrupted samples). The authentic samples z_i ∈ R^p are generated through a linear mapping z_i = A x_i + n_i, where the noise n_i is sampled from the normal distribution N(0, I_p) and the signals x_i ∈ R^d are i.i.d. samples of a random variable x with mean zero and variance I_d. Let μ denote the distribution of x. The matrix A ∈ R^{p×d} and the distribution μ are unknown. We assume μ is absolutely continuous w.r.t. the Borel measure and spherically symmetric, and that μ has light tails, i.e., there exists a constant C > 0 such that Pr(‖x‖ ≥ x̄) ≤ d exp(1 − Cx̄/√d) for all x̄ ≥ 0. The outliers are denoted as o_i ∈ R^p and in particular they are defined as follows.
Definition 1 (Outlier). A sample o_i ∈ R^p is an outlier w.r.t. the subspace spanned by {w_j}_{j=1}^d if it deviates from the subspace, i.e., Σ_{j=1}^d |w_j^T o_i|² ≤ ε_o.
In the above definition, we assume that both the basis vectors w_j and the outliers o are normalized (see Algorithm 1, step a), where all samples are ℓ2-normalized); thus we can directly use the inner product to define ε_o. Namely, a sample is called an outlier if it is distant from the underlying subspace of the signal. Apart from this assumption, the outliers are arbitrary. In this work, we are interested in the case where the outliers are mixed with the authentic samples uniformly in the data stream, i.e., for any sufficiently large subset of the dataset, the outlier fraction is identical.
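To make this setup concrete, the following Python sketch (our own illustration, not part of the paper) draws such a contaminated stream and checks Definition 1; the one-direction outlier model mirrors the simulations reported in Section 6:

```python
import numpy as np

def is_outlier(o, W, eps_o):
    """Definition 1: an l2-normalized sample o is an outlier w.r.t. span(W)
    when its energy inside the subspace, sum_j (w_j^T o)^2, is at most eps_o."""
    return float(np.sum((W.T @ o) ** 2)) <= eps_o

def generate_stream(A, T, lam, rng):
    """Draw a stream mixing authentic samples z = A x + n with outliers.

    A   : (p, d) signal matrix (unknown to the learner)
    T   : stream length
    lam : outlier fraction, mixed uniformly into the stream
    """
    p, d = A.shape
    n_out = int(round(lam * T))
    outlier_mask = rng.permutation(T) < n_out          # uniform mixing
    o_dir = rng.standard_normal(p)
    o_dir /= np.linalg.norm(o_dir)                     # one fixed outlier direction
    Y = np.empty((T, p))
    for t in range(T):
        if outlier_mask[t]:
            Y[t] = o_dir
        else:
            Y[t] = A @ rng.standard_normal(d) + rng.standard_normal(p)  # z = Ax + n
    return Y, outlier_mask
```

Here `generate_stream` and `is_outlier` are hypothetical helper names introduced only for this illustration.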
The input to the proposed online RPCA algorithm is the sequence of observations Y = {y_1, y_2, . . . , y_T}, which is the union of the authentic samples Z = {z_i} generated by the aforementioned linear model and the outliers O = {o_i}. The outlier fraction in the observations is denoted as λ. Online RPCA aims at learning the PCs robustly, and the learning process proceeds in time instances. At time instance t, online RPCA outputs a set of principal components {w_j^{(t)}}_{j=1}^d. The performance of the estimation is measured by the Expressed Variance (E.V.) [24]:

E.V. ≜ ( Σ_{j=1}^d w_j^{(t)T} A A^T w_j^{(t)} ) / ( Σ_{j=1}^d w̄_j^T A A^T w̄_j ).

Here, w̄_j denotes the true principal components of the matrix A. The E.V. represents the portion of the signal Ax expressed by {w_j^{(t)}}_{j=1}^d; thus, 1 − E.V. is the reconstruction error of the signal. The E.V. is a commonly used evaluation metric for PCA algorithms [24, 5]. It is always less than or equal to one, with equality achieved by a perfect recovery.
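The E.V. is straightforward to compute from its definition; the sketch below (ours, for illustration) takes orthonormal bases for the estimated and the true PCs:

```python
import numpy as np

def expressed_variance(W_est, W_true, A):
    """Expressed Variance: portion of the signal AA^T captured by the estimated
    PCs, relative to the true PCs (equals 1 for a perfect recovery)."""
    M = A @ A.T
    num = np.trace(W_est.T @ M @ W_est)    # sum_j w_j^(t)T A A^T w_j^(t)
    den = np.trace(W_true.T @ M @ W_true)  # sum_j wbar_j^T A A^T wbar_j
    return num / den
```

For example, when the signal lives on the first two coordinates, a basis covering only one of them recovers only the corresponding share of the signal energy.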
3.2 Online Robust PCA Algorithm
The details of the proposed online RPCA algorithm are shown in Algorithm 1. In the algorithm, the observation sequence Y = {y_1, y_2, . . . , y_T} is sequentially partitioned into (T′ + 1) batches {B_0, B_1, B_2, . . . , B_{T′}}, each consisting of b observations. Since the authentic samples and the outliers are mixed uniformly, the outlier fraction in each batch is also λ: each batch B_i contains (1 − λ)b authentic samples and λb outliers.
Note that this small-batch partition is only for ease of illustration and analysis. Since the algorithm only involves standard PCA computation, we can employ any incremental or online PCA method [8, 15] to update the PC estimate upon accepting a new sample; the maintained sample covariance matrix can simply be reset to zero every b time instances. Thus the batch partition is by no means necessary in a practical implementation. In the algorithm, the initial PC estimate can be obtained through standard PCA or robust PCA [24] on a mini-batch of the samples.
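As an illustration of such an initialization, a plain-PCA variant on the first batch can be sketched as follows (ours; a robust method such as batch HRPCA would replace the eigendecomposition step):

```python
import numpy as np

def init_pcs(B0, d):
    """Initial PC estimate via standard PCA on the first batch B0 (b x p rows).
    A robust variant would substitute batch RPCA/HRPCA for the eigen step."""
    B0 = B0 / np.linalg.norm(B0, axis=1, keepdims=True)  # l2-normalize samples
    _, vecs = np.linalg.eigh(B0.T @ B0)                  # PCA of the batch
    return vecs[:, -d:]                                  # leading d eigenvectors
```

`init_pcs` is a hypothetical helper name; it returns an orthonormal (p, d) basis.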
Algorithm 1 Online Robust PCA Algorithm
Input: Data sequence {y_1, . . . , y_T}, buffer size b.
Initialization: Partition the data sequence into small batches {B_0, B_1, . . . , B_{T′}}; each batch contains b data points. Perform PCA on the first batch B_0 and obtain the initial principal components {w_j^{(0)}}_{j=1}^d.
t = 1. w_j^* = w_j^{(0)}, for all j = 1, . . . , d.
while t ≤ T′ do
  1) Initialize the sample covariance matrix: C^{(t)} = 0.
  for i = 1 to b do
    a) Normalize the data point by its ℓ2-norm: y_i^{(t)} := y_i^{(t)} / ‖y_i^{(t)}‖_{ℓ2}.
    b) Calculate the variance of y_i^{(t)} along the directions w^{(t−1)}: δ_i^{(t)} = Σ_{j=1}^d |w_j^{(t−1)T} y_i^{(t)}|².
    c) Accept y_i^{(t)} with probability δ_i^{(t)}.
    d) Scale y_i^{(t)} as ỹ_i^{(t)} = y_i^{(t)} / √(b δ_i^{(t)}).
    e) If y_i^{(t)} is accepted, update C^{(t)} ← C^{(t)} + ỹ_i^{(t)} ỹ_i^{(t)T}.
  end for
  2) Perform an eigen-decomposition of C^{(t)} and obtain the leading d eigenvectors {w_j^{(t)}}_{j=1}^d.
  3) Update the PCs as w_j^* = w_j^{(t)}, for all j = 1, . . . , d.
  4) t := t + 1.
end while
Return w^*.
We now explain the intuition of the proposed online RPCA algorithm. Given an initial solution w^{(0)} which is "closer" to the true PC directions than to the outlier directions¹, the authentic samples will have larger variance along the current PC directions than the outliers. Thus, in the probabilistic data selection process (steps b) to d) of Algorithm 1), authentic samples are more likely to be accepted than outliers. Here, the scaling in step d) is important for obtaining an unbiased estimator (see the proof of Lemma 4 in the supplementary material and [16]). Therefore, in the subsequent PC update based on standard PCA on the accepted data, authentic samples contribute more than outliers, and the estimated PCs are gradually "moved" towards the true PCs. This process is repeated until convergence.
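The procedure can be sketched in a few lines of Python (our own illustration following steps a)-e) of Algorithm 1; it resets the covariance every batch and assumes an initial orthonormal estimate W0 is supplied, e.g. from PCA or HRPCA on the first batch):

```python
import numpy as np

def online_rpca(Y, d, b, W0, rng=None):
    """Online robust PCA sketch: probabilistic acceptance + standard batch PCA.

    Y  : (T, p) observation stream, consumed in batches of size b
    d  : number of principal components to track
    b  : buffer (small batch) size
    W0 : (p, d) initial orthonormal PC estimate
    """
    rng = rng or np.random.default_rng()
    W = W0
    p = Y.shape[1]
    for start in range(0, Y.shape[0] - b + 1, b):
        C = np.zeros((p, p))                       # 1) reset the sample covariance
        for y in Y[start:start + b]:
            y = y / np.linalg.norm(y)              # a) l2-normalize the sample
            delta = float(np.sum((W.T @ y) ** 2))  # b) variance along current PCs
            if rng.random() < delta:               # c) accept with probability delta
                y_s = y / np.sqrt(b * delta)       # d) rescale for unbiasedness
                C += np.outer(y_s, y_s)            # e) accumulate the covariance
        if C.any():                                # guard: skip an empty batch
            _, vecs = np.linalg.eigh(C)
            W = vecs[:, -d:]                       # 2) leading d eigenvectors
    return W
```

The empty-batch guard is our own defensive addition; the paper's analysis assumes enough samples are accepted in each batch.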
4 Main Results
In this section we present the theoretical performance guarantee of the proposed online RPCA algorithm (Algorithm 1). In the sequel, w_j^{(t)} denotes the solution at the t-th time instance. Without loss of generality, we assume the matrix A is normalized such that the E.V. of the true principal components w̄_j equals one, i.e., Σ_{j=1}^d w̄_j^T A A^T w̄_j = 1. The following theorem provides the performance guarantee of Algorithm 1 in the noisy case. The performance of w^{(t)} is measured by H(w^{(t)}) ≜ Σ_{j=1}^d ‖w_j^{(t)T} A‖². Let s = ‖x‖_2/‖n‖_2 denote the signal-noise ratio.
Theorem 1 (Noisy Case Performance). There exist constants c′_1, c′_2 which depend on the signal-noise ratio s, and ε_1, ε_2 > 0 which approach zero when s → ∞ or b → ∞, such that if the initial solution w^{(0)} in Algorithm 1 satisfies

Σ_{i=1}^{λb} Σ_{j=1}^{d} (w_j^{(0)T} o_i)² ≤ [(1 − λ)b(1 − ε_2) / (c′_2(1 − ε_o))] · [ (c′_1(1 − 2ε) − ε_1)²/4 − ε² ]

and

H(w^{(0)}) ≥ (c′_1(1 − 2ε) − ε_1)/2 − √( (c′_1(1 − 2ε) − ε_1)²/4 − ε² − c′_2(1 − ε_o) Σ_{i=1}^{λb} Σ_{j=1}^{d} (w_j^{(0)T} o_i)² / ((1 − λ)b(1 − ε_2)) ),

¹ In the following section, we will provide a precise description of the required closeness.
then the performance of the solution from Algorithm 1 will be improved in each iteration, and eventually converges to

lim_{t→∞} H(w^{(t)}) ≥ (c′_1(1 − 2ε) − ε_1)/2 + √( (c′_1(1 − 2ε) − ε_1)²/4 − ε² − c′_2(1 − ε_o) Σ_{i=1}^{λb} Σ_{j=1}^{d} (w_j^{(0)T} o_i)² / ((1 − λ)b(1 − ε_2)) ).

Here ε_1 decays as O(d^{1/2} b^{−1/2} s^{−1}), ε_2 decays as O(d^{1/2} b^{−1/2}), and c′_1 = (s − 1)²/(s + 1)², c′_2 = (1 + 1/s)⁴.
Remark 1. From Theorem 1, we can observe the following:
1. When the outliers vanish, the second term under the square root in the bound on H(w^{(t)}) is zero, and H(w^{(t)}) converges to (c′_1(1 − 2ε) − ε_1)/2 + √((c′_1(1 − 2ε) − ε_1)² − 4ε²)/2 < c′_1(1 − 2ε) − ε_1 < c′_1 < 1. Namely, the final performance is smaller than, but approximates, 1; here c′_1, ε_1, ε_2 capture the effect of the noise.
2. When s → ∞, the effect of the noise is eliminated: ε_1, ε_2 → 0 and c′_1 → 1, so H(w^{(t)}) converges to 1 − 2ε. Here ε depends on the ratio of the intrinsic dimension to the sample size, and accounts for the statistical bias due to performing PCA on a small portion of the data.
3. When the batch size increases to infinity, ε → 0, and H(w^{(t)}) converges to 1, meaning perfect recovery.
To further investigate the behavior of the proposed online RPCA in the presence of outliers, we consider the following noiseless case. In the noiseless case, the signal-noise ratio s → ∞, and thus c′_1, c′_2 → 1 and ε_1, ε_2 → 0. We can then immediately obtain the performance bound of Algorithm 1 for the noiseless case from Theorem 1.
Theorem 2 (Noiseless Case Performance). Suppose there is no noise. If the initial solution w^{(0)} in Algorithm 1 satisfies

Σ_{i=1}^{λb} Σ_{j=1}^{d} (w_j^{(0)T} o_i)² ≤ (1 − λ)b / (4(1 − ε_o))

and

H(w^{(0)}) ≥ 1/2 − √( 1/4 − Σ_{i=1}^{λb} Σ_{j=1}^{d} (w_j^{(0)T} o_i)² (1 − ε_o) / ((1 − λ)b) ),

then the performance of the solution from Algorithm 1 will be improved in each update, and eventually converges to

lim_{t→∞} H(w^{(t)}) ≥ 1/2 + √( 1/4 − Σ_{i=1}^{λb} Σ_{j=1}^{d} (w_j^{(0)T} o_i)² (1 − ε_o) / ((1 − λ)b) ).
Remark 2. Observe from Theorem 2 the following:
1. When the outliers are distributed on the ground-truth subspace, i.e., Σ_{j=1}^{d} |w̄_j^T o_i|² = 1, the conditions become Σ_{i=1}^{λb} Σ_{j=1}^{d} (w_j^{(0)T} o_i)² < ∞ and H(w^{(0)}) ≥ 0. Namely, for whatever initial solution, the final performance will converge to 1.
2. When the outliers are orthogonal to the ground-truth subspace, i.e., Σ_{j=1}^{d} |w̄_j^T o_i|² = 0, the conditions on the initial solution become Σ_{i=1}^{λb} Σ_{j=1}^{d} |w_j^{(0)T} o_i|² ≤ b(1 − λ)/4 and H(w^{(0)}) ≥ 1/2 − √(1/4 − Σ_{i=1}^{λb} Σ_{j=1}^{d} (w_j^{(0)T} o_i)² / ((1 − λ)b)). Hence, when the outlier fraction λ increases, the initial solution should be further away from the outliers.
3. When 0 < Σ_{j=1}^{d} |w̄_j^T o_i|² < 1, the performance of online RPCA is improved by at least 2√(1/4 − Σ_{i=1}^{λb} Σ_{j=1}^{d} (w_j^{(0)T} o_i)² (1 − ε_o) / ((1 − λ)b)) from its initial solution. Hence, when the initial solution is further from the outliers, the outlier fraction is smaller, or the outliers are closer to the ground-truth subspace, the improvement is more significant. Moreover, observe that given a proper initial solution, even if λ = 0.5 the performance of online RPCA still has a positive lower bound. Therefore, the breakdown point of online RPCA is 50%, the highest that any algorithm can achieve.
Discussion on the initial condition. In Theorems 1 and 2, a mild condition is imposed on the initial estimate. In practice, the initial estimate can be obtained by applying batch RPCA [6] or HRPCA [24] on a small subset of the data. These batch methods provide an initial estimate with a performance guarantee, which may satisfy the initial condition.
5 Proof of the Results
We briefly explain the proof of Theorem 1: we first show that as the PC estimate improves, the variance of the outliers along the PCs keeps decreasing. We then demonstrate that each PC update conducted by Algorithm 1 produces a better PC estimate and decreases the impact of the outliers. This improvement continues until convergence, and the final performance has bounded deviation from the optimum.

We provide here some concentration lemmas which are used in the proof of Theorem 1; the proofs of these lemmas are given in the supplementary material. We first show that, with high probability, both the largest and the smallest eigenvalues of the signals x_i in the original space converge to 1. This result is adopted from [24].
Lemma 1. There exists a constant c̄ that only depends on μ and d, such that for all γ > 0 and b signals {x_i}_{i=1}^{b}, the following holds with high probability:

sup_{w ∈ S_d} | (1/b) Σ_{i=1}^{b} (w^T x_i)² − 1 | ≤ ε,

where ε = c̄ γ √(d log³ b / b).
The next lemma concerns the sampling process of Algorithm 1, steps b) to d). Though the sampling is performed without replacement and the sampled observations are therefore not i.i.d., the following lemma provides concentration of the sampled observations.

Lemma 2 (Operator-Bernstein inequality [7]). Let {z′_i}_{i=1}^{m} be a subset of Z = {z_i}_{i=1}^{t} formed by randomly sampling without replacement from Z, as in Algorithm 1. Then the following statement holds:

| Σ_{i=1}^{m} w^T z′_i − E( Σ_{i=1}^{m} w^T z′_i ) | ≤ τ

with probability larger than 1 − 2 exp(−τ²/(4m)).
Given the result in Lemma 1, we obtain the following concentration properties of the authentic samples [24].

Lemma 3. If there exists ε such that

sup_{w ∈ S_d} | (1/t) Σ_{i=1}^{t} |w^T x_i|² − 1 | ≤ ε,

and the observations z_i are normalized by the ℓ2-norm, then for any w_1, . . . , w_d ∈ S_p, the following holds:

[ (1 − ε)H(w) − 2√((1 + ε)H(w))/s ] / (1/s + 1)² ≤ (1/t) Σ_{i=1}^{t} Σ_{j=1}^{d} (w_j^T z_i)² ≤ [ (1 + ε)H(w) + 2√((1 + ε)H(w))/s + 1/s² ] / (1/s − 1)²,

where H(w) = Σ_{j=1}^{d} ‖w_j^T A‖² and s is the signal-noise ratio.
Based on Lemma 2 and Lemma 3, we obtain the following concentration result for the observations selected in Algorithm 1.

Lemma 4. If there exists ε such that

sup_{w ∈ S_d} | (1/t) Σ_{i=1}^{t} |w^T x_i|² − 1 | ≤ ε,

and the observations {z′_i}_{i=1}^{m} are sampled from {z_i}_{i=1}^{t} as in Algorithm 1, then for any w_1, . . . , w_d ∈ S_p, with large probability, the following holds:

[ (1 − ε)H(w) − 2√((1 + ε)H(w))/s ] / ((1/s + 1)² b/m) − τ ≤ (1/t) Σ_{i=1}^{m} Σ_{j=1}^{d} (w_j^T z′_i)² ≤ [ (1 + ε)H(w) + 2√((1 + ε)H(w))/s + 1/s² ] / ((1/s − 1)² b/m) + τ,

where H(w) ≜ Σ_{j=1}^{d} ‖w_j^T A‖², s is the signal-noise ratio, m is the number of sampled observations in each batch, and τ > 0 is a small constant.
We denote by Z_t the set of accepted authentic samples and by O_t the set of accepted outliers from the t-th small batch. The following lemma estimates the number of accepted authentic samples |Z_t| and outliers |O_t|.

Lemma 5. For the currently obtained principal components {w_j^{(t−1)}}_{j=1}^{d}, the number of accepted authentic samples |Z_t| and the number of accepted outliers |O_t| satisfy

| |Z_t|/b − (1/b) Σ_{i=1}^{(1−λ)b} Σ_{j=1}^{d} (w_j^{(t−1)T} z_i)² | ≤ γ and | |O_t|/b − (1/b) Σ_{i=1}^{λb} Σ_{j=1}^{d} (w_j^{(t−1)T} o_i)² | ≤ γ

with probability at least 1 − e^{−2γ²b}. Here γ > 0 is a small constant, λ is the outlier fraction, and b is the size of the small batch.
From the above lemma, we can see that when the batch size b is sufficiently large, the above estimates of |Z_t| and |O_t| hold with large probability. In the following lemma, we show that when the algorithm improves the PC estimate, the impact of the outliers decreases accordingly.
Lemma 6. For an outlier o_i, an arbitrary orthogonal basis {w_j}_{j=1}^{d} and the ground-truth basis {w̄_j}_{j=1}^{d} which satisfy Σ_{j=1}^{d} w_j^T o_i ≤ Σ_{j=1}^{d} w̄_j^T o_i and Σ_{j=1}^{d} w_j^T w̄_j ≥ Σ_{j=1}^{d} w̄_j^T o_i, the value of Σ_{j=1}^{d} w_j^T o_i is a monotonically decreasing function of Σ_{j=1}^{d} w̄_j^T w_j.
Equipped with the above lemmas, we can proceed to prove Theorem 1. The details of the proof are deferred to the supplementary material due to the space limit.
6 Simulations
The numerical study aims to illustrate the performance of the online robust PCA algorithm. We follow the data generation method in [24]: we randomly generate a p × d matrix A and then scale its leading singular value to s, the signal-noise ratio. A λ fraction of outliers are generated. Since it is hard to determine the most adversarial outlier distribution, in the simulations we generate outliers concentrated on several directions deviating from the ground-truth subspace. This makes a rather adversarial case and is suitable for investigating the robustness of the proposed online RPCA algorithm. In the simulations, a total of T = 10,000 samples are generated to form the sample sequence. For each parameter setting, we report the average result of 20 tests along with the standard deviation.
The initial solution is obtained by performing batch HRPCA [24] on the first batch. Simulation results for p = 100, d = 1, s = 2, with outliers distributed on one direction, are shown in Figure 1. The proposed online RPCA takes around 0.5 seconds to process 10,000 samples of dimension 100 on a PC with a quad-core 2.83 GHz CPU and 8 GB of RAM. In contrast, HRPCA costs 237 seconds to achieve E.V. = 0.99. More simulation results for the d > 1 case are provided in the supplementary material due to the space limit.
From the results, we can make the following observations. Firstly, online RPCA improves the PC estimate steadily: as more samples are revealed, the E.V. of the online RPCA output keeps increasing. Secondly, online RPCA is rather robust to outliers. For example, the final result converges to E.V. ≈ 0.95 (HRPCA achieves 0.99) even with λ = 0.3 for the relatively low signal-noise ratio s = 2, as shown in Figure 1. To demonstrate the robustness of online RPCA to outliers more clearly, we implement the online PCA proposed in [23] as a baseline for the s = 2 case. The results are presented in Figure 1, from which we observe that the performance of online PCA drops due to its sensitivity to newly arriving outliers. When the outlier fraction λ ≥ 0.1, online PCA cannot recover the true PC directions and its performance drops as low as 0.
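This failure mode of plain PCA under concentrated outliers can be reproduced with a small self-contained check (ours, a toy version of the experimental setting with one outlier direction, not the paper's exact experiment):

```python
import numpy as np

rng = np.random.default_rng(0)
p = 100
n_auth, n_out = 7000, 3000                 # 30% outliers, as in the lam = 0.3 panel
true_pc = np.zeros(p); true_pc[0] = 1.0    # signal lives on e1
Z = np.outer(2.0 * rng.standard_normal(n_auth), true_pc) \
    + rng.standard_normal((n_auth, p))     # z = Ax + n with signal-noise ratio s = 2
o = np.zeros(p); o[1] = 1.0
O = np.tile(o, (n_out, 1))                 # outliers concentrated on one direction

def top_pc(X):
    X = X / np.linalg.norm(X, axis=1, keepdims=True)   # l2-normalize, as in Algorithm 1
    _, vecs = np.linalg.eigh(X.T @ X)
    return vecs[:, -1]

clean_pc = top_pc(Z)                       # plain PCA on authentic samples only
contam_pc = top_pc(np.vstack([Z, O]))      # plain PCA on the contaminated stream
print(abs(clean_pc @ true_pc), abs(contam_pc @ true_pc))
```

On clean data the top PC aligns with the true direction; with 30% outliers on one direction, plain PCA is dragged toward the outlier direction instead.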
[Figure 1 appears here: eight panels plotting E.V. versus the number of batches for online RPCA and online PCA, one panel for each outlier fraction λ ∈ {0.01, 0.03, 0.05, 0.08, 0.10, 0.15, 0.20, 0.30}.]
Figure 1: Performance comparison of online RPCA (blue line) with online PCA (red line). Here s = 2, p = 100, T = 10,000, d = 1. The outliers are distributed on a single direction.
7 Conclusions
In this work, we proposed an online robust PCA (online RPCA) algorithm for samples corrupted by outliers. Online RPCA alternates between standard PCA for updating the PCs and probabilistic selection of new samples, which alleviates the impact of the outliers. Theoretical analysis showed that online RPCA improves the PC estimate steadily and yields results with bounded deviation from the optimum. To the best of our knowledge, this is the first work to investigate the online robust PCA problem with a theoretical performance guarantee. The proposed algorithm can be applied to handle challenges imposed by modern big data analysis.
Acknowledgement
J. Feng and S. Yan are supported by the Singapore National Research Foundation under its International Research Centre @Singapore Funding Initiative and administered by the IDM Programme
Office. H. Xu is partially supported by the Ministry of Education of Singapore through AcRF Tier
Two grant R-265-000-443-112 and NUS startup grant R-265-000-384-133. S. Mannor is partially
supported by the Israel Science Foundation (under grant 920/12) and by the Intel Collaborative
Research Institute for Computational Intelligence (ICRI-CI).
References
[1] J.R. Bunch and C.P. Nielsen. Updating the singular value decomposition. Numerische Mathematik, 1978.
[2] E.J. Candes, X. Li, Y. Ma, and J. Wright. Robust principal component analysis? arXiv:0912.3599, 2009.
[3] C. Croux and A. Ruiz-Gazen. A fast algorithm for robust principal components based on
projection pursuit. In COMPSTAT, 1996.
[4] C. Croux and A. Ruiz-Gazen. High breakdown estimators for principal components: the
projection-pursuit approach revisited. Journal of Multivariate Analysis, 2005.
[5] A. d?Aspremont, F. Bach, and L. Ghaoui. Optimal solutions for sparse principal component
analysis. JMLR, 2008.
[6] J. Feng, H. Xu, and S. Yan. Robust PCA in high-dimension: A deterministic approach. In
ICML, 2012.
[7] David Gross and Vincent Nesme. Note on sampling without replacing from a finite collection
of matrices. arXiv preprint arXiv:1001.2738, 2010.
[8] P. Hall, D. Marshall, and R. Martin. Merging and splitting eigenspace models. TPAMI, 2000.
[9] Jun He, Laura Balzano, and John Lui. Online robust subspace tracking from partial information. arXiv preprint arXiv:1109.3827, 2011.
[10] P. Honeine. Online kernel principal component analysis: a reduced-order model. TPAMI,
2012.
[11] P.J. Huber, E. Ronchetti, and MyiLibrary. Robust statistics. John Wiley & Sons, New York,
1981.
[12] M. Hubert, P.J. Rousseeuw, and K.V. Branden. Robpca: a new approach to robust principal
component analysis. Technometrics, 2005.
[13] M. Hubert, P.J. Rousseeuw, and S. Verboven. A fast method for robust principal components
with applications to chemometrics. Chemometrics and Intelligent Laboratory Systems, 2002.
[14] G. Li and Z. Chen. Projection-pursuit approach to robust dispersion matrices and principal
components: primary theory and monte carlo. Journal of the American Statistical Association,
1985.
[15] Y. Li. On incremental and robust subspace learning. Pattern recognition, 2004.
[16] Michael W Mahoney. Randomized algorithms for matrices and data. arXiv preprint
arXiv:1104.5557, 2011.
[17] R.A. Maronna. Robust M-estimators of multivariate location and scatter. The Annals of Statistics, 1976.
[18] S. Ozawa, S. Pang, and N. Kasabov. A modified incremental principal component analysis for
on-line learning of feature space and classifier. PRICAI, 2004.
[19] K. Pearson. On lines and planes of closest fit to systems of points in space. Philosophical
Magazine, 1901.
[20] C. Qiu, N. Vaswani, and L. Hogben. Recursive robust PCA or recursive sparse recovery in large but structured noise. arXiv preprint arXiv:1211.3754, 2012.
[21] P.J. Rousseeuw. Least median of squares regression. Journal of the American Statistical Association, 1984.
[22] P.J. Rousseeuw and A.M. Leroy. Robust regression and outlier detection. John Wiley & Sons
Inc, 1987.
[23] M.K. Warmuth and D. Kuzmin. Randomized online PCA algorithms with regret bounds that are logarithmic in the dimension. JMLR, 2008.
[24] H. Xu, C. Caramanis, and S. Mannor. Principal component analysis with contaminated data:
The high dimensional case. In COLT, 2010.
[25] H. Zhao, P.C. Yuen, and J.T. Kwok. A novel incremental principal component analysis and its
application for face recognition. TSMC-B, 2006.
A near-optimal convex relaxation of sparse PCA
Vincent Q. Vu
The Ohio State University
[email protected]
Juhee Cho
University of Wisconsin, Madison
[email protected]
Jing Lei
Carnegie Mellon University
[email protected]
Karl Rohe
University of Wisconsin, Madison
[email protected]
Abstract
We propose a novel convex relaxation of sparse principal subspace estimation
based on the convex hull of rank-d projection matrices (the Fantope). The convex
problem can be solved efficiently using alternating direction method of multipliers (ADMM). We establish a near-optimal convergence rate, in terms of the sparsity, ambient dimension, and sample size, for estimation of the principal subspace
of a general covariance matrix without assuming the spiked covariance model.
In the special case of d = 1, our result implies the near-optimality of DSPCA
(d'Aspremont et al. [1]) even when the solution is not rank 1. We also provide a
general theoretical framework for analyzing the statistical properties of the method
for arbitrary input matrices that extends the applicability and provable guarantees
to a wide array of settings. We demonstrate this with an application to Kendall's
tau correlation matrices and transelliptical component analysis.
1 Introduction
Principal components analysis (PCA) is a popular technique for unsupervised dimension reduction
that has a wide range of applications: science, engineering, and any place where multivariate data
is abundant. PCA uses the eigenvectors of the sample covariance matrix to compute the linear
combinations of variables with the largest variance. These principal directions of variation explain
the covariation of the variables and can be exploited for dimension reduction. In contemporary
applications where variables are plentiful (large p) but samples are relatively scarce (small n), PCA
suffers from two major weaknesses : 1) the interpretability and subsequent use of the principal
directions is hindered by their dependence on all of the variables; 2) it is generally inconsistent in
high-dimensions, i.e. the estimated principal directions can be noisy and unreliable [see 2, and the
references therein].
Over the past decade, there has been a fever of activity to address the drawbacks of PCA with a class
of techniques called sparse PCA that combine the essence of PCA with the assumption that the phenomena of interest depend mostly on a few variables. Examples include algorithmic [e.g., 1, 3–10]
and theoretical [e.g., 11–14] developments. However, much of this work has focused on the first
principal component. One rationale behind this focus is by analogy with ordinary PCA: additional
components can be found by iteratively deflating the input matrix to account for variation uncovered
by previous components. However, the use of deflation with sparse PCA entails complications of
non-orthogonality, sub-optimality, and multiple tuning parameters [15]. Identifiability and consistency present more subtle issues. The principal directions of variation correspond to eigenvectors
of some population matrix ?. There is no reason to assume a priori that the d largest eigenvalues
1
of ? are distinct. Even if the eigenvalues are distinct, estimates of individual eigenvectors can be
unreliable if the gap between their eigenvalues is small. So it seems reasonable, if not necessary,
to de-emphasize eigenvectors and to instead focus on their span, i.e. the principal subspace of
variation.
There has been relatively little work on the problem of estimating the principal subspace, or even multiple eigenvectors simultaneously. Most works that do are limited to iterative deflation schemes or optimization problems whose global solution is intractable to compute. The sole exceptions are the diagonal thresholding method [2], which is just ordinary PCA applied to the subset of variables with largest marginal sample variance, and refinements such as iterative thresholding [16], which use diagonal thresholding as an initial estimate. These works are limited because they cannot be used when the variables have equal variances (e.g., correlation matrices). Theoretical results are equally limited in their applicability. Although the optimal minimax rates for the sparse principal subspace problem are known in both the spiked [17] and general [18] covariance models, existing statistical guarantees only hold under the restrictive spiked covariance model, which essentially guarantees that diagonal thresholding has good properties, or for estimators that are computationally intractable.
In this paper, we propose a novel convex optimization problem to estimate the $d$-dimensional principal subspace of a population matrix $\Sigma$ based on a noisy input matrix $S$. We show that if $S$ is a sample covariance matrix and the projection $\Pi$ of the $d$-dimensional principal subspace of $\Sigma$ depends only on $s$ variables, then with a suitable choice of regularization parameter, the Frobenius norm of the error of our estimator $\hat{X}$ is bounded with high probability:
$$|||\hat{X} - \Pi|||_2 = O\big( (\lambda_1/\delta)\, s \sqrt{\log p / n} \big),$$
where $\lambda_1$ is the largest eigenvalue of $\Sigma$ and $\delta$ is the gap between the $d$th and $(d+1)$th largest eigenvalues of $\Sigma$. This rate turns out to be nearly minimax optimal (Corollary 3.3), and under additional assumptions on signal strength, it also allows us to recover the support of the principal subspace (Theorem 3.2). Moreover, we provide easy-to-verify conditions (Theorem 3.3) that yield near-optimal statistical guarantees for other choices of input matrix, such as Pearson's correlation and Kendall's tau correlation matrices (Corollary 3.4).
Our estimator turns out to be a semidefinite program (SDP) that generalizes the DSPCA approach of [1] to $d \geq 1$ dimensions. It is based on a convex body, called the Fantope, that provides a tight relaxation for simultaneous rank and orthogonality constraints on the positive semidefinite cone. Solving the SDP is non-trivial. We find that an alternating direction method of multipliers (ADMM) algorithm [e.g., 19] can efficiently compute its global optimum (Section 4).
In summary, the main contributions of this paper are as follows.

1. We formulate the sparse principal subspace problem as a novel semidefinite program with a Fantope constraint (Section 2).
2. We show that the proposed estimator achieves a near optimal rate of convergence in subspace estimation without assumptions on the rank of the solution or restrictive spiked covariance models. This is a first for both d = 1 and d > 1 (Section 3).
3. We provide a general theoretical framework that accommodates other matrices, in addition to sample covariance, such as Pearson's correlation and Kendall's tau.
4. We develop an efficient ADMM algorithm to solve the SDP (Section 4), and provide numerical examples that demonstrate the superiority of our approach over deflation methods in both computational and statistical efficiency (Section 5).
The remainder of the paper explains each of these contributions in detail, but we defer all proofs to
Appendix A.
Related work. Existing work most closely related to ours is the DSPCA approach for single-component sparse PCA that was first proposed in [1]. Subsequently, there has been theoretical analysis under a spiked covariance model and restrictions on the entries of the eigenvectors [11], and algorithmic developments including block coordinate ascent [9] and ADMM [20]. The crucial difference with our work is that this previous work only considered $d = 1$. The $d > 1$ case requires invention and novel techniques to deal with a convex body, the Fantope, that has never before been used in sparse PCA.
Notation. For matrices $A$, $B$ of compatible dimension, $\langle A, B \rangle := \operatorname{tr}(A^T B)$ is the Frobenius inner product, and $|||A|||_2^2 := \langle A, A \rangle$ is the squared Frobenius norm. $\|x\|_q$ is the usual $\ell_q$ norm, with $\|x\|_0$ defined as the number of nonzero entries of $x$. $\|A\|_{a,b}$ is the $(a,b)$-norm, defined to be the $\ell_b$ norm of the vector of row-wise $\ell_a$ norms of $A$; e.g., $\|A\|_{1,\infty}$ is the maximum absolute row sum. For a symmetric matrix $A$, we define $\lambda_1(A) \geq \lambda_2(A) \geq \cdots$ to be the eigenvalues of $A$ with multiplicity. When the context is obvious we write $\lambda_j := \lambda_j(A)$ as shorthand. For two subspaces $\mathcal{M}_1$ and $\mathcal{M}_2$, $\sin \Theta(\mathcal{M}_1, \mathcal{M}_2)$ is defined to be the matrix whose diagonals are the sines of the canonical angles between the two subspaces [see 21, §VII].
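To make the matrix notation concrete, here is a small illustrative sketch in Python/NumPy (the function names are ours, not from the paper):

```python
import numpy as np

def frob_inner(A, B):
    # Frobenius inner product <A, B> := tr(A^T B)
    return np.trace(A.T @ B)

def norm_ab(A, a, b):
    # (a,b)-norm: the l_b norm of the vector of row-wise l_a norms of A
    row_norms = np.linalg.norm(A, ord=a, axis=1)
    return np.linalg.norm(row_norms, ord=b)

A = np.array([[3.0, 4.0],
              [0.0, 0.0],
              [1.0, 0.0]])
# row-wise l_2 norms are (5, 0, 1), so:
#   ||A||_{2,1} = 5 + 0 + 1 = 6,  and  ||A||_{2,0} = 2 nonzero rows
print(norm_ab(A, 2, 1))   # 6.0
print(norm_ab(A, 2, 0))   # 2.0
print(frob_inner(A, A))   # squared Frobenius norm = 26.0
```

Note that NumPy's vector norm with `ord=0` counts nonzero entries, which matches the $\|x\|_0$ convention used here.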
2 Sparse subspace estimation
Given a symmetric input matrix $S$, we propose a sparse principal subspace estimator $\hat{X}$ that is defined to be a solution of the semidefinite program
$$\text{maximize}\ \langle S, X \rangle - \lambda \|X\|_{1,1} \quad \text{subject to}\ X \in \mathcal{F}^d \tag{1}$$
in the variable $X$, where
$$\mathcal{F}^d := \{ X : 0 \preceq X \preceq I \ \text{and}\ \operatorname{tr}(X) = d \}$$
is a convex body called the Fantope [22, §2.3.2], and $\lambda \geq 0$ is a regularization parameter that encourages sparsity. When $d = 1$, the spectral norm bound in $\mathcal{F}^d$ is redundant and (1) reduces to the DSPCA approach of [1]. The motivation behind (1) is based on two key insights.
The first insight is a variational characterization of the principal subspace of a symmetric matrix. The sum of the $d$ largest eigenvalues of a symmetric matrix $A$ can be expressed as
$$\sum_{i=1}^d \lambda_i(A) \overset{(a)}{=} \max_{V^T V = I_d} \langle A, V V^T \rangle \overset{(b)}{=} \max_{X \in \mathcal{F}^d} \langle A, X \rangle. \tag{2}$$
Identity (a) is an extremal property known as Ky Fan's maximum principle [23]; (b) is based on the less well known observation that
$$\mathcal{F}^d = \operatorname{conv}(\{ V V^T : V^T V = I_d \}),$$
i.e. the extremal points of $\mathcal{F}^d$ are the rank-$d$ projection matrices. See [24] for proofs of both.
The second insight is a connection between the $(1,1)$-norm and a notion of subspace sparsity introduced by [18]. Any $X \succeq 0$ can be factorized (non-uniquely) as $X = V V^T$.

Lemma 2.1. If $X = V V^T$, then $\|X\|_{1,1} \leq \|V\|_{2,1}^2 \leq \|V\|_{2,0} \operatorname{tr}(X)$.

Consequently, any $X \in \mathcal{F}^d$ that has at most $s$ non-zero rows necessarily has $\|X\|_{1,1} \leq s^2 d$. Thus, $\|X\|_{1,1}$ is a convex relaxation of what [18] call row sparsity for subspaces.

These two insights reveal that (1) is a semidefinite relaxation of the non-convex problem
$$\text{maximize}\ \langle S, V V^T \rangle - \lambda \|V\|_{2,0}^2\, d \quad \text{subject to}\ V^T V = I_d.$$
[18] proposed solving an equivalent form of the above optimization problem and showed that the estimator corresponding to its global solution is minimax rate optimal under a general statistical model for $S$. Their estimator requires solving an NP-hard problem. The advantage of (1) is that it is computationally tractable.
Subspace estimation. The constraint $\hat{X} \in \mathcal{F}^d$ guarantees that its rank is $\geq d$. However, $\hat{X}$ need not be an extremal point of $\mathcal{F}^d$, i.e. a rank-$d$ projection matrix. In order to obtain a proper $d$-dimensional subspace estimate, we can extract the $d$ leading eigenvectors of $\hat{X}$, say $\hat{V}$, and form the projection matrix $\hat{\Pi} = \hat{V} \hat{V}^T$. The projection is unique, but the choice of basis is arbitrary. We can follow the convention of standard PCA by choosing an orthogonal matrix $O$ so that $(\hat{V} O)^T S (\hat{V} O)$ is diagonal, and take $\hat{V} O$ as the orthonormal basis for the subspace estimate.
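This basis-extraction convention is straightforward to implement; the following NumPy sketch (our own illustration, not code from the paper) extracts the leading eigenvectors of a solution and rotates them so that the basis diagonalizes $S$:

```python
import numpy as np

def subspace_basis(X_hat, S, d):
    # take the d leading eigenvectors of the SDP solution X_hat ...
    evals, evecs = np.linalg.eigh(X_hat)            # ascending eigenvalues
    V = evecs[:, np.argsort(evals)[::-1][:d]]
    # ... then choose an orthogonal O so that (V O)^T S (V O) is diagonal
    _, O = np.linalg.eigh(V.T @ S @ V)
    return V @ O

# example: X_hat is already a rank-2 projection, so its span is recovered exactly
rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.standard_normal((6, 6)))
U = Q[:, :2]                  # orthonormal basis of a 2-dimensional subspace
X_hat = U @ U.T
S = rng.standard_normal((6, 6))
S = (S + S.T) / 2             # a symmetric "input matrix"
V = subspace_basis(X_hat, S, 2)
```

The returned basis is orthonormal, spans the same subspace as the leading eigenvectors of `X_hat`, and diagonalizes the quadratic form of `S` restricted to that subspace.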
3 Theory
In this section we describe our theoretical framework for studying the statistical properties of $\hat{X}$ given by (1) with arbitrary input matrices that satisfy the following assumptions.

Assumption 1 (Symmetry). $S$ and $\Sigma$ are $p \times p$ symmetric matrices.

Assumption 2 (Identifiability). $\delta = \delta(\Sigma) = \lambda_d(\Sigma) - \lambda_{d+1}(\Sigma) > 0$.

Assumption 3 (Sparsity). The projection $\Pi$ onto the subspace spanned by the eigenvectors of $\Sigma$ corresponding to its $d$ largest eigenvalues satisfies $\|\Pi\|_{2,0} \leq s$, or equivalently, $\|\operatorname{diag}(\Pi)\|_0 \leq s$.
The key result (Theorem 3.1 below) implies that the statistical properties of the error of the estimator,
$$\Delta := \hat{X} - \Pi,$$
can be derived, in many cases, by routine analysis of the entrywise errors of the input matrix,
$$W := S - \Sigma.$$
There are two main ideas in our analysis of $\hat{X}$. The first is relating the difference in the values of the objective function in (1) at $\Pi$ and $\hat{X}$ to $\Delta$. The second is exploiting the decomposability of the regularizer. Conceptually, this is the same approach taken by [25] in analyzing the statistical properties of regularized $M$-estimators. It is worth noting that the curvature result in our problem comes from the geometry of the constraint set in (1). It is different from the "restricted strong convexity" in [25], a notion of curvature tailored for regularization in the form of penalizing an unconstrained convex objective.
3.1 Variational analysis on the Fantope
The first step of our analysis is to establish a bound on the curvature of the objective function along the Fantope and away from the truth.

Lemma 3.1 (Curvature). Let $A$ be a symmetric matrix and $E$ be the projection onto the subspace spanned by the eigenvectors of $A$ corresponding to its $d$ largest eigenvalues $\lambda_1 \geq \lambda_2 \geq \cdots$. If $\delta_A = \lambda_d - \lambda_{d+1} > 0$, then
$$\frac{\delta_A}{2}\, |||E - F|||_2^2 \leq \langle A, E - F \rangle$$
for all $F$ satisfying $0 \preceq F \preceq I$ and $\operatorname{tr}(F) = d$.

A version of Lemma 3.1 first appeared in [18] with the additional restriction that $F$ is a projection matrix. Our proof of the above extension is a minor modification of their proof.
The following is an immediate corollary of Lemma 3.1 and the Ky Fan maximal principle.

Corollary 3.1 (A $\sin \Theta$ theorem [18]). Let $A$, $B$ be symmetric matrices and $\mathcal{M}_A$, $\mathcal{M}_B$ be their respective $d$-dimensional principal subspaces. If $\delta_{A,B} = [\lambda_d(A) - \lambda_{d+1}(A)] \vee [\lambda_d(B) - \lambda_{d+1}(B)] > 0$, then
$$|||\sin \Theta(\mathcal{M}_A, \mathcal{M}_B)|||_2 \leq \frac{\sqrt{2}}{\delta_{A,B}}\, |||A - B|||_2.$$

The advantage of Corollary 3.1 over the Davis-Kahan theorem [see, e.g., 21, §VII.3] is that it does not require a bound on the differences between the eigenvalues of $A$ and the eigenvalues of $B$. This means that typical applications of the Davis-Kahan theorem require the additional invocation of Weyl's theorem. Our primary use of this result is to show that even if $\operatorname{rank}(\hat{X}) \neq d$, its principal subspace will be close to that of $\Pi$ if $\Delta$ is small.
Corollary 3.2 (Subspace error bound). If $\mathcal{M}$ is the principal $d$-dimensional subspace of $\Pi$ and $\hat{\mathcal{M}}$ is the principal $d$-dimensional subspace of $\hat{X}$, then
$$|||\sin \Theta(\mathcal{M}, \hat{\mathcal{M}})|||_2 \leq \sqrt{2}\, |||\Delta|||_2.$$
3.2 Deterministic error

With Lemma 3.1, it is straightforward to prove the following theorem.

Theorem 3.1 (Deterministic error bound). If $\lambda \geq \|W\|_{\infty,\infty}$ and $s \geq \|\Pi\|_{2,0}$, then
$$|||\Delta|||_2 \leq 4 s \lambda / \delta.$$

Theorem 3.1 holds for any global optimizer $\hat{X}$ of (1). It does not assume that the solution is rank-$d$ as in [11]. The next theorem gives a sufficient condition for support recovery by diagonal thresholding of $\hat{X}$.
Theorem 3.2 (Support recovery). For all $t > 0$,
$$\big| \{ j : \Pi_{jj} = 0,\ \hat{X}_{jj} \geq t \} \big| + \big| \{ j : \Pi_{jj} \geq 2t,\ \hat{X}_{jj} < t \} \big| \leq \frac{|||\Delta|||_2^2}{t^2}.$$
As a consequence, the variable selection procedure $\hat{J}(t) := \{ j : \hat{X}_{jj} \geq t \}$ succeeds if
$$\min_{j : \Pi_{jj} \neq 0} \Pi_{jj} \geq 2t > 2\, |||\Delta|||_2.$$
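In code, the selection rule $\hat{J}(t)$ is a one-line threshold on the diagonal of the solution; a small illustrative NumPy sketch (with a hypothetical toy matrix standing in for an actual SDP solution):

```python
import numpy as np

def select_support(X_hat, t):
    # J(t) := {j : X_hat[j, j] >= t}: variable selection by diagonal thresholding
    return np.flatnonzero(np.diag(X_hat) >= t)

# toy example: only the two strong diagonal entries survive the threshold t = 0.5
X_hat = np.diag([0.9, 0.8, 0.01, 0.0])
print(select_support(X_hat, 0.5))   # [0 1]
```

Theorem 3.2 says this rule recovers the support exactly once the smallest nonzero $\Pi_{jj}$ dominates twice the estimation error.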
3.3 Statistical properties

In this section we use Theorem 3.1 to derive the statistical properties of $\hat{X}$ in a generic setting where the entries of $W$ uniformly obey a restricted sub-Gaussian deviation inequality. This is not the most general result possible, but it allows us to illustrate the statistical properties of $\hat{X}$ for two different types of input matrices: sample covariance and Kendall's tau correlation. The former is the standard input for PCA; the latter has recently been shown to be a useful robust and nonparametric tool for high-dimensional graphical models [26].
Theorem 3.3 (General statistical error bound). If there exist $\sigma > 0$ and $n > 0$ such that $\Sigma$ and $S$ satisfy
$$\max_{ij}\ \mathbb{P}\big( |S_{ij} - \Sigma_{ij}| \geq t \big) \leq 2 \exp\big( -4 n t^2 / \sigma^2 \big) \tag{3}$$
for all $t \leq \sigma$, and
$$\lambda = \sigma \sqrt{\log p / n} \leq \sigma, \tag{4}$$
then
$$|||\hat{X} - \Pi|||_2 \leq \frac{4 \sigma}{\delta}\, s \sqrt{\log p / n}$$
with probability at least $1 - 2/p^2$.
Sample covariance. Consider the setting where the input matrix is the sample covariance matrix of a random sample of size $n > 1$ from a sub-Gaussian distribution. A random vector $Y$ with $\Sigma = \operatorname{Var}(Y)$ has sub-Gaussian distribution if there exists a constant $L > 0$ such that
$$\mathbb{P}\big( |\langle Y - \mathbb{E}Y, u \rangle| \geq t \big) \leq \exp\big( -L t^2 / \|\Sigma^{1/2} u\|_2^2 \big) \tag{5}$$
for all $u$ and $t \geq 0$. Under this condition we have the following corollary of Theorem 3.3.

Corollary 3.3. Let $S$ be the sample covariance matrix of an i.i.d. sample of size $n > 1$ from a sub-Gaussian distribution (5) with population covariance matrix $\Sigma$. If $\lambda$ is chosen to satisfy (4) with $\sigma = c \lambda_1$, then
$$|||\hat{X} - \Pi|||_2 \leq \frac{C \lambda_1}{\delta}\, s \sqrt{\log p / n}$$
with probability at least $1 - 2/p^2$, where $c$, $C$ are constants depending only on $L$.
Comparing with the minimax lower bounds derived in [17, 18], we see that the rate in Corollary 3.3 is roughly larger than the optimal minimax rate by a factor of
$$\sqrt{\lambda_1 / \lambda_{d+1}} \cdot \sqrt{s / d}.$$
The first term only becomes important in the near-degenerate case where $\lambda_{d+1} \ll \lambda_1$. It is possible with much more technical work to get sharp dependence on the eigenvalues, but we prefer to retain brevity and clarity in our proof of the version here. The second term is likely to be unimprovable without additional conditions on $S$ and $\Pi$ such as a spiked covariance model. Very recently, [14] showed in a testing framework with similar assumptions as ours when $d = 1$ that the extra factor $\sqrt{s}$ is necessary for any polynomial time procedure if the planted clique problem cannot be solved in randomized polynomial time.
Kendall's tau. Kendall's tau correlation provides a robust and nonparametric alternative to ordinary (Pearson) correlation. Given an $n \times p$ matrix whose rows are i.i.d. $p$-variate random vectors, the theoretical and empirical versions of Kendall's tau correlation matrix are
$$\tau_{ij} := \operatorname{Cor}\big( \operatorname{sign}(Y_{1i} - Y_{2i}),\ \operatorname{sign}(Y_{1j} - Y_{2j}) \big),$$
$$\hat{\tau}_{ij} := \frac{2}{n(n-1)} \sum_{s < t} \operatorname{sign}(Y_{si} - Y_{ti}) \operatorname{sign}(Y_{sj} - Y_{tj}).$$
A key feature of Kendall's tau is that it is invariant under strictly monotone transformations, i.e.
$$\operatorname{sign}(Y_{si} - Y_{ti}) \operatorname{sign}(Y_{sj} - Y_{tj}) = \operatorname{sign}(f_i(Y_{si}) - f_i(Y_{ti})) \operatorname{sign}(f_j(Y_{sj}) - f_j(Y_{tj})),$$
where $f_i$, $f_j$ are strictly monotone transformations. When $Y$ is multivariate Gaussian, there is also a one-to-one correspondence between $\tau_{ij}$ and $\rho_{ij} = \operatorname{Cor}(Y_{1i}, Y_{1j})$ [27]:
$$\tau_{ij} = \frac{2}{\pi} \arcsin(\rho_{ij}). \tag{6}$$
These two observations led [26] to propose using
$$\hat{T}_{ij} = \begin{cases} \sin\big( \frac{\pi}{2} \hat{\tau}_{ij} \big) & \text{if}\ i \neq j \\ 1 & \text{if}\ i = j \end{cases} \tag{7}$$
as an input matrix to Gaussian graphical model estimators in order to extend the applicability of those procedures to the wider class of nonparanormal distributions [28]. This same idea was extended to sparse PCA by [29]; they proposed and analyzed using $\hat{T}$ as an input matrix to the non-convex sparse PCA procedure of [13]. A shortcoming of that approach is that their theoretical guarantees only hold for the global solution of an NP-hard optimization problem. The following corollary of Theorem 3.3 rectifies the situation by showing that $\hat{X}$ with Kendall's tau is nearly optimal.
Corollary 3.4. Let $S = \hat{T}$ as defined in (7) for an i.i.d. sample of size $n > 1$, and let $\Sigma = T$ be the analogous quantity with $\tau_{ij}$ in place of $\hat{\tau}_{ij}$. If $\lambda$ is chosen to satisfy (4) with $\sigma = 8\pi$, then
$$|||\hat{X} - \Pi|||_2 \leq \frac{8 \sqrt{2}\, \pi}{\delta}\, s \sqrt{\log p / n}$$
with probability at least $1 - 2/p^2$.
Note that Corollary 3.4 only requires that $\hat{\tau}$ be computed from an i.i.d. sample. It does not specify the marginal distribution of the observations. So $\Sigma = T$ is not necessarily positive semidefinite and may be difficult to interpret. However, under additional conditions, the following lemma gives meaning to $T$ by extending (6) to a wide class of distributions, called transelliptical by [29], that includes the nonparanormal. See [29, 30] for further information.

Lemma ([29, 30]). If $(Y_{11}, \ldots, Y_{1p})$ has continuous distribution and there exist monotone transformations $f_1, \ldots, f_p$ such that
$$\big( f_1(Y_{11}), \ldots, f_p(Y_{1p}) \big)$$
has elliptical distribution with scatter matrix $\bar{\Sigma}$, then
$$T_{ij} = \bar{\Sigma}_{ij} \big/ \sqrt{\bar{\Sigma}_{ii}\, \bar{\Sigma}_{jj}}.$$
Moreover, if $f_j(Y_{1j})$, $j = 1, \ldots, p$, have finite variance, then $T_{ij} = \operatorname{Cor}\big( f_i(Y_{1i}), f_j(Y_{1j}) \big)$.

This lemma together with Corollary 3.4 shows that Kendall's tau can be used in place of the sample correlation matrix for a wide class of distributions without much loss of efficiency.
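The transformed Kendall's tau input matrix of equation (7) is easy to compute directly from data. The sketch below is our own illustration, implementing the empirical $\hat{\tau}_{ij}$ formula given above with a plain double loop for clarity rather than speed:

```python
import numpy as np

def kendall_tau(x, y):
    # empirical tau_hat: (2 / (n(n-1))) * sum_{s<t} sign(x_s - x_t) sign(y_s - y_t)
    n = len(x)
    total = 0.0
    for s in range(n):
        for t in range(s + 1, n):
            total += np.sign(x[s] - x[t]) * np.sign(y[s] - y[t])
    return 2.0 * total / (n * (n - 1))

def kendall_input_matrix(Y):
    # T_hat of equation (7): sin((pi/2) * tau_hat_ij) off the diagonal, 1 on it
    n, p = Y.shape
    T = np.eye(p)
    for i in range(p):
        for j in range(i + 1, p):
            T[i, j] = T[j, i] = np.sin(np.pi / 2 * kendall_tau(Y[:, i], Y[:, j]))
    return T

# monotone transformations leave tau (and hence T_hat) unchanged:
rng = np.random.default_rng(1)
x = rng.standard_normal(50)
Y = np.column_stack([x, np.exp(x), -x])
T = kendall_input_matrix(Y)
```

Because the second column is a strictly increasing transform of the first, their empirical tau is exactly 1, and the corresponding entry of `T` is $\sin(\pi/2) = 1$; the reversed third column gives $-1$.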
4 An ADMM algorithm

The chief difficulty in directly solving (1) is the interaction between the penalty and the Fantope constraint. Without either of these features, the optimization problem would be much easier. ADMM can exploit this fact if we first rewrite (1) as the equivalent equality constrained problem
$$\text{minimize}\ \infty \cdot 1_{\mathcal{F}^d}(X) - \langle S, X \rangle + \lambda \|Y\|_{1,1} \quad \text{subject to}\ X - Y = 0 \tag{8}$$
Algorithm 1 Fantope Projection and Selection (FPS)
Require: $S = S^T$, $d \geq 1$, $\lambda \geq 0$, $\rho > 0$, $\epsilon > 0$
  $Y^{(0)} \leftarrow 0$, $U^{(0)} \leftarrow 0$  [Initialization]
  repeat $t = 0, 1, 2, 3, \ldots$
    $X^{(t+1)} \leftarrow P_{\mathcal{F}^d}\big( Y^{(t)} - U^{(t)} + S/\rho \big)$  [Fantope projection]
    $Y^{(t+1)} \leftarrow S_{\lambda/\rho}\big( X^{(t+1)} + U^{(t)} \big)$  [Elementwise soft thresholding]
    $U^{(t+1)} \leftarrow U^{(t)} + X^{(t+1)} - Y^{(t+1)}$  [Dual variable update]
  until $\max\big( |||X^{(t)} - Y^{(t)}|||_2^2,\ \rho^2 |||Y^{(t)} - Y^{(t-1)}|||_2^2 \big) \leq d \epsilon^2$  [Stopping criterion]
  return $Y^{(t)}$
in the variables $X$ and $Y$, where $1_{\mathcal{F}^d}$ is the 0-1 indicator function for $\mathcal{F}^d$ and we adopt the convention $\infty \cdot 0 = 0$. The augmented Lagrangian associated with (8) has the form
$$L_\rho(X, Y, U) := \infty \cdot 1_{\mathcal{F}^d}(X) - \langle S, X \rangle + \lambda \|Y\|_{1,1} + \frac{\rho}{2}\Big( |||X - Y + U|||_2^2 - |||U|||_2^2 \Big), \tag{9}$$
where $U = (1/\rho) Z$ is the scaled ADMM dual variable and $\rho$ is the ADMM penalty parameter [see 19, §3.1]. ADMM consists of iteratively minimizing $L_\rho$ with respect to $X$, minimizing $L_\rho$ with respect to $Y$, and then updating the dual variable. Algorithm 1 summarizes the main steps.
In light of the separation of $X$ and $Y$ in (9) and some algebraic manipulation, the $X$ and $Y$ updates reduce to computing the proximal operators
$$P_{\mathcal{F}^d}\big( Y - U + S/\rho \big) := \arg\min_{X \in \mathcal{F}^d} \frac{1}{2} \big|\big|\big| X - (Y - U + S/\rho) \big|\big|\big|_2^2,$$
$$S_{\lambda/\rho}(X + U) := \arg\min_{Y} \frac{\lambda}{\rho} \|Y\|_{1,1} + \frac{1}{2} |||(X + U) - Y|||_2^2.$$
$S_{\lambda/\rho}$ is the elementwise soft thresholding operator [e.g., 19, §4.4.3] defined as
$$S_{\lambda/\rho}(x) = \operatorname{sign}(x) \max(|x| - \lambda/\rho,\ 0).$$
$P_{\mathcal{F}^d}$ is the Euclidean projection onto $\mathcal{F}^d$ and is given in closed form in the following lemma.

Lemma 4.1 (Fantope projection). If $X = \sum_i \lambda_i u_i u_i^T$ is a spectral decomposition of $X$, then $P_{\mathcal{F}^d}(X) = \sum_i \lambda_i^+(\theta)\, u_i u_i^T$, where $\lambda_i^+(\theta) = \min(\max(\lambda_i - \theta, 0), 1)$ and $\theta$ satisfies the equation $\sum_i \lambda_i^+(\theta) = d$.

Thus, computing $P_{\mathcal{F}^d}(X)$ involves an eigendecomposition of its argument, followed by modifying the eigenvalues by solving a monotone, piecewise linear equation.
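A direct NumPy implementation of this projection (our illustrative sketch; here the monotone piecewise-linear equation for $\theta$ is solved numerically by bisection rather than exactly):

```python
import numpy as np

def fantope_projection(X, d):
    # Euclidean projection onto F^d (Lemma 4.1): shift the eigenvalues by theta
    # and clip to [0, 1], with theta chosen so the clipped eigenvalues sum to d.
    evals, evecs = np.linalg.eigh(X)
    # the clipped sum decreases from p (at theta = min - 1) to 0 (at theta = max),
    # so a root of "clipped sum = d" can be bracketed and bisected
    lo, hi = evals.min() - 1.0, evals.max()
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if np.clip(evals - mid, 0.0, 1.0).sum() > d:
            lo = mid
        else:
            hi = mid
    gamma = np.clip(evals - 0.5 * (lo + hi), 0.0, 1.0)
    return (evecs * gamma) @ evecs.T

# example: projecting a diagonal matrix onto F^2 keeps the two largest
# eigen-directions (clipped to 1) and zeroes out the rest
P = fantope_projection(np.diag([3.0, 2.0, 0.5, -1.0]), 2)
```

The result always satisfies $0 \preceq P \preceq I$ and $\operatorname{tr}(P) = d$ up to the bisection tolerance.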
Rather than fix the ADMM penalty parameter $\rho$ in Algorithm 1 at some constant value, we recommend using the varying penalty scheme described in [19, §3.4.1], which dynamically updates $\rho$ after each iteration of the ADMM to keep the primal and dual residual norms (the two sums of squares in the stopping criterion of Algorithm 1) within a constant factor of each other. This eliminates an additional tuning parameter and, in our experience, yields faster convergence.
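Putting the pieces together, the following is a minimal NumPy sketch of Algorithm 1 with a fixed penalty $\rho$ (our illustration: the Fantope projection is repeated so the snippet is self-contained, and for simplicity it runs a fixed number of iterations rather than the stopping criterion or varying-$\rho$ scheme):

```python
import numpy as np

def fantope_projection(X, d):
    # projection onto F^d via Lemma 4.1 (theta found by bisection)
    evals, evecs = np.linalg.eigh(X)
    lo, hi = evals.min() - 1.0, evals.max()
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if np.clip(evals - mid, 0.0, 1.0).sum() > d:
            lo = mid
        else:
            hi = mid
    gamma = np.clip(evals - 0.5 * (lo + hi), 0.0, 1.0)
    return (evecs * gamma) @ evecs.T

def soft_threshold(A, t):
    # elementwise soft thresholding: sign(a) * max(|a| - t, 0)
    return np.sign(A) * np.maximum(np.abs(A) - t, 0.0)

def fps(S, d, lam, rho=1.0, n_iter=500):
    # ADMM iterations of Algorithm 1: Fantope projection, soft thresholding,
    # and dual update, starting from Y = U = 0
    p = S.shape[0]
    Y = np.zeros((p, p))
    U = np.zeros((p, p))
    for _ in range(n_iter):
        X = fantope_projection(Y - U + S / rho, d)
        Y = soft_threshold(X + U, lam / rho)
        U = U + X - Y
    return Y

# toy input: the two dominant diagonal directions should be picked up by F^2
S = np.diag([5.0, 4.0, 0.1, 0.1, 0.1, 0.1])
X_hat = fps(S, 2, lam=0.01)
```

On this toy problem the iterates settle on (approximately) the projection onto the span of the first two coordinates.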
5 Simulation results
We conducted a simulation study to compare the effectiveness of FPS against three deflation-based methods: DSPCA (which is just FPS with $d = 1$), GPower$_{\ell_1}$ [7], and SPC [5, 6]. These methods obtain multiple component estimates by taking the $k$th component estimate $\hat{v}_k$ from input matrix $S_k$, and then re-running the method with the deflated input matrix: $S_{k+1} = (I - \hat{v}_k \hat{v}_k^T) S_k (I - \hat{v}_k \hat{v}_k^T)$. The resulting $d$-dimensional principal subspace estimate is the span of $\hat{v}_1, \ldots, \hat{v}_d$. Tuning parameter selection can be much more complicated for these iterative deflation methods. In our simulations, we simply fixed the regularization parameter to be the same for all $d$ components.
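For reference, the deflation step used by these baseline methods is simple to write down; a short NumPy sketch (our illustration of the baselines' mechanism, not part of FPS itself):

```python
import numpy as np

def deflate(S, v):
    # S_{k+1} = (I - v v^T) S_k (I - v v^T): remove the component along unit vector v
    P = np.eye(S.shape[0]) - np.outer(v, v)
    return P @ S @ P

# deflating along an eigenvector annihilates exactly that direction
S = np.diag([3.0, 2.0, 1.0])
v = np.array([1.0, 0.0, 0.0])
S1 = deflate(S, v)
```

After deflation, `v` lies in the null space of `S1`, so re-running a single-component method on `S1` cannot return the same direction again; this is the source of the non-orthogonality and multiple-tuning-parameter complications noted earlier.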
We generated input matrices by sampling $n = 100$ i.i.d. observations from a $N_p(0, \Sigma)$, $p = 200$, distribution and taking $S$ to be the usual sample covariance matrix. We considered two different types of sparse $\Pi = V V^T$ of rank $d = 5$: those with disjoint support for the nonzero entries of the
[Figure 1: Mean squared error of FPS, DSPCA with deflation, GPower$_{\ell_1}$, and SPC across 100 replicates each of a variety of simulation designs with $n = 100$, $p = 200$, $d = 5$, $s \in \{10, 25\}$, noise $\sigma^2 \in \{1, 10\}$. Panels are indexed by support type (disjoint or shared), $s$, and noise level; the $x$-axis is the $(2,1)$-norm of the estimate and the $y$-axis is $\log(\mathrm{MSE})$.]
columns of $V$, and those with shared support. We generated $V$ by sampling its nonzero entries from a standard Gaussian distribution and then orthonormalizing $V$ while retaining the desired sparsity pattern. In both cases, the number of nonzero rows of $V$ is equal to $s \in \{10, 25\}$. We then embedded $\Pi$ inside the population covariance matrix $\Sigma = \bar{\lambda} \Pi + (I - \Pi) \Sigma_0 (I - \Pi)$, where $\Sigma_0$ is a Wishart matrix with $p$ degrees of freedom and the scale $\bar{\lambda} > 0$ is chosen so that the effective noise level (in the optimal minimax rate [18]), $\sigma^2 = \lambda_1 \lambda_{d+1} / (\lambda_d - \lambda_{d+1})$, lies in $\{1, 10\}$.
Figure 1 summarizes the resulting mean squared error $|||\hat{\Pi} - \Pi|||_2^2$ across 100 replicates for each of the different combinations of simulation parameters. Each method's regularization parameter varies over a range, and the $x$-axis shows the $(2,1)$-norm of the corresponding estimate. At the right extreme, all methods essentially correspond to standard PCA. It is clear that regularization is beneficial, because all the methods have significantly smaller MSE than standard PCA when they are sufficiently sparse. Comparing between methods, we see that FPS dominates in all cases, but the competition is much closer in the disjoint support case. Finally, all methods degrade when the number of active variables or the noise level increases.
6 Discussion
Estimating sparse principal subspaces in high dimensions poses both computational and statistical challenges. The contribution of this paper (a novel SDP-based estimator, an efficient algorithm, and strong statistical guarantees for a wide array of input matrices) is a significant leap forward on both fronts. Yet, there are newly open problems and many possible extensions related to this work. For instance, it would be interesting to investigate the performance of FPS under a weak, rather than exact, sparsity assumption on $\Pi$ (e.g., $\ell_q$ sparsity with $0 < q \leq 1$). The optimization problem (1) and the ADMM algorithm can easily be modified to handle other types of penalties. In some cases, extensions of Theorem 3.1 would require minimal modifications to its proof. Finally, the choices of the dimension $d$ and the regularization parameter $\lambda$ are of great practical interest. Techniques like cross-validation need to be carefully formulated and studied in the context of principal subspace estimation.
Acknowledgments
This research was supported in part by NSF grants DMS-0903120, DMS-1309998, BCS-0941518,
and NIH grant MH057881.
References

[1] A. d'Aspremont et al. "A direct formulation of sparse PCA using semidefinite programming". In: SIAM Review 49.3 (2007).
[2] I. M. Johnstone and A. Y. Lu. "On consistency and sparsity for principal components analysis in high dimensions". In: JASA 104.486 (2009), pp. 682-693.
[3] I. T. Jolliffe, N. T. Trendafilov, and M. Uddin. "A modified principal component technique based on the Lasso". In: JCGS 12 (2003), pp. 531-547.
[4] H. Zou, T. Hastie, and R. Tibshirani. "Sparse principal component analysis". In: JCGS 15.2 (2006), pp. 265-286.
[5] H. Shen and J. Z. Huang. "Sparse principal component analysis via regularized low rank matrix approximation". In: Journal of Multivariate Analysis 99 (2008), pp. 1015-1034.
[6] D. M. Witten, R. Tibshirani, and T. Hastie. "A penalized matrix decomposition, with applications to sparse principal components and canonical correlation analysis". In: Biostatistics 10 (2009), pp. 515-534.
[7] M. Journee et al. "Generalized power method for sparse principal component analysis". In: JMLR 11 (2010), pp. 517-553.
[8] B. K. Sriperumbudur, D. A. Torres, and G. R. G. Lanckriet. "A majorization-minimization approach to the sparse generalized eigenvalue problem". In: Machine Learning 85.1-2 (2011), pp. 3-39.
[9] Y. Zhang and L. E. Ghaoui. "Large-scale sparse principal component analysis with application to text data". In: NIPS 24. Ed. by J. Shawe-Taylor et al. 2011, pp. 532-539.
[10] X. Yuan and T. Zhang. "Truncated power method for sparse eigenvalue problems". In: JMLR 14 (2013), pp. 899-925.
[11] A. A. Amini and M. J. Wainwright. "High-dimensional analysis of semidefinite relaxations for sparse principal components". In: Ann. Statis. 37.5B (2009), pp. 2877-2921.
[12] A. Birnbaum et al. "Minimax bounds for sparse PCA with noisy high-dimensional data". In: Ann. Statis. 41.3 (2013), pp. 1055-1084.
[13] V. Q. Vu and J. Lei. "Minimax rates of estimation for sparse PCA in high dimensions". In: AISTATS 15. Ed. by N. Lawrence and M. Girolami. Vol. 22. JMLR W&CP. 2012, pp. 1278-1286.
[14] Q. Berthet and P. Rigollet. "Computational lower bounds for sparse PCA". In: (2013). arXiv: 1304.0828.
[15] L. Mackey. "Deflation methods for sparse PCA". In: NIPS 21. Ed. by D. Koller et al. 2009, pp. 1017-1024.
[16] Z. Ma. "Sparse principal component analysis and iterative thresholding". In: Ann. Statis. 41.2 (2013).
[17] T. T. Cai, Z. Ma, and Y. Wu. "Sparse PCA: optimal rates and adaptive estimation". In: Ann. Statis. (2013). To appear. arXiv: 1211.1309.
[18] V. Q. Vu and J. Lei. "Minimax sparse principal subspace estimation in high dimensions". In: Ann. Statis. (2013). To appear. arXiv: 1211.0373.
[19] S. Boyd et al. "Distributed optimization and statistical learning via the alternating direction method of multipliers". In: Foundations and Trends in Machine Learning 3.1 (2010), pp. 1-122.
[20] S. Ma. "Alternating direction method of multipliers for sparse principal component analysis". In: (2011). arXiv: 1111.6703.
[21] R. Bhatia. Matrix Analysis. Springer-Verlag, 1997.
[22] J. Dattorro. Convex Optimization & Euclidean Distance Geometry. Meboo Publishing USA, 2005.
[23] K. Fan. "On a theorem of Weyl concerning eigenvalues of linear transformations I". In: Proceedings of the National Academy of Sciences 35.11 (1949), pp. 652-655.
[24] M. Overton and R. Womersley. "On the sum of the largest eigenvalues of a symmetric matrix". In: SIAM Journal on Matrix Analysis and Applications 13.1 (1992), pp. 41-45.
[25] S. N. Negahban et al. "A unified framework for the high-dimensional analysis of M-estimators with decomposable regularizers". In: Statistical Science 27.4 (2012), pp. 538-557.
[26] H. Liu et al. "High-dimensional semiparametric Gaussian copula graphical models". In: Ann. Statis. 40.4 (2012), pp. 2293-2326.
[27] W. H. Kruskal. "Ordinal measures of association". In: JASA 53.284 (1958), pp. 814-861.
[28] H. Liu, J. Lafferty, and L. Wasserman. "The nonparanormal: semiparametric estimation of high dimensional undirected graphs". In: JMLR 10 (2009), pp. 2295-2328.
[29] F. Han and H. Liu. "Transelliptical component analysis". In: NIPS 25. Ed. by P. Bartlett et al. 2012, pp. 368-376.
[30] F. Lindskog, A. McNeil, and U. Schmock. "Kendall's tau for elliptical distributions". In: Credit Risk. Ed. by G. Bol et al. Contributions to Economics. Physica-Verlag HD, 2003, pp. 149-156.
One-shot learning and big data with n = 2
Dean P. Foster
University of Pennsylvania
Philadelphia, PA
[email protected]
Lee H. Dicker
Rutgers University
Piscataway, NJ
[email protected]
Abstract
We model a "one-shot learning" situation, where very few observations y₁, ..., y_n ∈ ℝ are available. Associated with each observation y_i is a very high-dimensional vector x_i ∈ ℝ^d, which provides context for y_i and enables us to predict subsequent observations, given their own context. One of the salient features of our analysis is that the problems studied here are easier when the dimension of x_i is large; in other words, prediction becomes easier when more context is provided. The proposed methodology is a variant of principal component regression (PCR). Our rigorous analysis sheds new light on PCR. For instance, we show that classical PCR estimators may be inconsistent in the specified setting, unless they are multiplied by a scalar c > 1; that is, unless the classical estimator is expanded. This expansion phenomenon appears to be somewhat novel and contrasts with shrinkage methods (c < 1), which are far more common in big data analyses.
1 Introduction
The phrase "one-shot learning" has been used to describe our ability, as humans, to correctly recognize and understand objects (e.g. images, words) based on very few training examples [1, 2]. Successful one-shot learning requires the learner to incorporate strong contextual information into the learning algorithm (e.g. information on object categories for image classification [1] or "function words" used in conjunction with a novel word and referent in word-learning [3]). Variants of one-shot learning have been widely studied in literature on cognitive science [4, 5], language acquisition (where a great deal of relevant work has been conducted on "fast-mapping") [3, 6-8], and computer vision [1, 9]. Many recent statistical approaches to one-shot learning, which have been shown to perform effectively in a variety of examples, rely on hierarchical Bayesian models, e.g. [1-5, 8].
In this article, we propose a simple latent factor model for one-shot learning with continuous outcomes. We propose effective methods for one-shot learning in this setting, and derive risk approximations that are informative in an asymptotic regime where the number of training examples n is fixed (e.g. n = 2) and the number of contextual features for each example d diverges. These approximations provide insight into the significance of various parameters that are relevant for one-shot learning. One important feature of the proposed one-shot setting is that prediction becomes "easier" when d is large; in other words, prediction becomes easier when more context is provided. Binary classification problems that are "easier" when d is large have been previously studied in the literature, e.g. [10-12]; this article may contain the first analysis of this kind with continuous outcomes.

The methods considered in this paper are variants of principal component regression (PCR) [13]. Principal component analysis (PCA) is the cornerstone of PCR. High-dimensional PCA (i.e. large d) has been studied extensively in recent literature, e.g. [14-22]. Existing work that is especially relevant for this paper includes that of Lee et al. [19], who studied principal component scores in high dimensions, and work by Hall, Jung, Marron and co-authors [10, 11, 18, 21], who have studied "high dimension, low sample size" data, with fixed n and d → ∞, in a variety of contexts, including PCA. While many of these results address issues that are clearly relevant for PCR (e.g. consistency or inconsistency of sample eigenvalues and eigenvectors in high dimensions), their precise implications for high-dimensional PCR are unclear.
In addition to addressing questions about one-shot learning, which motivate the present analysis, the results in this paper provide new insights into PCR in high dimensions. We show that the classical PCR estimator is generally inconsistent in the one-shot learning regime, where n is fixed and d → ∞. To remedy this, we propose a bias-corrected PCR estimator, which is obtained by expanding the classical PCR estimator (i.e. multiplying it by a scalar c > 1). Risk approximations obtained in Section 5 imply that the bias-corrected estimator is consistent when n is fixed and d → ∞. These results are supported by a simulation study described in Section 7, where we also consider an "oracle" PCR estimator for comparative purposes. It is noteworthy that the bias-corrected estimator is an expanded version of the classical estimator. Shrinkage, which would correspond to multiplying the classical estimator by a scalar 0 ≤ c < 1, is a far more common phenomenon in high-dimensional data analysis, e.g. [23-25] (however, expansion is not unprecedented; Lee et al. [19] argued for bias-correction via expansion in the analysis of principal component scores).
2 Statistical setting

Suppose that the observed data consists of (y₁, x₁), ..., (y_n, x_n), where y_i ∈ ℝ is a scalar outcome and x_i ∈ ℝ^d is an associated d-dimensional "context" vector, for i = 1, ..., n. Suppose that y_i and x_i are related via

    y_i = h_i θ + ξ_i ∈ ℝ,   h_i ~ N(0, η²),  ξ_i ~ N(0, σ²),                  (1)
    x_i = h_i τ√d u + ε_i ∈ ℝ^d,   ε_i ~ N(0, δ² I),  i = 1, ..., n.          (2)

The random variables h_i, ξ_i and the random vectors ε_i = (ε_{i1}, ..., ε_{id})ᵀ, 1 ≤ i ≤ n, are all assumed to be independent; h_i is a latent factor linking the outcome y_i and the vector x_i; ξ_i and ε_i are random noise. The unit vector u = (u₁, ..., u_d)ᵀ ∈ ℝ^d and real numbers θ, τ ∈ ℝ are taken to be non-random. It is implicit in our normalization that the "x-signal" ‖h_i τ√d u‖₂ ∝ √d is quite strong.

Observe that (y_i, x_i) ~ N(0, V) are jointly normal with

    V = [ θ²η² + σ²      θη²τ√d uᵀ
          θη²τ√d u       δ²I + η²τ²d uuᵀ ].                                    (3)

To further simplify notation in what follows, let y = (y₁, ..., y_n)ᵀ = hθ + ξ ∈ ℝⁿ, where h = (h₁, ..., h_n)ᵀ, ξ = (ξ₁, ..., ξ_n)ᵀ ∈ ℝⁿ, and let X = (x₁ ⋯ x_n)ᵀ = τ√d huᵀ + E, where E = (ε_{ij})_{1≤i≤n, 1≤j≤d}.

Given the observed data (y, X), our objective is to devise prediction rules ŷ : ℝ^d → ℝ so that the risk

    R_V(ŷ) = E_V{ŷ(x_new) − y_new}² = E_V{ŷ(x_new) − h_new θ}² + σ²           (4)

is small, where (y_new, x_new) = (h_new θ + ξ_new, h_new τ√d u + ε_new) has the same distribution as (y_i, x_i) and is independent of (y, X). The subscript "V" in R_V and E_V indicates that the parameters θ, η, σ, τ, δ, u are specified by V, as in (3); similarly, we will write P_V(·) to denote probabilities with the parameters specified by V.

We are primarily interested in identifying methods ŷ that perform well (i.e. R_V(ŷ) is small) in an asymptotic regime whose key features are (i) n is fixed, (ii) d → ∞, (iii) σ² → 0, and (iv) inf η²τ²/δ² > 0. We suggest that this regime reflects a one-shot learning setting, where n is small and d is large (captured by (i)-(ii) from the previous sentence), and there is abundant contextual information for predicting future outcomes (which is ensured by (iii)-(iv)). In a specified asymptotic regime (not necessarily the one-shot regime), we say that a prediction method ŷ is consistent if R_V(ŷ) → 0. Weak consistency is another type of consistency that is considered below. We say that ŷ is weakly consistent if |ŷ − y_new| → 0 in probability. Clearly, if ŷ is consistent, then it is also weakly consistent.
3
Principal component regression
By assumption,
the data (yi , xi ) are multivariate normal. Thus, EV (yi |xi ) = xTi ?, where ? =
?
??? 2 du/(? 2 + ? 2 ? 2 d). This suggests studying linear prediction rules of the form y?(xnew ) =
? for some estimator ?
? of ?. In this paper, we restrict our attention to linear prediction rules,
xTnew ?,
focusing on estimators related to principal component regression (PCR).
? 1 , ..., u
? n?d deLet l1 ? ? ? ? ? ln?d ? 0 denote the ordered n largest eigenvalues of X T X and let u
? 1 , ..., u
? n?d are also referred to as the ?principal
note corresponding eigenvectors with unit length; u
? k ) be the d ? k matrix with columns given by u
? 1 , ..., u
?k,
components? of X. Let Uk = (?
u1 ? ? ? u
for 1 ? k ? n ? d. In its most basic form, principal component regression involves regressing y
? = Uk (U T X T XUk )?1 U T X T y. In the problem
on XUk for some (typically small) k, and taking ?
k
k
considered here the predictor covariance matrix Cov(xi ) = ? 2 I + ? 2 ? 2 duuT has a single eigenvalue larger than ? 2 and the corresponding eigenvector is parallel to ?. Thus, it is natural to restrict
our attention to PCR with k = 1; more explicitly, consider
?
?
pcr =
? T1 X T y
u
?1
u
T
?1 XT Xu
?1
u
=
1 T T
? X y?
u
u1 .
l1 1
(5)
?
In the following sections, we study consistency and risk properties of ?
pcr and related estimators.
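The k = 1 PCR fit in (5) takes a few lines numerically: the leading right singular vector of X plays the role of û₁, and l₁ is the squared top singular value. This is a minimal sketch on synthetic data drawn from the single-factor model; the function and variable names are illustrative:

```python
import numpy as np

def pcr_k1(X, y):
    """Classical k = 1 PCR estimator from (5): beta = (u1' X' y / l1) * u1."""
    # The top right singular vector of X is the leading eigenvector of X'X,
    # and the top eigenvalue l1 of X'X is the squared top singular value.
    _, s, Vt = np.linalg.svd(X, full_matrices=False)
    u1, l1 = Vt[0], s[0] ** 2
    return (u1 @ X.T @ y / l1) * u1

# Synthetic data from the single-factor model, parameters as in Section 7.
rng = np.random.default_rng(0)
n, d = 9, 500
theta, eta2, sigma2, tau2, delta2 = 4.0, 4.0, 0.1, 0.25, 1.0
u = np.zeros(d); u[0] = 1.0
h = rng.normal(0, np.sqrt(eta2), n)
X = np.outer(h, np.sqrt(tau2 * d) * u) + rng.normal(0, np.sqrt(delta2), (n, d))
y = h * theta + rng.normal(0, np.sqrt(sigma2), n)
beta_pcr = pcr_k1(X, y)
print(beta_pcr.shape)  # (500,)
```

Note that the product û₁ᵀXᵀy · û₁ is invariant to the sign of û₁, so the sign ambiguity of singular vectors does not affect the estimator.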
4 Weak consistency and big data with n = 2

Before turning our attention to risk approximations for PCR in Section 5 below (which contains the paper's main technical contributions), we discuss weak consistency in the one-shot asymptotic regime, devoting special attention to the case where n = 2. This serves at least two purposes. First, it provides an illustrative warm-up for the more complex risk bounds obtained in Section 5. Second, it will become apparent below that the risk of the consistent PCR methods studied in this paper depends on inverse moments of χ² random variables. For very small n, these inverse moments do not exist and, consequently, the risk of the associated prediction methods may be infinite. The main implication of this is that the risk bounds in Section 5 require n ≥ 9 to ensure their validity. On the other hand, the weak consistency results obtained in this section are valid for all n ≥ 2.
4.1 Heuristic analysis for n = 2

Recall the PCR estimator (5) and let ŷ_pcr(x) = xᵀβ̂_pcr be the associated linear prediction rule. For n = 2, the largest eigenvalue of XᵀX and the corresponding eigenvector are given by simple explicit formulas:

    l₁ = (1/2)[ ‖x₁‖² + ‖x₂‖² + √{(‖x₁‖² − ‖x₂‖²)² + 4(x₁ᵀx₂)²} ]

and û₁ = v̂₁/‖v̂₁‖₂, where

    v̂₁ = (1/(2x₁ᵀx₂))[ ‖x₁‖² − ‖x₂‖² + √{(‖x₁‖² − ‖x₂‖²)² + 4(x₁ᵀx₂)²} ] x₁ + x₂.

These expressions for l₁ and û₁ yield an explicit expression for β̂_pcr when n = 2 and facilitate a simple heuristic analysis of PCR, which we undertake in this subsection. This analysis suggests that ŷ_pcr is not consistent when σ² → 0 and d → ∞ (at least for n = 2). However, the analysis also suggests that consistency can be achieved by multiplying β̂_pcr by a scalar c ≥ 1; that is, by expanding β̂_pcr. This observation leads us to consider and rigorously analyze a bias-corrected PCR method, which we ultimately show is consistent in fixed n settings, if σ² → 0 and d → ∞. On the other hand, it will also be shown below that ŷ_pcr is inconsistent in one-shot asymptotic regimes.

For large d, the basic approximations ‖x_i‖² ≈ τ²dh_i² + δ²d and x₁ᵀx₂ ≈ τ²dh₁h₂ lead to the following approximation for ŷ_pcr(x_new):

    ŷ_pcr(x_new) = x_newᵀβ̂_pcr ≈ [τ²(h₁² + h₂²) / {τ²(h₁² + h₂²) + δ²}] h_new θ + e_pcr,   (6)

where

    e_pcr = [τ√d h_new / {τ²d(h₁² + h₂²) + δ²d}] û₁ᵀXᵀξ.

Thus,

    ŷ_pcr(x_new) − y_new ≈ −[δ² / {τ²(h₁² + h₂²) + δ²}] h_new θ + e_pcr − ξ_new.   (7)

The second and third terms on the right-hand side in (7), e_pcr − ξ_new, represent a random error that vanishes as d → ∞ and σ² → 0. On the other hand, the first term on the right-hand side in (7), −δ²h_new θ/{τ²(h₁² + h₂²) + δ²}, is a bias term that is, in general, non-zero when d → ∞ and σ² → 0; in other words ŷ_pcr is inconsistent. This bias is apparent in the expression for ŷ_pcr(x_new) given in (6); in particular, the first term on the right-hand side of (6) is typically smaller than h_new θ. One way to correct for the bias of ŷ_pcr is to multiply β̂_pcr by

    l₁/(l₁ − l₂) ≈ {τ²(h₁² + h₂²) + δ²} / {τ²(h₁² + h₂²)} ≥ 1,

where

    l₂ = (1/2)[ ‖x₁‖² + ‖x₂‖² − √{(‖x₁‖² − ‖x₂‖²)² + 4(x₁ᵀx₂)²} ] ≈ δ²d

is the second-largest eigenvalue of XᵀX. Define the bias-corrected principal component regression estimator

    β̂_bc = (l₁/(l₁ − l₂)) β̂_pcr = (1/(l₁ − l₂)) û₁ᵀXᵀy · û₁

and let ŷ_bc(x) = xᵀβ̂_bc be the associated linear prediction rule. Then ŷ_bc(x_new) = x_newᵀβ̂_bc ≈ h_new θ + e_bc, where

    e_bc = [h_new / {τ√d (h₁² + h₂²)}] û₁ᵀXᵀξ.

One can check that if d → ∞, σ² → 0 and θ, η², τ², δ² are well-behaved (e.g. contained in a compact subset of (0, ∞)), then ŷ_bc(x_new) − y_new ≈ e_bc → 0 in probability; in other words, ŷ_bc is weakly consistent. Indeed, weak consistency of ŷ_bc follows from Theorem 1 below. On the other hand, note that E|e_bc| = ∞. This suggests that R_V(ŷ_bc) = ∞, which in fact may be confirmed by direct calculation. Thus, when n = 2, ŷ_bc is weakly consistent, but not consistent.
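The closed-form expressions for l₁, l₂, and v̂₁ above can be checked directly against a numerical eigendecomposition; a quick sketch (variable names illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
d = 200
x1, x2 = rng.normal(size=d), rng.normal(size=d)
X = np.vstack([x1, x2])

# Closed-form top eigenpair of X'X for n = 2.
a, b, c = x1 @ x1, x2 @ x2, x1 @ x2
disc = np.sqrt((a - b) ** 2 + 4 * c ** 2)
l1 = 0.5 * (a + b + disc)
l2 = 0.5 * (a + b - disc)
v1 = ((a - b + disc) / (2 * c)) * x1 + x2
u1 = v1 / np.linalg.norm(v1)

# Numerical check: the eigenvalues of the 2x2 Gram matrix XX' are exactly
# the nonzero eigenvalues l1, l2 of X'X, and X'X u1 = l1 u1.
w = np.linalg.eigvalsh(X @ X.T)
print(np.allclose(sorted(w), sorted([l2, l1])))  # True
print(np.allclose(X.T @ (X @ u1), l1 * u1))      # True
```

Working with the 2×2 Gram matrix XXᵀ avoids ever forming the d×d matrix XᵀX, which is why the n = 2 case admits these simple formulas.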
4.2 Weak consistency for bias-corrected PCR

Now suppose that n ≥ 2 is arbitrary and that d ≥ n. Define the bias-corrected PCR estimator

    β̂_bc = (l₁/(l₁ − l_n)) β̂_pcr = (1/(l₁ − l_n)) û₁ᵀXᵀy · û₁               (8)

and the associated linear prediction rule ŷ_bc(x) = xᵀβ̂_bc. The main weak consistency result of the paper is given below.

Theorem 1. Suppose that n ≥ 2 is fixed and let C ⊆ (0, ∞) be a compact set. Let r > 0 be an arbitrary but fixed positive real number. Then

    lim_{d→∞, σ²→0}  sup_{θ,η,τ,δ ∈ C; u ∈ ℝ^d}  P_V{|ŷ_bc(x_new) − y_new| > r} = 0.   (9)

On the other hand,

    lim inf_{d→∞, σ²→0}  inf_{θ,η,τ,δ ∈ C; u ∈ ℝ^d}  P_V{|ŷ_pcr(x_new) − y_new| > r} > 0.   (10)

A proof of Theorem 1 follows easily upon inspection of the proof of Theorem 2, which may be found in the Supplementary Material. Theorem 1 implies that in the specified fixed n asymptotic setting, bias-corrected PCR is weakly consistent (9) and that the more standard PCR method ŷ_pcr is inconsistent (10). Note that the condition θ, η, τ, δ ∈ C in (9) ensures that the x-data signal-to-noise ratio η²τ²/δ² is bounded away from 0. In (8), it is noteworthy that l₁/(l₁ − l_n) ≥ 1: in order to achieve (weak) consistency, the bias-corrected estimator β̂_bc is obtained by expanding β̂_pcr. By contrast, shrinkage is a far more common method for obtaining improved estimators in many regression and prediction settings (the literature on shrinkage estimation is vast, perhaps beginning with [23]).
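A sketch of the bias-corrected estimator in (8), which expands the classical fit by the factor l₁/(l₁ − l_n) (illustrative names; assumes d ≥ n so that l_n is the smallest nonzero eigenvalue of XᵀX):

```python
import numpy as np

def pcr_bias_corrected(X, y):
    """Bias-corrected k = 1 PCR from (8): beta_bc = (u1' X' y / (l1 - ln)) * u1."""
    n = X.shape[0]
    _, s, Vt = np.linalg.svd(X, full_matrices=False)
    u1 = Vt[0]
    l1, ln = s[0] ** 2, s[n - 1] ** 2  # largest and n-th largest eigenvalues of X'X
    return (u1 @ X.T @ y / (l1 - ln)) * u1

# Since l1/(l1 - ln) >= 1, the corrected estimator is an inflated version of
# the classical one. Synthetic data with the Section 7 parameter values:
rng = np.random.default_rng(2)
n, d = 9, 2000
u = np.zeros(d); u[0] = 1.0
h = rng.normal(0, 2.0, n)                                         # eta^2 = 4
X = np.outer(h, 0.5 * np.sqrt(d) * u) + rng.normal(size=(n, d))   # tau^2 = 1/4, delta^2 = 1
y = 4.0 * h + rng.normal(0, np.sqrt(0.1), n)                      # theta = 4, sigma^2 = 1/10
beta_bc = pcr_bias_corrected(X, y)
```

The only change relative to (5) is the denominator l₁ − l_n in place of l₁, so the two estimators are always parallel.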
5 Risk approximations and consistency

In this section, we present risk approximations for ŷ_pcr and ŷ_bc that are valid when n ≥ 9. A more careful analysis may yield approximations that are valid for smaller n; however, this is not pursued further here.
Theorem 2. Let W_n ~ χ²_n be a chi-squared random variable with n degrees of freedom.

(a) If n ≥ 9 and d ≥ 1, then

    R_V(ŷ_pcr) = σ²[ 1 + E{ η⁴τ⁴W_n / (η²τ²W_n + δ²)² } ] + O(√(n/(d+n)))
                 + θ²η²[ E_V{(uᵀû₁)² − 1}² + O( η²τ²δ² / (η²τ²d + δ²) ) ].

(b) If d ≥ n ≥ 9, then

    R_V(ŷ_bc) = σ²[ 1 + E{ η²τ²δ² / ((η²τ²W_n + δ²n/d)√(dn)) }
                      + O( δ² / (√(η²τ²W_n + δ²n/d) √(η²τ²n + δ²n/d)) ) ]
                + θ²η² E_V{ (l₁/(l₁ − l_n))(uᵀû₁)² − 1 }²                                  (11)
                + [ θ²η²δ² / (η²τ²d + δ²) ][ 1 + E{ (η²τ² + δ²) / √(η²τ²W_n + δ²n/d) } ]   (12)
                + O( [η²τ²δ²/(η²τ²d + δ²)] √(n/d) [ δ²/√(η²τ²n + δ²n/d) + δ⁴/(η²τ²n + δ²n/d)² ] ).

A proof of Theorem 2 (along with intermediate lemmas and propositions) may be found in the Supplementary Material. The necessity of the more complex error term in Theorem 2 (b) (as opposed to that in part (a)) will become apparent below.
When d is large, σ² is small, and θ, η, τ, δ ∈ C, for some compact subset C ⊆ (0, ∞), Theorem 2 suggests that

    R_V(ŷ_pcr) ≈ θ²η² E_V{(uᵀû₁)² − 1}²,
    R_V(ŷ_bc)  ≈ θ²η² E_V{(l₁/(l₁ − l_n))(uᵀû₁)² − 1}².

Thus, consistency of ŷ_pcr and ŷ_bc in the one-shot regime hinges on asymptotic properties of E_V{(uᵀû₁)² − 1}² and E_V{(l₁/(l₁ − l_n))(uᵀû₁)² − 1}². The following proposition is proved in the Supplementary Material.
in the Supplementary Material.
Proposition 1. Let Wn ? ?2n be a chi-squared random variable with n degrees of freedom.
(a) If n ? 9 and d ? 1, then
EV
2
? 1) ? 1 = E
(u u
T
2
?2
? 2 ? 2 Wn + ? 2
2
r
+O
n
d+n
.
(b) If d ? n ? 9, then
)
(
2
l1
n
?4
T
2
p
? 1) ? 1 = O
EV
(u u
?
.
l1 ? ln
d (? 2 ? 2 n + ? 2 n/d)2
? 1 )2 ? 1}2 ? E{? 2 /(? 2 ? 2 Wn +
Proposition 1 (a) implies that in the one-shot regime, EV {(uT u
2 2
? ) }=
6 0; by Theorem 2 (a), it follows that y?pcr is inconsistent. On the other hand, Proposition 1
2
? 1 )2 ? 1 ? 0 in the one-shot regime; thus, by Theorem 2
(b) implies that EV l1 /(l1 ? ln )(uT u
(b), y?bc is consistent. These results are summarized in Corollary 1, which follows immediately from
Theorem 2 and Proposition 1.
5
Corollary 1. Suppose that n ≥ 9 is fixed and let C ⊆ (0, ∞) be a compact set. Let W_n ~ χ²_n be a chi-squared random variable with n degrees of freedom. Then

    lim_{d→∞, σ²→0}  sup_{θ,η,τ,δ ∈ C; u ∈ ℝ^d}  | R_V(ŷ_pcr) − θ²η² E{δ²/(η²τ²W_n + δ²)}² | = 0

and

    lim_{d→∞, σ²→0}  sup_{θ,η,τ,δ ∈ C; u ∈ ℝ^d}  R_V(ŷ_bc) = 0.

For fixed n, and inf η²τ²/δ² > 0, the bound in Proposition 1 (b) is of order 1/d. This suggests that both terms (11)-(12) in Theorem 2 (b) have similar magnitude and, consequently, are both necessary to obtain accurate approximations for R_V(ŷ_bc). (It may be desirable to obtain more accurate approximations for E_V{(l₁/(l₁ − l_n))(uᵀû₁)² − 1}²; this could potentially be leveraged to obtain better approximations for R_V(ŷ_bc).) In Theorem 2 (a), the only non-vanishing term in the one-shot approximation for R_V(ŷ_pcr) involves E_V{(uᵀû₁)² − 1}²; this helps to explain the relative simplicity of this approximation, in comparison with Theorem 2 (b).

Theorem 2 and Proposition 1 give risk approximations that are valid for all d and n ≥ 9. However, as illustrated by Corollary 1, these approximations are most effective in a one-shot asymptotic setting, where n is fixed and d is large. In the one-shot regime, standard concepts, such as sample complexity (roughly, the sample size n required to ensure a certain risk bound) may be of secondary importance. Alternatively, in a one-shot setting, one might be more interested in metrics like "feature complexity": the number of features d required to ensure a given risk bound. Approximate feature complexity for ŷ_bc is easily computed using Theorem 2 and Proposition 1 (clearly, feature complexity depends heavily on model parameters, such as θ, the y-data noise level σ², and the x-data signal-to-noise ratio η²τ²/δ²).
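The non-vanishing limit for classical PCR in Corollary 1 is easy to evaluate by Monte Carlo over W_n ~ χ²_n. A sketch using the simulation section's parameter values (the symbol-to-value mapping is an assumption carried over from that section; names illustrative):

```python
import numpy as np

rng = np.random.default_rng(4)
n = 9
theta2, eta2, tau2, delta2 = 16.0, 4.0, 0.25, 1.0  # theta = 4

# Corollary 1: R(y_pcr) -> theta^2 eta^2 * ( E[ delta^2 / (eta^2 tau^2 W_n + delta^2) ] )^2
# as d -> infinity and sigma^2 -> 0, while R(y_bc) -> 0 in the same regime.
W = rng.chisquare(n, size=1_000_000)
inner = float(np.mean(delta2 / (eta2 * tau2 * W + delta2)))
limit = theta2 * eta2 * inner ** 2
print(round(limit, 2))
```

By Jensen's inequality the inner expectation exceeds δ²/(η²τ²n + δ²), so the limit is strictly positive: classical PCR carries an irreducible bias term no matter how large d becomes.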
6 An oracle estimator

In this section, we discuss a third method related to ŷ_pcr and ŷ_bc, which relies on information that is typically not available in practice. Thus, this method is usually non-implementable; however, we believe it is useful for comparative purposes.

Recall that both ŷ_bc and ŷ_pcr depend on the first principal component û₁, which may be viewed as an estimate of u. If an oracle provides knowledge of u in advance, then it is natural to consider the oracle PCR estimator

    β̂_or = (uᵀXᵀy / uᵀXᵀXu) u

and the associated linear prediction rule ŷ_or(x) = xᵀβ̂_or. A basic calculation yields the following result.

Proposition 2. If n ≥ 3, then

    R_V(ŷ_or) = σ² + [θ²η²δ² / (η²τ²d + δ²)]·(1 + 1/(n − 2)).

Clearly, ŷ_or is consistent in the one-shot regime: if C ⊆ (0, ∞) is compact and n ≥ 3 is fixed, then

    lim_{d→∞, σ²→0}  sup_{θ,η,τ,δ ∈ C; u ∈ ℝ^d}  R_V(ŷ_or) = 0.

7 Numerical results
Numerical results
In this section, we describe the results of a simulation study where we compared the performance of
y?pcr , y?bc , and y?or . We fixed ? = 4, ? 2 = 1/10, ? 2 = 4, ? 2 = 1/4, ? 2 = 1, and u = (1, 0, ...., 0) ?
6
Rd and simulated 1000 independent datasets with various d, n. Observe that ? 2 ? 2 /? 2 = 1. For each
? ,?
? ,?
? and the corresponding conditional prediction error
simulated dataset, we computed ?
pcr
bc
or
RV (?
y |y, X) = E {?
y (xnew ) ? ynew }2 y, X
?2 ?2
,
?2 d + 1
for y? = y?pcr , y?bc , y?or . The empirical prediction error for each method y? was then computed by averaging RV (?
y |y, X) over all 1000 simulated datasets. We also computed the?theoretical? prediction
error for each method, using the results from Sections 5-6, where appropriate. More specifically, for
y?pcr and y?bc , we used the leading terms of the approximations in Theorem 2 and Proposition 1 to
obtain the theoretical prediction error; for y?or , we used the formula given in Proposition 2 (see Table
1 for more details). Finally, we computed the relative error between the empirical prediction error
? ? ?)T (? 2 I + ? 2 ? 2 duuT )(?
? ? ?) + ? 2 +
(?
=
Table 1: Formulas for theoretical prediction error used in simulations (derived from Theorem 2 and Propositions 1-2). Expectations in the theoretical prediction error expressions for ŷ_pcr and ŷ_bc were computed empirically.

    ŷ_pcr:  σ²[1 + E{η⁴τ⁴W_n/(η²τ²W_n + δ²)²}] + θ²η² E{δ²/(η²τ²W_n + δ²)}²

    ŷ_bc:   σ²[1 + E{η²τ²δ²/((η²τ²W_n + δ²n/d)√(dn))}]
            + θ²η² E_V{(l₁/(l₁ − l_n))(uᵀû₁)² − 1}²
            + [θ²η²δ²/(η²τ²d + δ²)]·[1 + E{(η²τ² + δ²)/√(η²τ²W_n + δ²n/d)}]

    ŷ_or:   σ² + [θ²η²δ²/(η²τ²d + δ²)]·(1 + 1/(n − 2))
and the theoretical prediction error for each method,

    Relative Error = [(Empirical PE) − (Theoretical PE)] / (Empirical PE) × 100%.
Table 2: d = 500. Prediction error for ŷ_pcr (PCR), ŷ_bc (Bias-corrected PCR), and ŷ_or (Oracle). Relative error comparing Empirical PE and Theoretical PE is given in parentheses. "NA" indicates that Theoretical PE values are unknown.

                                              PCR               Bias-corrected    Oracle
    n = 2   Empirical PE                      18.7967           4.8668            1.5836
            Theoretical PE (Relative Error)   NA                ∞ (–)             ∞ (–)
    n = 4   Empirical PE                      6.4639            0.8023            0.3268
            Theoretical PE (Relative Error)   NA                NA                0.3416 (4.53%)
    n = 9   Empirical PE                      1.4187            0.3565            0.2587
            Theoretical PE (Relative Error)   1.2514 (11.79%)   0.2857 (19.86%)   0.2603 (0.62%)
    n = 20  Empirical PE                      0.4513            0.2732            0.2398
            Theoretical PE (Relative Error)   0.2987 (33.81%)   0.2497 (8.60%)    0.2404 (0.25%)
The results of the simulation study are summarized in Tables 2-3. Observe that ŷ_bc has smaller empirical prediction error than ŷ_pcr in every setting considered in Tables 2-3, and ŷ_bc substantially outperforms ŷ_pcr in most settings. Indeed, the empirical prediction error for ŷ_bc when n = 9 is smaller than that of ŷ_pcr when n = 20 (for both d = 500 and d = 5000); in other words, ŷ_bc outperforms ŷ_pcr, even when ŷ_pcr has more than twice as much training data. Additionally, the empirical prediction error of ŷ_bc is quite close to that of the oracle method ŷ_or, especially when n is relatively large. These results highlight the effectiveness of the bias-corrected PCR method ŷ_bc in settings where σ² and n are small, η²τ²/δ² is substantially larger than 0, and d is large.

For n = 2, 4, theoretical prediction error is unavailable in some instances. Indeed, while Proposition 2 and the discussion in Section 4 imply that if n = 2, then R_V(ŷ_bc) = R_V(ŷ_or) = ∞, we have not
Table 3: d = 5000. Prediction error for ŷ_pcr (PCR), ŷ_bc (Bias-corrected PCR), and ŷ_or (Oracle). Relative error comparing Empirical PE and Theoretical PE is given in parentheses. "NA" indicates that Theoretical PE values are unknown.

                                              PCR              Bias-corrected    Oracle
    n = 2   Empirical PE                      17.9564          2.0192            1.0316
            Theoretical PE (Relative Error)   NA               ∞ (–)             ∞ (–)
    n = 4   Empirical PE                      6.1220           0.2039            0.1637
            Theoretical PE (Relative Error)   NA               NA                0.1692 (3.36%)
    n = 9   Empirical PE                      1.2274           0.1378            0.1281
            Theoretical PE (Relative Error)   1.2485 (1.72%)   0.1314 (4.64%)    0.1289 (0.62%)
    n = 20  Empirical PE                      0.3150           0.1226            0.1189
            Theoretical PE (Relative Error)   0.2997 (4.86%)   0.1200 (2.12%)    0.1191 (0.17%)
pursued an expression for R_V(ŷ_pcr) when n = 2 (it appears that R_V(ŷ_pcr) < ∞); furthermore, the approximations in Theorem 2 for R_V(ŷ_pcr), R_V(ŷ_bc) do not apply when n = 4. In instances where theoretical prediction error is available, is finite, and d = 500, the relative error between empirical and theoretical prediction error for ŷ_pcr and ŷ_bc ranges from 8.60%-33.81%; for d = 5000, it ranges from 1.72%-4.86%. Thus, the accuracy of the theoretical prediction error formulas tends to improve as d increases, as one would expect. Further improved measures of theoretical prediction error for ŷ_pcr and ŷ_bc could potentially be obtained by refining the approximations in Theorem 2 and Proposition 1.
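The simulation design above can be sketched end to end. The following is an illustrative reimplementation, not the authors' code: it draws datasets from the model with the stated parameter values and averages the conditional prediction error for each method (fewer replicates than the paper, for speed):

```python
import numpy as np

rng = np.random.default_rng(3)
theta, sigma2, eta2, tau2, delta2 = 4.0, 0.1, 4.0, 0.25, 1.0
n, d, reps = 9, 500, 200  # the paper uses 1000 replicates

u = np.zeros(d); u[0] = 1.0
beta_true = (theta * np.sqrt(tau2) * eta2 * np.sqrt(d) / (delta2 + eta2 * tau2 * d)) * u

def cond_pe(bhat):
    """Conditional PE: (b - beta)' Sigma_x (b - beta) + sigma^2 + min risk."""
    diff = bhat - beta_true
    quad = delta2 * diff @ diff + eta2 * tau2 * d * (u @ diff) ** 2
    return quad + sigma2 + theta**2 * eta2 * delta2 / (eta2 * tau2 * d + delta2)

pe = {"pcr": [], "bc": [], "oracle": []}
for _ in range(reps):
    h = rng.normal(0, np.sqrt(eta2), n)
    X = np.outer(h, np.sqrt(tau2 * d) * u) + rng.normal(0, np.sqrt(delta2), (n, d))
    y = theta * h + rng.normal(0, np.sqrt(sigma2), n)
    _, s, Vt = np.linalg.svd(X, full_matrices=False)
    u1, l1, ln = Vt[0], s[0] ** 2, s[n - 1] ** 2
    b_pcr = (u1 @ X.T @ y / l1) * u1                 # classical PCR (5)
    b_bc = (u1 @ X.T @ y / (l1 - ln)) * u1           # bias-corrected PCR (8)
    b_or = (u @ X.T @ y / (u @ X.T @ X @ u)) * u     # oracle (knows u)
    for k, b in (("pcr", b_pcr), ("bc", b_bc), ("oracle", b_or)):
        pe[k].append(cond_pe(b))

for k, v in pe.items():
    print(k, round(float(np.mean(v)), 3))
```

With these settings the averaged errors should land near the n = 9, d = 500 row of Table 2, with PCR clearly worst and the oracle best.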
8 Discussion

In this article, we have proposed bias-corrected PCR for consistent one-shot learning in a simple latent factor model with continuous outcomes. Our analysis was motivated by problems in one-shot learning, as discussed in Section 1. However, the results in this paper may also be relevant for other applications and techniques related to high-dimensional data analysis, such as those involving reproducing kernel Hilbert spaces. Furthermore, our analysis sheds new light on PCR, a long-studied method for regression and prediction.

Many open questions remain. For instance, consider the semi-supervised setting, where additional unlabeled data x_{n+1}, ..., x_N is available, but the corresponding y_i's are not provided. Then the additional x-data could be used to obtain a better estimate of the first principal component u and perhaps devise a method whose performance is closer to that of the oracle procedure ŷ_or (indeed, ŷ_or may be viewed as a semi-supervised procedure that utilizes an infinite amount of unlabeled data to exactly identify u). Is bias-correction via inflation necessary in this setting? Presumably, bias-correction is not needed if N is large enough, but can this be made more precise? The simulations described in the previous section indicate that ŷ_bc outperforms the uncorrected PCR method ŷ_pcr in settings where twice as much labeled data is available for ŷ_pcr. This suggests that the role of bias-correction will remain significant in the semi-supervised setting, where additional unlabeled data (which is less informative than labeled data) is available. Related questions involving transductive learning [26, 27] may also be of interest for future research.

A potentially interesting extension of the present work involves multi-factor models. As opposed to the single-factor model (1)-(2), one could consider a more general k-factor model, where y_i = h_iᵀθ + ξ_i and x_i = Sh_i + ε_i; here h_i = (h_{i1}, ..., h_{ik})ᵀ ∈ ℝ^k is a multivariate normal random vector (a k-dimensional factor linking y_i and x_i), θ = (θ₁, ..., θ_k)ᵀ ∈ ℝ^k, and S = √d(τ₁u₁ ⋯ τ_k u_k) is a d × k matrix, with τ₁, ..., τ_k ∈ ℝ and unit vectors u₁, ..., u_k ∈ ℝ^d. It may also be of interest to work on relaxing the distributional (normality) assumptions made in this paper. Finally, we point out that the results in this paper could potentially be used to develop flexible probit (latent variable) models for one-shot classification problems.
References
[1] L. Fei-Fei, R. Fergus, and P. Perona. One-shot learning of object categories. Pattern Analysis and Machine Intelligence, IEEE Transactions on, 28:594-611, 2006.
[2] R. Salakhutdinov, J.B. Tenenbaum, and A. Torralba. One-shot learning with a hierarchical nonparametric Bayesian model. JMLR Workshop and Conference Proceedings Volume 26: Unsupervised and Transfer Learning Workshop, 27:195-206, 2012.
[3] M.C. Frank, N.D. Goodman, and J.B. Tenenbaum. A Bayesian framework for cross-situational word-learning. Advances in Neural Information Processing Systems, 20:20-29, 2007.
[4] J.B. Tenenbaum, T.L. Griffiths, and C. Kemp. Theory-based Bayesian models of inductive learning and reasoning. Trends in Cognitive Sciences, 10:309-318, 2006.
[5] C. Kemp, A. Perfors, and J.B. Tenenbaum. Learning overhypotheses with hierarchical Bayesian models. Developmental Science, 10:307-321, 2007.
[6] S. Carey and E. Bartlett. Acquiring a single new word. Proceedings of the Stanford Child Language Conference, 15:17-29, 1978.
[7] L.B. Smith, S.S. Jones, B. Landau, L. Gershkoff-Stowe, and L. Samuelson. Object name learning provides on-the-job training for attention. Psychological Science, 13:13-19, 2002.
[8] F. Xu and J.B. Tenenbaum. Word learning as Bayesian inference. Psychological Review, 114:245-272, 2007.
[9] M. Fink. Object classification from a single example utilizing class relevance metrics. Advances in Neural Information Processing Systems, 17:449-456, 2005.
[10] P. Hall, J.S. Marron, and A. Neeman. Geometric representation of high dimension, low sample size data. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 67:427-444, 2005.
[11] P. Hall, Y. Pittelkow, and M. Ghosh. Theoretical measures of relative performance of classifiers for high dimensional data with small sample sizes. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 70:159-173, 2008.
[12] Y.I. Ingster, C. Pouet, and A.B. Tsybakov. Classification of sparse high-dimensional vectors. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 367:4427-4448, 2009.
[13] W.F. Massy. Principal components regression in exploratory statistical research. Journal of the American Statistical Association, 60:234-256, 1965.
[14] I.M. Johnstone. On the distribution of the largest eigenvalue in principal components analysis. Annals of Statistics, 29:295-327, 2001.
[15] D. Paul. Asymptotics of sample eigenstructure for a large dimensional spiked covariance model. Statistica Sinica, 17:1617-1642, 2007.
[16] B. Nadler. Finite sample approximation results for principal component analysis: A matrix perturbation approach. Annals of Statistics, 36:2791-2817, 2008.
[17] I.M. Johnstone and A.Y. Lu. On consistency and sparsity for principal components analysis in high dimensions. Journal of the American Statistical Association, 104:682-693, 2009.
[18] S. Jung and J.S. Marron. PCA consistency in high dimension, low sample size context. Annals of Statistics, 37:4104-4130, 2009.
[19] S. Lee, F. Zou, and F.A. Wright. Convergence and prediction of principal component scores in high-dimensional settings. Annals of Statistics, 38:3605-3629, 2010.
[20] Q. Berthet and P. Rigollet. Optimal detection of sparse principal components in high dimension. arXiv preprint arXiv:1202.5070, 2012.
[21] S. Jung, A. Sen, and J.S. Marron. Boundary behavior in high dimension, low sample size asymptotics of PCA. Journal of Multivariate Analysis, 109:190-203, 2012.
[22] Z. Ma. Sparse principal component analysis and iterative thresholding. Annals of Statistics, 41:772-801, 2013.
[23] C. Stein. Inadmissibility of the usual estimator for the mean of a multivariate normal distribution. In Proceedings of the Third Berkeley Symposium on Mathematical Statistics and Probability, volume 1, pages 197-206, 1955.
[24] R. Tibshirani. Regression shrinkage and selection via the lasso. Journal of the Royal Statistical Society. Series B (Methodological), 58:267-288, 1996.
[25] T. Hastie, R. Tibshirani, and J. Friedman. The Elements of Statistical Learning: Data Mining, Inference, and Prediction. Springer, 2nd edition, 2009.
[26] V.N. Vapnik. Statistical Learning Theory. Wiley, 1998.
[27] K.S. Azoury and M.K. Warmuth. Relative loss bounds for on-line density estimation with the exponential family of distributions. Machine Learning, 43:211-246, 2001.
The Randomized Dependence Coefficient
David Lopez-Paz, Philipp Hennig, Bernhard Schölkopf
Max Planck Institute for Intelligent Systems
Spemannstraße 38, Tübingen, Germany
{dlopez,phennig,bs}@tue.mpg.de
Abstract
We introduce the Randomized Dependence Coefficient (RDC), a measure of nonlinear dependence between random variables of arbitrary dimension based on the
Hirschfeld-Gebelein-Rényi Maximum Correlation Coefficient. RDC is defined in
terms of correlation of random non-linear copula projections; it is invariant with
respect to marginal distribution transformations, has low computational cost and
is easy to implement: just five lines of R code, included at the end of the paper.
1 Introduction
Measuring statistical dependence between random variables is a fundamental problem in statistics.
Commonly used measures of dependence, Pearson's rho, Spearman's rank or Kendall's tau are computationally efficient and theoretically well understood, but consider only a limited class of association patterns, like linear or monotonically increasing functions. The development of non-linear
dependence measures is challenging because of the radically larger amount of possible association
patterns.
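As a quick illustration of this limitation (a hypothetical sketch, not from the paper): Pearson's correlation is essentially zero for the deterministic but non-monotone relation y = x², even though y is a function of x.

```python
import random

def pearson(a, b):
    """Sample Pearson correlation coefficient of two equal-length sequences."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((u - ma) * (v - mb) for u, v in zip(a, b))
    va = sum((u - ma) ** 2 for u in a)
    vb = sum((v - mb) ** 2 for v in b)
    return cov / (va * vb) ** 0.5

random.seed(0)
x = [random.uniform(-1.0, 1.0) for _ in range(100000)]
y = [xi * xi for xi in x]  # perfectly dependent on x, but not linearly

print(abs(pearson(x, y)) < 0.05)  # True: the linear measure misses y = x^2
```

Since x is symmetric around zero, cov(x, x²) vanishes in expectation, so the sample correlation is near zero despite perfect dependence.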
Despite these difficulties, many non-linear statistical dependence measures have been developed
recently. Examples include the Alternating Conditional Expectations or backfitting algorithm (ACE)
[2, 9], Kernel Canonical Correlation Analysis (KCCA) [1], (Copula) Hilbert-Schmidt Independence
Criterion (CHSIC, HSIC) [6, 5, 15], Distance or Brownian Correlation (dCor) [24, 23] and the
Maximal Information Coefficient (MIC) [18]. However, these methods exhibit high computational
demands (at least quadratic costs in the number of samples for KCCA, HSIC, CHSIC, dCor or
MIC), are limited to measuring dependencies between scalar random variables (ACE, MIC) or can
be difficult to implement (ACE, MIC).
This paper develops the Randomized Dependence Coefficient (RDC), an estimator of the Hirschfeld-Gebelein-Rényi Maximum Correlation Coefficient (HGR) addressing the issues listed above. RDC
defines dependence between two random variables as the largest canonical correlation between random non-linear projections of their respective empirical copula-transformations. RDC is invariant
to monotonically increasing transformations, operates on random variables of arbitrary dimension,
and has computational cost of O(n log n) with respect to the sample size. Moreover, it is easy to
implement: just five lines of R code, included in Appendix A.
The following Section reviews the classic work of Alfréd Rényi [17], who proposed seven desirable fundamental properties of dependence measures, proved to be satisfied by the Hirschfeld-Gebelein-Rényi Maximum Correlation Coefficient (HGR). Section 3 introduces the Randomized Dependence Coefficient as an estimator designed in the spirit of HGR, since HGR itself is computationally
intractable. Properties of RDC and its relationship to other non-linear dependence measures are
analysed in Section 4. Section 5 validates the empirical performance of RDC on a series of numerical experiments on both synthetic and real-world data.
2 Hirschfeld-Gebelein-Rényi's Maximum Correlation Coefficient

In 1959 [17], Alfréd Rényi argued that a measure of dependence ρ* : 𝒳 × 𝒴 → [0, 1] between random variables X ∈ 𝒳 and Y ∈ 𝒴 should satisfy seven fundamental properties:

1. ρ*(X, Y) is defined for any pair of non-constant random variables X and Y.
2. ρ*(X, Y) = ρ*(Y, X)
3. 0 ≤ ρ*(X, Y) ≤ 1
4. ρ*(X, Y) = 0 iff X and Y are statistically independent.
5. For bijective Borel-measurable functions f, g : ℝ → ℝ, ρ*(X, Y) = ρ*(f(X), g(Y)).
6. ρ*(X, Y) = 1 if for Borel-measurable functions f or g, Y = f(X) or X = g(Y).
7. If (X, Y) ∼ N(μ, Σ), then ρ*(X, Y) = |ρ(X, Y)|, where ρ is the correlation coefficient.

Rényi also showed the Hirschfeld-Gebelein-Rényi Maximum Correlation Coefficient (HGR) [3, 17] to satisfy all these properties. HGR was defined by Gebelein in 1941 [3] as the supremum of Pearson's correlation coefficient ρ over all Borel-measurable functions f, g of finite variance:

    hgr(X, Y) = sup_{f,g} ρ(f(X), g(Y)),    (1)
Since the supremum in (1) is over an infinite-dimensional space, HGR is not computable. It is
an abstract concept, not a practical dependence measure. In the following we propose a scalable
estimator with the same structure as HGR: the Randomized Dependence Coefficient.
3 Randomized Dependence Coefficient

The Randomized Dependence Coefficient (RDC) measures the dependence between random samples X ∈ ℝ^{p×n} and Y ∈ ℝ^{q×n} as the largest canonical correlation between k randomly chosen non-linear projections of their copula transformations. Before Section 3.4 defines this concept formally, we describe the three necessary steps to construct the RDC statistic: copula-transformation of each of the two random samples (Section 3.1), projection of the copulas through k randomly chosen non-linear maps (Section 3.2) and computation of the largest canonical correlation between the two sets of non-linear random projections (Section 3.3). Figure 1 offers a sketch of this process.

[Figure 1: RDC computation for a simple set of samples {(x_i, y_i)}_{i=1}^{100} drawn from a noisy circular pattern: the samples are used to estimate the copula, then mapped with randomly drawn non-linear functions. The RDC is the largest canonical correlation between these non-linear projections.]
3.1 Estimation of Copula-Transformations
To achieve invariance with respect to transformations on marginal distributions (such as shifts or rescalings), we operate on the empirical copula transformation of the data [14, 15]. Consider a random vector X = (X_1, ..., X_d) with continuous marginal cumulative distribution functions (cdfs) P_i, 1 ≤ i ≤ d. Then the vector U = (U_1, ..., U_d) := P(X) = (P_1(X_1), ..., P_d(X_d)), known as the copula transformation, has uniform marginals:
Theorem 1. (Probability Integral Transform [14]) For a random variable X with cdf P , the random
variable U := P (X) is uniformly distributed on [0, 1].
The random variables U_1, ..., U_d are known as the observation ranks of X_1, ..., X_d. Crucially, U preserves the dependence structure of the original random vector X, but ignores each of its d marginal forms [14]. The joint distribution of U is known as the copula of X:

Theorem 2. (Sklar [20]) Let the random vector X = (X_1, ..., X_d) have continuous marginal cdfs P_i, 1 ≤ i ≤ d. Then, the joint cumulative distribution of X is uniquely expressed as:

    P(X_1, ..., X_d) = C(P_1(X_1), ..., P_d(X_d)),    (2)

where the distribution C is known as the copula of X.
A practical estimator of the univariate cdfs P_1, ..., P_d is the empirical cdf:

    P_n(x) := (1/n) Σ_{i=1}^n I(X_i ≤ x),    (3)

which gives rise to the empirical copula transformations of a multivariate sample:

    P_n(x) = [P_{n,1}(x_1), ..., P_{n,d}(x_d)].    (4)
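The rank-based estimator above is straightforward to implement. The following pure-Python sketch (an illustrative example, assuming no ties in the data) computes P_n for one marginal:

```python
import random

def empirical_cdf_transform(sample):
    """Replace each value by rank/n, i.e. the empirical cdf P_n evaluated at it."""
    n = len(sample)
    order = sorted(range(n), key=lambda i: sample[i])
    u = [0.0] * n
    for r, i in enumerate(order, start=1):
        u[i] = r / n
    return u

random.seed(1)
x = [random.gauss(10.0, 5.0) for _ in range(1000)]  # arbitrary marginal location/scale
u = empirical_cdf_transform(x)

# The transformed values form a uniform grid on (0, 1], regardless of the marginal.
print(min(u), max(u))  # 0.001 1.0
```

Whatever the marginal distribution of x, the transformed sample is exactly the grid {1/n, ..., 1}, which is the uniformity that Theorem 1 promises in the limit.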
The Massart-Dvoretzky-Kiefer-Wolfowitz inequality [13] can be used to show that empirical copula transformations converge fast to the true transformation as the sample size increases:

Theorem 3. (Convergence of the empirical copula, [15, Lemma 7]) Let X_1, ..., X_n be an i.i.d. sample from a probability distribution over ℝ^d with marginal cdfs P_1, ..., P_d. Let P(X) be the copula transformation and P_n(X) the empirical copula transformation. Then, for any ε > 0:

    Pr[ sup_{x∈ℝ^d} ‖P(x) − P_n(x)‖_2 > ε ] ≤ 2d exp(−2nε²/d).    (5)

Computing P_n(X) involves sorting the marginals of X ∈ ℝ^{d×n}, thus O(dn log(n)) operations.
3.2 Generation of Random Non-Linear Projections
The second step of the RDC computation is to augment the empirical copula transformations with
non-linear projections, so that linear methods can subsequently be used to capture non-linear dependencies on the original data. This is a classic idea also used in other areas, particularly in regression.
In an elegant result, Rahimi and Recht [16] proved that linear regression on random, non-linear
projections of the original feature space can generate high-performance regressors:
Theorem 4. (Rahimi-Recht) Let p be a distribution on Ω and |φ(x; w)| ≤ 1. Let F = { f(x) = ∫_Ω α(w) φ(x; w) dw : |α(w)| ≤ Cp(w) }. Draw w_1, ..., w_k iid from p. Further let δ > 0, and c be some L-Lipschitz loss function, and consider data {x_i, y_i}_{i=1}^n drawn iid from some arbitrary P(X, Y). The α_1, ..., α_k for which f_k(x) = Σ_{i=1}^k α_i φ(x; w_i) minimizes the empirical risk c(f_k(x), y) has a distance from the c-optimal estimator in F bounded by

    E_P[c(f_k(x), y)] − min_{f∈F} E_P[c(f(x), y)] ≤ O( (1/√n + 1/√k) LC √(log(1/δ)) )    (6)

with probability at least 1 − 2δ.
Intuitively, Theorem 4 states that randomly selecting the w_i in f_k(x) = Σ_{i=1}^k α_i φ(x; w_i) instead of optimising them causes only bounded error.
The choice of the non-linearities φ : ℝ → ℝ is the main and unavoidable assumption in RDC. This choice is a well-known problem common to all non-linear regression methods and has been studied extensively in the theory of regression as the selection of reproducing kernel Hilbert space [19, §3.13]. The only way to favour one such family and distribution over another is to use prior assumptions about which kind of distributions the method will typically have to analyse.
We use random features instead of the Nyström method because of their smaller memory and computation requirements [11]. In our experiments, we will use sinusoidal projections, φ(w^T x + b) := sin(w^T x + b). Arguments favouring this choice are that shift-invariant kernels are approximated with these features when using the appropriate random parameter sampling distribution [16], [4, p. 208], [22, p. 24], and that functions with absolutely integrable Fourier transforms are approximated with L_2 error below O(1/√k) by k of these features [10].

Let the random parameters w_i ∼ N(0, sI), b_i ∼ N(0, s). Choosing w_i to be Normal is analogous to the use of the Gaussian kernel for HSIC, CHSIC or KCCA [16]. Tuning s is analogous to selecting the kernel width, that is, to regularize the non-linearity of the random projections.
Given a data collection X = (x_1, ..., x_n), we will denote by

    Φ(X; k, s) := [ φ(w_1^T x_1 + b_1)  ···  φ(w_k^T x_1 + b_k)
                            ⋮                        ⋮
                    φ(w_1^T x_n + b_1)  ···  φ(w_k^T x_n + b_k) ]^T    (7)

the k-th order random non-linear projection from X ∈ ℝ^{d×n} to Φ(X; k, s) ∈ ℝ^{k×n}. The computational complexity of computing Φ(X; k, s) with naive matrix multiplications is O(kdn). However, recent techniques using fast Walsh-Hadamard transforms [11] allow computing these feature expansions within a computational cost of O(k log(d)n) and O(k) storage.
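As an illustrative sketch (not the paper's code), the feature map Φ with sinusoidal non-linearities can be written in a few lines; note that w ∼ N(0, sI) means each coordinate of w has standard deviation √s:

```python
import math
import random

def random_sin_features(X, k=20, s=1.0 / 6.0, seed=0):
    """Map each row x of X to (sin(w_1^T x + b_1), ..., sin(w_k^T x + b_k))."""
    rng = random.Random(seed)
    d = len(X[0])
    W = [[rng.gauss(0.0, math.sqrt(s)) for _ in range(d)] for _ in range(k)]
    b = [rng.gauss(0.0, math.sqrt(s)) for _ in range(k)]
    return [[math.sin(sum(w_j * x_j for w_j, x_j in zip(W[i], x)) + b[i])
             for i in range(k)]
            for x in X]

U = [[0.1, 0.9], [0.5, 0.5], [0.7, 0.2]]  # e.g. copula-transformed samples
F = random_sin_features(U, k=4)
print(len(F), len(F[0]))  # 3 4
```

Each sample is mapped to k bounded features in [−1, 1]; the naive loop above is the O(kdn) route, whereas the fast transforms cited in the text avoid materialising W.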
3.3 Computation of Canonical Correlations
The final step of RDC is to compute the linear combinations of the augmented empirical copula transformations that have maximal correlation. Canonical Correlation Analysis (CCA, [7]) is the calculation of pairs of basis vectors (α, β) such that the projections α^T X and β^T Y of two random samples X ∈ ℝ^{p×n} and Y ∈ ℝ^{q×n} are maximally correlated. The correlations between the projected (or canonical) random samples are referred to as canonical correlations. There exist up to max(rank(X), rank(Y)) of them. Canonical correlations ρ² are the solutions to the eigenproblem:

    [ 0                C_xx^{-1} C_xy ] [α]        [α]
    [ C_yy^{-1} C_yx   0              ] [β]  =  ρ² [β],    (8)

where C_xy = cov(X, Y) and the matrices C_xx and C_yy are assumed to be invertible. Therefore, the largest canonical correlation ρ_1 between X and Y is the supremum of the correlation coefficients over their linear projections, that is: ρ_1(X, Y) = sup_{α,β} ρ(α^T X, β^T Y).

When p, q ≪ n, the cost of CCA is dominated by the estimation of the matrices C_xx, C_yy and C_xy, hence being O((p + q)² n) for two random variables of dimensions p and q, respectively.
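For scalar samples (p = q = 1), the projections α and β are mere rescalings, so the largest canonical correlation reduces to the absolute Pearson correlation and is invariant to invertible linear maps of either sample. A small pure-Python check (illustrative, with made-up data):

```python
import random

def pearson(a, b):
    """Sample Pearson correlation coefficient."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((u - ma) * (v - mb) for u, v in zip(a, b))
    va = sum((u - ma) ** 2 for u in a)
    vb = sum((v - mb) ** 2 for v in b)
    return cov / (va * vb) ** 0.5

random.seed(2)
x = [random.gauss(0.0, 1.0) for _ in range(5000)]
y = [2.0 * xi + random.gauss(0.0, 1.0) for xi in x]

r = pearson(x, y)
r_mapped = pearson(x, [-3.0 * yi + 7.0 for yi in y])  # invertible linear map of y
print(abs(abs(r) - abs(r_mapped)) < 1e-9)  # True: |rho| is what CCA recovers
```

This is the one-dimensional shadow of the sup over α, β in the text: linear re-parameterisations of either sample cannot change the achievable correlation magnitude.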
3.4 Formal Definition of RDC

Given the random samples X ∈ ℝ^{p×n} and Y ∈ ℝ^{q×n} and the parameters k ∈ ℕ⁺ and s ∈ ℝ⁺, the Randomized Dependence Coefficient between X and Y is defined as:

    rdc(X, Y; k, s) := sup_{α,β} ρ( α^T Φ(P(X); k, s), β^T Φ(P(Y); k, s) ).    (9)
4 Properties of RDC
Computational complexity: In the typical setup (very large n, large p and q, small k) the computational complexity of RDC is dominated by the calculation of the copula-transformations. Hence, we achieve a cost in terms of the sample size of O((p+q)n log n + kn log(pq) + k²n) ≈ O(n log n).
Ease of implementation: An implementation of RDC in R is included in the Appendix A.
Relationship to the HGR coefficient: It is tempting to wonder whether RDC is a consistent, or even an efficient, estimator of the HGR coefficient. However, a simple experiment shows that it is not desirable to approximate HGR exactly on finite datasets: Consider p(X, Y) = N(x; 0, 1) N(y; 0, 1), which is independent and thus, by both Rényi's 4th and 7th properties, has hgr(X, Y) = 0. However, for finitely many N samples from p(X, Y), almost surely, values in both X and Y are pairwise different and separated by a finite difference. So there exist continuous (thus Borel measurable) functions f(X) and g(Y) mapping both X and Y to the sorting ranks of Y, i.e. f(x_i) = g(y_i) ∀(x_i, y_i) ∈ (X, Y). Therefore, the finite-sample version of Equation (1) is constant and equal to "1" for continuous random variables. Meaningful measures of dependence from finite samples thus must rely on some form of regularization. RDC achieves this by approximating the space of Borel measurable functions with the restricted function class F from Theorem 4:
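This degeneracy is easy to reproduce (an illustrative sketch with made-up data): pair each x_i with the rank of its partner y_i, and the sample correlation is exactly one even though x and y are independent.

```python
import random

def pearson(a, b):
    """Sample Pearson correlation coefficient."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((u - ma) * (v - mb) for u, v in zip(a, b))
    va = sum((u - ma) ** 2 for u in a)
    vb = sum((v - mb) ** 2 for v in b)
    return cov / (va * vb) ** 0.5

random.seed(4)
n = 200
x = [random.random() for _ in range(n)]
y = [random.random() for _ in range(n)]  # independent of x

# f maps x_i to the rank of its paired y_i (well-defined since all x_i differ);
# g maps y_i to its own rank, so f(x_i) = g(y_i) for every sample pair.
rank_of_y = {v: r for r, v in enumerate(sorted(y), start=1)}
fx = [rank_of_y[yi] for yi in y]  # f(x_i), defined through the pairing (x_i, y_i)
gy = [rank_of_y[yi] for yi in y]  # g(y_i)

print(abs(pearson(fx, gy) - 1.0) < 1e-9)  # True: "perfect dependence" from noise
```

This is exactly the overfitting that the restricted function class and random-feature regularization are meant to rule out.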
Assume the optimal transformations f and g (Equation 1) to belong to the Reproducing Kernel Hilbert Space F (Theorem 4), with associated shift-invariant, positive semi-definite kernel function k(x, x') = ⟨φ(x), φ(x')⟩_F ≤ 1. Then, with probability greater than 1 − 2δ:

    hgr(X, Y; F) − rdc(X, Y; k) = O( (LC‖m‖_F/√n + 1/√k) √(log(1/δ)) ),    (10)

where m := αα^T + ββ^T and n, k denote the sample size and number of random features. The bound (10) is the sum of two errors. The error O(1/√n) is due to the convergence of CCA's largest eigenvalue in the finite sample size regime. This result [8, Theorem 6] is originally obtained by posing CCA as a least squares regression on the product space induced by the feature map φ(x, y) = [φ(x)φ(x)^T, φ(y)φ(y)^T, √2 φ(x)φ(y)^T]^T. Because of approximating φ with k random features, an additional error O(1/√k) is introduced in the least squares regression [16, Lemma 3]. Therefore, an equivalence between RDC and KCCA is established if RDC uses an infinite number of sinusoidal features, the random sampling distribution is set to the inverse Fourier transform of the shift-invariant kernel used by KCCA, and the copula-transformations are discarded. However, when k → n, regularization is needed to avoid spurious perfect correlations, as discussed above.
Relationship to other estimators: Table 1 summarizes several state-of-the-art dependence measures showing, for each measure, whether it allows for general non-linear dependence estimation, handles multidimensional random variables, is invariant with respect to changes in marginal distributions, returns a statistic in [0, 1], satisfies Rényi's properties (Section 2), and how many parameters it requires. As parameters, we here count the kernel function for kernel methods, the basis function and number of random features for RDC, the stopping tolerance for ACE and the search-grid size for MIC, respectively. Finally, the table lists computational complexities with respect to sample size.

When using random features linear in some neighbourhood around zero (like sinusoids or sigmoids), RDC converges to Spearman's rank correlation coefficient as s → 0, for any k.
Table 1: Comparison between non-linear dependence measures.

Name of Coeff.   Non-Linear  Vector Inputs  Marginal Invariant  Rényi's Properties  Coeff. in [0,1]  # Par.  Comp. Cost
Pearson's ρ      no          no             no                  no                  yes              0       n
Spearman's ρ     no          no             yes                 no                  yes              0       n log n
Kendall's τ      no          no             yes                 no                  yes              0       n log n
CCA              no          yes            no                  no                  yes              0       n
KCCA [1]         yes         yes            no                  no                  yes              1       n^3
ACE [2]          yes         no             no                  yes                 yes              1       n
MIC [18]         yes         no             no                  no                  yes              1       n^1.2
dCor [24]        yes         yes            no                  no                  yes              1       n^2
HSIC [5]         yes         yes            no                  no                  no               1       n^2
CHSIC [15]       yes         yes            yes                 no                  no               1       n^2
RDC              yes         yes            yes                 yes                 yes              2       n log n
Testing for independence with RDC: Consider the hypothesis "the two sets of non-linear projections are mutually uncorrelated". Under normality assumptions (or large sample sizes), Bartlett's approximation [12] can be used to show that

    ((2k + 3)/2 − n) Σ_{i=1}^k log(1 − ρ_i²) ∼ χ²_{k²},

where ρ_1, ..., ρ_k are the canonical correlations between Φ(P(X); k, s) and Φ(P(Y); k, s). Alternatively, non-parametric asymptotic distributions can be obtained from the spectrum of the inner products of the non-linear random projection matrices [25, Theorem 3].
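As a small worked example (hypothetical numbers, not from the paper), the Bartlett statistic for k canonical correlations ρ_1, ..., ρ_k from n samples is computed as follows; under the null it is approximately χ² with k² degrees of freedom:

```python
import math

def bartlett_statistic(rhos, n):
    """((2k + 3)/2 - n) * sum_i log(1 - rho_i^2); ~ chi^2 with k^2 dof under H0."""
    k = len(rhos)
    return ((2 * k + 3) / 2.0 - n) * sum(math.log(1.0 - r * r) for r in rhos)

# Hypothetical canonical correlations from k = 3 projection pairs, n = 500 samples.
stat = bartlett_statistic([0.08, 0.05, 0.02], 500)
print(round(stat, 2))  # 4.62, well below the 5% critical value of chi^2_9 (16.92)
```

With such small canonical correlations the statistic stays far below the critical value, so the hypothesis of uncorrelated projections would not be rejected.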
5 Experimental Results
We performed experiments on both synthetic and real-world data to validate the empirical performance of RDC versus the non-linear dependence measures listed in Table 1. In some experiments we do not compare against KCCA because we were unable to find a good set of hyperparameters.
Parameter selection: For RDC, the number of random features is set to k = 20 for both random samples, since no significant improvements were observed for larger values. The random feature sampling parameter s is more crucial, and set as follows: when the marginals of u are standard uniforms, w ∼ N(0, sI) and b ∼ N(0, s), then V[w^T u + b] = s(1 + d/3); therefore, we opt to set s to a linear scaling of the input variable dimensionality. In all our experiments s = 1/(6d) worked well. The development of better methods to set the parameters of RDC is left as future work.

HSIC and CHSIC use Gaussian kernels k(z, z') = exp(−γ‖z − z'‖₂²) with γ⁻¹ set to the euclidean distance median of each sample [5]. MIC's search-grid size is set to B(n) = n^0.6 as recommended by the authors [18], although speed improvements are achieved when using lower values. ACE's tolerance is set to ε = 0.01, the default value in the R package acepack.
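The variance identity used above, V[w^T u + b] = s(1 + d/3) for u uniform on [0, 1]^d, w ∼ N(0, sI) and b ∼ N(0, s), is easy to confirm by Monte Carlo (an illustrative check with arbitrary s and d):

```python
import math
import random

random.seed(3)
s, d, trials = 0.5, 4, 200000
std = math.sqrt(s)

vals = []
for _ in range(trials):
    u = [random.random() for _ in range(d)]          # uniform marginals
    w = [random.gauss(0.0, std) for _ in range(d)]   # w ~ N(0, s I)
    b = random.gauss(0.0, std)                       # b ~ N(0, s)
    vals.append(sum(wj * uj for wj, uj in zip(w, u)) + b)

m = sum(vals) / trials
var = sum((v - m) ** 2 for v in vals) / trials
print(abs(var - s * (1 + d / 3)) < 0.05)  # True: matches s(1 + d/3)
```

The identity follows because each term w_j u_j contributes variance s·E[u_j²] = s/3 and the bias b contributes s, for a total of s(d/3 + 1).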
5.1 Synthetic Data
Resistance to additive noise: We define the power of a dependence measure as its ability to
discern between dependent and independent samples that share equal marginal forms. In the spirit
of Simon and Tibshirani¹, we conducted experiments to estimate the power of RDC as a measure
of non-linear dependence. We chose 8 bivariate association patterns, depicted inside little boxes in
Figure 3. For each of the 8 association patterns, 500 repetitions of 500 samples were generated,
in which the input sample was uniformly distributed on the unit interval. Next, we regenerated
the input sample randomly, to generate independent versions of each sample with equal marginals.
Figure 3 shows the power for the discussed non-linear dependence measures as the variance of some
zero-mean Gaussian additive noise increases from 1/30 to 3. RDC shows worse performance in
the linear association pattern due to overfitting and in the step-function due to the smoothness prior
induced by the sinusoidal features, but has good performance in non-functional association patterns.
Running times: Table 2 shows running times for the considered non-linear dependence measures
on scalar, uniformly distributed, independent samples of sizes {10³, ..., 10⁶} when averaging over
100 runs. Single runs above ten minutes were cancelled. Pearson?s ?, ACE, dCor, KCCA and MIC
are implemented in C, while RDC, HSIC and CHSIC are implemented as interpreted R code. KCCA
is approximated using incomplete Cholesky decompositions as described in [1].
Table 2: Average running times (in seconds) for dependence measures versus sample sizes. A dash marks single runs above ten minutes, which were cancelled.

sample size   Pearson's ρ   RDC      ACE      KCCA     dCor     HSIC     CHSIC    MIC
1,000         0.0001        0.0047   0.0080   0.402    0.3417   0.3103   0.3501   1.0983
10,000        0.0002        0.0557   0.0782   3.247    59.587   27.630   29.522   –
100,000       0.0071        0.3991   0.5101   43.801   –        –        –        –
1,000,000     0.0914        4.6253   5.3830   –        –        –        –        –
Value of statistic in [0, 1]: Figure 4 shows RDC, ACE, dCor, MIC, Pearson's ρ, Spearman's rank and Kendall's τ dependence estimates for 14 different associations of two scalar random samples.
RDC scores values close to one on all the proposed dependent associations, whilst scoring values
close to zero for the independent association, depicted last. When the associations are Gaussian (first
row), RDC scores values close to the Pearson?s correlation coefficient (Section 2, 7th property).
¹ http://www-stat.stanford.edu/~tibs/reshef/comment.pdf
5.2 Feature Selection in Real-World Data
We performed greedy feature selection via dependence maximization [21] on eight real-world datasets. More specifically, we attempted to construct the subset of features G ⊆ X that minimizes the normalized mean squared regression error (NMSE) of a Gaussian process regressor. We do so by selecting the feature x^(i) maximizing dependence between the feature set G_i = {G_{i−1}, x^(i)} and the target variable y at each iteration i ∈ {1, ..., 10}, such that G_0 = {∅} and x^(i) ∉ G_{i−1}.

We considered 12 heterogeneous datasets, obtained from the UCI dataset repository², the Gaussian process web site Data³ and the Machine Learning data set repository⁴. Random training/test partitions are computed to be disjoint and equal sized.

Since G can be multi-dimensional, we compare RDC to the non-linear methods dCor, HSIC and CHSIC. Given their quadratic computational demands, dCor, HSIC and CHSIC use up to 1,000 points when measuring dependence; this constraint only applied on the sarcos and abalone datasets. Results are averages of 20 random training/test partitions.
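The greedy selection loop can be sketched as follows (an illustrative sketch: `dep` is a placeholder standing in for any dependence measure such as RDC, and the data are made up):

```python
import random

def pearson(a, b):
    """Sample Pearson correlation coefficient."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((u - ma) * (v - mb) for u, v in zip(a, b))
    va = sum((u - ma) ** 2 for u in a)
    vb = sum((v - mb) ** 2 for v in b)
    return cov / (va * vb) ** 0.5 if va > 0 and vb > 0 else 0.0

def dep(feature_set, target):
    """Stand-in dependence measure: the best single-column |correlation|
    with the target (a placeholder for RDC on the whole feature set)."""
    return max(abs(pearson(col, target)) for col in feature_set)

def greedy_select(columns, target, iterations):
    """At each step, add the feature maximizing dependence of the grown set."""
    selected, remaining = [], list(range(len(columns)))
    for _ in range(iterations):
        best = max(remaining, key=lambda j: dep(selected + [columns[j]], target))
        selected.append(columns[best])
        remaining.remove(best)
    return selected

random.seed(5)
n = 300
noise = [[random.random() for _ in range(n)] for _ in range(3)]
signal = [random.random() for _ in range(n)]
target = [si + 0.1 * random.random() for si in signal]

picked = greedy_select(noise + [signal], target, iterations=2)
print(picked[0] is signal)  # True: the informative feature is chosen first
```

Replacing `dep` with a multivariate measure (RDC, dCor, HSIC, CHSIC) recovers the procedure used in the experiments.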
[Figure 2: Feature selection experiments on real-world datasets. Each panel plots Gaussian process NMSE against the number of selected features (1-10) for dCor, HSIC, CHSIC and RDC on the datasets abalone, automobile, autompg, breast, calhousing, cpuact, crime, housing, insurance, parkinson, sarcos and whitewine.]
Figure 2 summarizes the results for all datasets and algorithms as the number of selected features
increases. RDC performs best in most datasets, with much lower running time than its contenders.
6 Conclusion
We have presented the randomized dependence coefficient, a lightweight non-linear measure of dependence between multivariate random samples. Constructed as a finite-dimensional estimator in the spirit of the Hirschfeld-Gebelein-Rényi maximum correlation coefficient, RDC performs well empirically, is scalable to very large datasets, and is easy to adapt to concrete problems.

We thank fruitful discussions with Alberto Suárez, Theofanis Karaletsos and David Reshef.
² http://www.ics.uci.edu/~mlearn
³ http://www.gaussianprocess.org/gpml/data/
⁴ http://www.mldata.org
[Figure 3: Power of the discussed measures (cor, dCor, MIC, ACE, HSIC, CHSIC, RDC) on several bivariate association patterns as the noise level increases from 0 to 100. Insets show the noise-free form of each association pattern.]
[Figure 4: RDC, ACE, dCor, MIC, Pearson's ρ, Spearman's rank and Kendall's τ estimates (numbers in tables above plots, in that order) for several bivariate association patterns.]
A R Source Code

rdc <- function(x,y,k=20,s=1/6,f=sin) {
  # Empirical copula transformation: per-column ranks scaled into (0,1], plus a constant column.
  x <- cbind(apply(as.matrix(x),2,function(u)rank(u)/length(u)),1)
  y <- cbind(apply(as.matrix(y),2,function(u)rank(u)/length(u)),1)
  # Scale by s/ncol and project onto k random directions with standard normal weights.
  x <- s/ncol(x)*x%*%matrix(rnorm(ncol(x)*k),ncol(x))
  y <- s/ncol(y)*y%*%matrix(rnorm(ncol(y)*k),ncol(y))
  # Largest canonical correlation between the sinusoidal feature expansions.
  cancor(cbind(f(x),1),cbind(f(y),1))$cor[1]
}
Sparse Additive Text Models with Low Rank
Background
Lei Shi
Baidu.com, Inc.
P.R. China
[email protected]
Abstract
The sparse additive model for text modeling involves computing sums of exponentials,
which is costly at large scale. Moreover, the assumption of an equal background across all classes/topics may be too strong. This paper proposes the
sparse additive model with low rank background (SAM-LRB) and obtains simple yet efficient estimation. Particularly, employing a double majorization bound,
we approximate the log-likelihood by a quadratic lower-bound free of log-sum-exp terms. The constraints of low rank and sparsity are then simply embodied by
nuclear norm and ℓ1-norm regularizers. Interestingly, we find that the optimization task of SAM-LRB can be transformed into the same form as in Robust PCA.
Consequently, parameters of supervised SAM-LRB can be efficiently learned using an existing algorithm for Robust PCA based on accelerated proximal gradient.
Besides the supervised case, we extend SAM-LRB to unsupervised and multifaceted scenarios. Experiments on three real datasets demonstrate the effectiveness
and efficiency of SAM-LRB, compared with a few state-of-the-art models.
1 Introduction
Generative models of text have gained great popularity in analyzing large collections of documents
[3, 4, 17]. This type of model overwhelmingly relies on the Dirichlet-Multinomial conjugate pair,
perhaps mainly because its formulation and estimation are straightforward and efficient. However,
the ease of parameter estimation may come at a cost: unnecessarily over-complicated latent structures and a lack of robustness to limited training data. Several efforts have emerged to seek alternative
formulations, taking the correlated topic models [13, 19] for instance.
Recently in [10], the authors listed three main problems with Dirichlet-Multinomial generative models, namely inference cost, overparameterization, and lack of sparsity. Motivated by them, a Sparse
Additive GEnerative model (SAGE) was proposed in [10] as an alternative choice of generative model. Its core idea is that the lexical distribution in log-space comes by adding the background distribution with sparse deviation vectors. Successfully applying SAGE, effort [14] discovers geographical
topics in the twitter stream, and paper [25] detects communities in computational linguistics.
However, SAGE still suffers from two problems. First, the likelihood and estimation involve
computing sums of exponentials due to the soft-max generative nature, which is time-consuming at large scales. Second, SAGE assumes one single background vector across all classes/topics,
or equivalently, there is one background vector for each class/topic but all background vectors are
constrained to be equal. This assumption might be too strong in some applications, e.g., when lots
of synonyms vary their distributions across different classes/topics.
Motivated to solve the second problem, we propose to use a low rank constrained background.
However, directly assigning the low rank assumption to the log-space is difficult. We turn to approximate the data log-likelihood of sparse additive model by a quadratic lower-bound based on the
double majorization bound in [6], so that the costly log-sum-exponential computation, i.e., the first
problem of SAGE, is avoided. We then formulate the proposed SAM-LRB model and derive its learning algorithm. The main contributions of this paper can be summarized as four-fold:
- Propose to use a low rank background to extend the equality-constrained setting in SAGE.
- Approximate the data log-likelihood of the sparse additive model by a quadratic lower-bound based on the double majorization bound in [6], so that the costly log-sum-exponential computation is avoided.
- Formulate the constrained optimization problem via Lagrangian relaxation, leading to a form exactly the same as in Robust PCA [28]. Consequently, SAM-LRB can be efficiently learned by employing the accelerated proximal gradient algorithm for Robust PCA [20].
- Extend SAM-LRB to supervised classification, unsupervised topic modeling, and multifaceted modeling; conduct experimental comparisons on real data to validate SAM-LRB.
2 Supervised Sparse Additive Model with Low Rank Background
2.1 Supervised Sparse Additive Model
Same as in SAGE [10], the core idea of our model is that the lexical distribution in log-space comes
from adding the background distribution with additional vectors. Particularly, we are given D documents over M words. For each document d ∈ [1, D], let y_d ∈ [1, K] represent the class label in the current supervised scenario, c_d ∈ R_+^M denote the vector of term counts, and C_d = Σ_w c_{dw} be the total term count. We assume each class k ∈ [1, K] has two vectors
b_k, s_k ∈ R^M, denoting the background and additive distributions in log-space, respectively. Then
the generative distribution for each word w in a document d with label y_d is of a soft-max form:

$$p(w \mid y_d) = p(w \mid y_d, b_{y_d}, s_{y_d}) = \frac{\exp(b_{y_d w} + s_{y_d w})}{\sum_{i=1}^{M} \exp(b_{y_d i} + s_{y_d i})}. \quad (1)$$
Given Θ = {B, S} with B = [b_1, ..., b_K] and S = [s_1, ..., s_K], the log-likelihood of data X is:

$$L = \log p(\mathcal{X} \mid \Theta) = \sum_{k=1}^{K} \sum_{d: y_d = k} L(d, k), \qquad L(d, k) = c_d^\top (b_k + s_k) - C_d \log \sum_{i=1}^{M} \exp(b_{ki} + s_{ki}). \quad (2)$$

Similarly, a testing document d is classified into class ŷ(d) according to ŷ(d) = arg max_k L(d, k).
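As a concrete illustration of Eqs. (1)-(2), the following is a minimal numpy sketch of ours (the toy B, S, and counts are illustrative, not from the paper) that evaluates L(d, k) with a numerically stable log-sum-exp and classifies by ŷ(d) = arg max_k L(d, k):

```python
import numpy as np

def class_loglik(c_d, b_k, s_k):
    # L(d, k) of Eq. (2): c_d^T (b_k + s_k) - C_d * log sum_i exp(b_ki + s_ki)
    theta = b_k + s_k
    m = theta.max()
    lse = m + np.log(np.exp(theta - m).sum())   # stable log-sum-exp
    return float(c_d @ theta - c_d.sum() * lse)

def classify(c_d, B, S):
    # yhat(d) = argmax_k L(d, k); B and S are M x K log-space parameter matrices
    return int(np.argmax([class_loglik(c_d, B[:, k], S[:, k]) for k in range(B.shape[1])]))

M, K = 5, 2
B = np.zeros((M, K))      # flat background shared by both classes
S = np.zeros((M, K))
S[0, 0] = 2.0             # class 0 boosts word 0
S[1, 1] = 2.0             # class 1 boosts word 1
doc = np.array([10.0, 0.0, 1.0, 1.0, 1.0])   # counts dominated by word 0
```

With a flat log-space distribution (B = S = 0), L(d, k) reduces to −C_d log M, the uniform multinomial log-likelihood.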
In SAGE [10], the authors further assumed that the background vectors across all classes are the
same, i.e., b_k = b for all k, and each additive vector s_k is sparse. Although intuitive, the background
equality assumption may be too strong for real applications. For instance, to express a same/similar
meaning, different classes of documents may choose to use different terms from a tuple of synonyms.
In this case, SAGE would tend to include these terms as the sparse additive part, instead of as the
background. Taking Fig. 1 as an illustrative example, the log-space distribution (left) is the sum
of the low-rank background B (middle) and the sparse S (right). Applying SAGE to this type of
data, the equality constrained background B would fail to capture the low-rank structure, and/or the
additive part S would be not sparse, so that there may be risks of over-fitting or under-fitting.
Moreover, since there exists sum-of-exponential terms in Eq. (2) and thus also in its derivatives, the
computing cost becomes huge when the vocabulary size M is large. As a result, although performing
well in [10, 14, 25], SAGE might still suffer from problems of over-constrain and inefficiency.
Figure 1: Low rank background. Left to right illustrates the log-space distribution, background B, and sparse S, respectively. Rows index terms, and columns index classes.
Figure 2: Lower-bound's optimization. Left to right shows the trajectory of the lower-bound, α, and ξ, respectively.
2.2 Supervised Sparse Additive Model with Low Rank Background
Motivated to avoid the inefficient computing due to sum-of-exp, we adopt the double majorization
lower-bound of L [6], so that it is well approximated and quadratic w.r.t. B and S. Further based on
this lower-bound, we proceed to assume the background B across classes is low-rank, in contrast to
the equality constraint in SAGE. An optimization algorithm is proposed based on proximal gradient.
2.2.1 Double Majorization Quadratic Lower Bound
In the literature, there have been several existing efforts on efficiently computing the sum-of-exp term involved in soft-max [5, 15, 6]. For instance, based on the convexity of the logarithm, one can obtain the bound

$$-\log \sum_{i} \exp(x_i) \;\ge\; -\beta \sum_{i} \exp(x_i) + \log \beta + 1 \quad \text{for any } \beta \in \mathbb{R}_+,$$

namely the lb-log-cvx bound. Moreover, via upper-bounding the Hessian matrix, one can obtain the following local quadratic approximation for any φ_i ∈ R, shortly named lb-quad-loc:

$$-\log \sum_{i=1}^{M} \exp(x_i) \;\ge\; \frac{1}{2}\Big[\frac{1}{M}\Big(\sum_{i} x_i - \sum_{i} \varphi_i\Big)^2 - \sum_{i} (x_i - \varphi_i)^2\Big] - \frac{\sum_{i} (x_i - \varphi_i)\exp(\varphi_i)}{\sum_{i} \exp(\varphi_i)} - \log \sum_{i} \exp(\varphi_i).$$
In [6], Bouchard proposed the following quadratic lower-bound by double majorization
(lb-quad-dm) and demonstrated its better approximation compared with the previous two:
$$-\log \sum_{i=1}^{M} \exp(x_i) \;\ge\; -\alpha - \frac{1}{2}\sum_{i=1}^{M}\Big\{ x_i - \alpha - \xi_i + f(\xi_i)\big[(x_i - \alpha)^2 - \xi_i^2\big] + 2\log\big[\exp(\xi_i) + 1\big] \Big\}, \quad (3)$$

with α ∈ R and ξ ∈ R_+^M being auxiliary (variational) variables, and f(ξ) = \frac{1}{2\xi} \cdot \frac{\exp(\xi) - 1}{\exp(\xi) + 1}. This bound is closely related to the bound proposed by Jaakkola and Jordan [6].
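The bound in Eq. (3) holds for every α ∈ R and ξ ∈ R_+^M, which is easy to check numerically; a short sketch of ours with numpy:

```python
import numpy as np

def f_dm(xi):
    # f(xi) = 1/(2 xi) * (exp(xi) - 1) / (exp(xi) + 1), as defined under Eq. (3)
    return (np.exp(xi) - 1.0) / (2.0 * xi * (np.exp(xi) + 1.0))

def lb_quad_dm(x, alpha, xi):
    # Right-hand side of Eq. (3): a quadratic lower bound on -log sum_i exp(x_i)
    inner = (x - alpha - xi
             + f_dm(xi) * ((x - alpha) ** 2 - xi ** 2)
             + 2.0 * np.log(np.exp(xi) + 1.0))
    return float(-alpha - 0.5 * inner.sum())

# The bound holds for arbitrary (alpha, xi): check on random draws.
rng = np.random.default_rng(0)
for _ in range(100):
    x = rng.normal(size=10)
    alpha = rng.normal()
    xi = np.abs(rng.normal(size=10)) + 1e-3
    assert lb_quad_dm(x, alpha, xi) <= -np.log(np.exp(x).sum()) + 1e-9
```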
Employing Eq. (3), we obtain a lower-bound L_lb ≤ L on the data log-likelihood in Eq. (2):

$$L_{lb} = \sum_{k=1}^{K} \Big[ -(b_k + s_k)^\top A_k (b_k + s_k) - \beta_k^\top (b_k + s_k) - \gamma_k \Big],$$

$$\text{with}\quad \gamma_k = \bar{C}_k \Big( \alpha_k - \frac{1}{2}\sum_{i=1}^{M}\big[ \alpha_k + \xi_{ki} - f(\xi_{ki})(\alpha_k^2 - \xi_{ki}^2) - 2\log(\exp(\xi_{ki}) + 1) \big] \Big),$$

$$A_k = \frac{\bar{C}_k}{2}\,\mathrm{diag}\big[f(\xi_k)\big], \qquad \beta_k = \bar{C}_k\Big(\frac{1}{2} - \alpha_k f(\xi_k)\Big) - \sum_{d: y_d = k} c_d, \qquad \bar{C}_k = \sum_{d: y_d = k} C_d. \quad (4)$$
For each class k, the two variational variables, α_k ∈ R and ξ_k ∈ R_+^M, can be updated iteratively as below for a better approximated lower-bound. Therein, abs(·) denotes the absolute value operator.

$$\alpha_k = \frac{1}{\sum_{i=1}^{M} f(\xi_{ki})}\Big[\frac{M}{2} - 1 + \sum_{i=1}^{M} (b_{ki} + s_{ki}) f(\xi_{ki})\Big], \qquad \xi_k = \mathrm{abs}(b_k + s_k - \alpha_k). \quad (5)$$
One example of the trajectories during optimizing this lower-bound is illustrated in Fig. 2. Particularly, the left panel shows that the lower-bound converges quickly to the ground truth, usually within 5 rounds in our experience. The values of the three lower-bounds with randomly sampled variational variables are also sorted and plotted. One can find that lb-quad-dm approximates better or comparably well even with a random initialization. Please see [6] for more comparisons.
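The alternating updates of Eq. (5) can be reproduced for a single logit vector x (here playing the role of b_k + s_k, with unit counts); a sketch of ours showing the bound tightening monotonically toward −log Σ_i exp(x_i):

```python
import numpy as np

def f_dm(xi):
    # f(xi) = 1/(2 xi) * (exp(xi) - 1) / (exp(xi) + 1), from Eq. (3)
    return (np.exp(xi) - 1.0) / (2.0 * xi * (np.exp(xi) + 1.0))

def bound(x, alpha, xi):
    # Right-hand side of Eq. (3)
    inner = (x - alpha - xi + f_dm(xi) * ((x - alpha) ** 2 - xi ** 2)
             + 2.0 * np.log(np.exp(xi) + 1.0))
    return float(-alpha - 0.5 * inner.sum())

def tighten(x, n_iter=5):
    # Coordinate ascent on (alpha, xi), mirroring Eq. (5):
    #   alpha <- (M/2 - 1 + sum_i f(xi_i) x_i) / sum_i f(xi_i);   xi <- |x - alpha|
    M, xi, vals = len(x), np.ones(len(x)), []
    for _ in range(n_iter):
        fx = f_dm(xi)
        alpha = (M / 2.0 - 1.0 + (fx * x).sum()) / fx.sum()
        xi = np.abs(x - alpha) + 1e-12
        vals.append(bound(x, alpha, xi))
    return vals
```

Each pair of updates maximizes the bound in one variable given the other, so the recorded values never decrease and always stay below the true value −log Σ_i exp(x_i).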
2.2.2 Supervised SAM-LRB Model and Optimization by Proximal Gradient
Rather than optimizing the data log-likelihood in Eq. (2) like in SAGE, we turn to optimize its
lower-bound in Eq. (4), which is convenient for further assigning the low-rank constraint on B and
the sparsity constraint on S. Concretely, our target is formulated as a constrained optimization task:
$$\max_{B, S} \; L_{lb}, \qquad \text{with } L_{lb} \text{ specified in Eq. (4)},$$
$$\text{s.t.}\quad B = [b_1, \ldots, b_K] \text{ is low rank}, \qquad S = [s_1, \ldots, s_K] \text{ is sparse}. \quad (6)$$
Concerning the two constraints, we call the above the supervised Sparse Additive Model with Low-Rank Background, or supervised SAM-LRB for short. Although both of the two assumptions can
be tackled via formulating a fully generative model, assigning appropriate priors, and delivering
inference in a Bayesian manner similar to [8], we choose the constrained optimization form for not only a clearer expression but also a simpler and more efficient algorithm.
In the literature, there have been several efforts considering both low rank and sparse constraints
similar to Eq. (6), most of which make use of proximal gradient methods [2, 7]. Papers [20, 28] studied the
problems under the name of Robust Principal Component Analysis (RPCA), aiming to decouple an
observed matrix as the sum of a low rank matrix and a sparse matrix. Closely related to RPCA,
our scenario in Eq. (6) can be regarded as a weighted RPCA formulation, and the weights are
controlled by variational variables. In [24], the authors proposed an efficient algorithm for problems
that constrain a matrix to be both low rank and sparse simultaneously.
Following these existing works, we adopt the nuclear norm to implement the low rank constraint, and the ℓ1-norm for the sparsity constraint. Setting the partial derivative of L_lb w.r.t. θ_k = (b_k + s_k) to zero, the maximum of L_lb is achieved at θ*_k = −(1/2) A_k^{−1} β_k. Since A_k is positive definite and diagonal, the optimal solution θ*_k is well-posed and can be efficiently computed. Simultaneously considering the equality θ_k = (b_k + s_k), the low rank of B and the sparsity of S, one can rewrite Eq. (6) in the following Lagrangian form:
$$\min_{B, S} \; \frac{1}{2}\,\|\Theta^* - B - S\|_F^2 + \lambda\big(\|B\|_* + \gamma\,|S|_1\big), \qquad \text{with } \Theta^* = [\theta_1^*, \ldots, \theta_K^*], \quad (7)$$
where ‖·‖_F, ‖·‖_* and |·|_1 denote the Frobenius norm, nuclear norm and ℓ1-norm, respectively. The Frobenius norm term concerns the accuracy of decoupling Θ* into B and S. The Lagrange multipliers λ and γ control the strengths of the low rank constraint and the sparsity constraint, respectively.
Interestingly, Eq. (7) is exactly the same as the objective of RPCA [20, 28]. Paper [20] proposed an
algorithm for RPCA based on accelerated proximal gradient (APG-RPCA), showing its advantages
of efficiency and stability over (plain) proximal gradient. We choose it, i.e., Algorithm 2 in [20], for
seeking solutions to Eq. (7). The computations involved in APG-RPCA include SVD decomposition
and absolute value thresholding, and interested readers are referred to [20] for more details. The
augmented Lagrangian and alternating direction methods [9, 29] could be considered as alternatives.
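The two proximal operators behind Eq. (7) are singular value thresholding for the nuclear norm and elementwise soft-thresholding for the ℓ1 term. The following is our own simplified alternating proximal sketch, not the accelerated APG-RPCA of [20]; the toy low-rank-plus-sparse data is illustrative:

```python
import numpy as np

def soft_threshold(X, t):
    # prox of t * |.|_1: shrink each entry toward zero by t
    return np.sign(X) * np.maximum(np.abs(X) - t, 0.0)

def svt(X, t):
    # prox of t * ||.||_*: soft-threshold the singular values
    U, sv, Vt = np.linalg.svd(X, full_matrices=False)
    return (U * np.maximum(sv - t, 0.0)) @ Vt

# Alternately fit Theta ~ B + S, with B pushed low rank and S pushed sparse.
rng = np.random.default_rng(0)
L = rng.standard_normal((20, 5)) @ rng.standard_normal((5, 8))   # rank <= 5
E = np.zeros((20, 8)); E[3, 2], E[7, 7] = 5.0, -4.0              # sparse spikes
Theta = L + E
B, S = np.zeros_like(Theta), np.zeros_like(Theta)
for _ in range(50):
    B = svt(Theta - S, 0.5)              # lambda = 0.5
    S = soft_threshold(Theta - B, 0.5)   # lambda * gamma = 0.5
```

By construction of the last soft-thresholding step, every entry of the residual Theta − B − S is at most the threshold in magnitude.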
Data: term counts and labels {c_d, C_d, y_d}_{d=1}^D of D docs and K classes, sparse threshold γ = 0.05
Result: log-space distributions: low-rank B and sparse S
Initialization: randomly initialize parameters {B, S} and variational variables {α_k, ξ_k}_k;
while not converged do
    if optimizing variational variables then iteratively update {α_k, ξ_k}_k according to Eq. (5);
    for k = 1, ..., K do calculate A_k and β_k by Eq. (4), and θ*_k = −(1/2) A_k^{−1} β_k;
    B, S ← APG-RPCA(Θ*, γ) by Algorithm 2 in [20], with Θ* = [θ*_1, ..., θ*_K];
end
Algorithm 1: Supervised SAM-LRB learning algorithm
Consequently, the supervised SAM-LRB algorithm is specified in Algorithm 1. Therein, one can choose either to fix or to update the variational variables {α_k, ξ_k}_k. If they are fixed, Algorithm 1 has only one outer iteration and no convergence check is needed. Compared with the supervised SAGE learning algorithm in Sec. 3 of [10], our supervised SAM-LRB algorithm not only avoids computing sums of exponentials, saving computation, but also is optimized simply and efficiently by proximal gradient instead of Newton updates as in SAGE. Moreover, by adding a Laplacian-Exponential prior on S for sparseness, SAGE updates the conjugate posteriors and needs a "warm start" technique to avoid being trapped in early stages under inappropriate initializations; in contrast, SAM-LRB does not have this risk. Additionally, since the evolution from SAGE to SAM-LRB is two-fold, i.e., the low rank background assumption and the convex relaxation, we find that adopting the convex relaxation alone also helps SAGE during optimization.
3 Extensions
Analogous to [10], our SAM-LRB formulation can also be extended to the unsupervised topic modeling scenario with latent variables, and to the scenario with multifaceted class labels.
3.1 Extension 1: Unsupervised Latent Variable Model
We consider how to incorporate SAM-LRB in a latent variable model of unsupervised text modelling. Following topic models, there is one latent vector of topic proportions per document and one latent discrete variable per term. That is, each document d is endowed with a vector of topic proportions θ_d ∼ Dirichlet(η), and each term w in this document is associated with a latent topic label z_w^{(d)} ∼ Multinomial(θ_d). Then the probability distribution for w is

$$p(w \mid z_w^{(d)}, B, S) \;\propto\; \exp\big( b_{z_w^{(d)} w} + s_{z_w^{(d)} w} \big), \quad (8)$$

which only replaces the known class label y_d in Eq. (1) with the unknown topic label z_w^{(d)}.
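The generative process around Eq. (8) can be sketched directly (our own toy sampler; the Dirichlet hyperparameter symbol and the sizes are illustrative assumptions):

```python
import numpy as np

def sample_doc(n_words, B, S, eta, rng):
    # theta_d ~ Dirichlet(eta); z ~ Multinomial(theta_d); w ~ softmax(b_z + s_z) per Eq. (8)
    M, K = B.shape
    theta_d = rng.dirichlet(np.full(K, eta))
    counts = np.zeros(M)
    for _ in range(n_words):
        z = rng.choice(K, p=theta_d)
        logits = B[:, z] + S[:, z]
        p = np.exp(logits - logits.max())
        counts[rng.choice(M, p=p / p.sum())] += 1
    return counts

rng = np.random.default_rng(0)
B = 0.1 * rng.normal(size=(6, 3))        # near-flat background over 6 words, 3 topics
S = np.zeros((6, 3)); S[0, 0] = 3.0      # topic 0 strongly boosts word 0
doc = sample_doc(50, B, S, eta=1.0, rng=rng)
```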
We can combine the mean field variational inference for latent Dirichlet allocation (LDA) [4] with
the lower-bound treatment in Eq. (4), leading to the following unsupervised lower-bound
$$L_{lb} = \sum_{k=1}^{K} \big[ -(b_k + s_k)^\top A_k (b_k + s_k) - \beta_k^\top (b_k + s_k) - \gamma_k \big] + \sum_{d} \big[\langle \log p(\theta_d \mid \eta) \rangle - \langle \log Q(\theta_d) \rangle\big] + \sum_{d}\sum_{w} \big[\langle \log p(z_w^{(d)} \mid \theta_d) \rangle - \langle \log Q(z_w^{(d)}) \rangle\big],$$

$$\text{with}\quad \gamma_k = \bar{C}_k \Big( \alpha_k - \frac{1}{2}\sum_{i=1}^{M}\big[ \alpha_k + \xi_{ki} - f(\xi_{ki})(\alpha_k^2 - \xi_{ki}^2) - 2\log(\exp(\xi_{ki}) + 1) \big] \Big),$$

$$A_k = \frac{\bar{C}_k}{2}\,\mathrm{diag}\big[f(\xi_k)\big], \qquad \beta_k = \bar{C}_k\Big(\frac{1}{2} - \alpha_k f(\xi_k)\Big) - \bar{c}_k, \quad (9)$$

where each w-th item of c̄_k is c̄_{kw} = Σ_d Q(k|d, w) c_{dw}, i.e., the expected count of term w in topic k, and C̄_k = Σ_w c̄_{kw} is the topic's expected total count over all words.
This unsupervised SAM-LRB model formulates a topic model with a low rank background and sparse deviations, which is learned via EM iterations. The E-step updating the posteriors Q(θ_d) and Q(z_w^{(d)}) is identical to standard LDA. Once {A_k, β_k} are computed as above, the M-step updating {B, S} and the variational variables {α_k, ξ_k}_k remains the same as in the supervised case in Algorithm 1.
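The expected-count statistics c̄_k feeding Eq. (9) are a single contraction over documents; a numpy sketch of ours (shapes are illustrative):

```python
import numpy as np

def expected_counts(C, Q):
    # cbar_{kw} = sum_d Q(k | d, w) * c_{dw}
    # C: (D, M) observed term counts; Q: (D, M, K) per-token topic posteriors
    return np.einsum('dm,dmk->km', C, Q)

rng = np.random.default_rng(0)
D, M, K = 4, 7, 3
C = rng.integers(0, 5, size=(D, M)).astype(float)
Q = rng.dirichlet(np.ones(K), size=(D, M))   # each (d, w) row sums to 1 over topics
cbar = expected_counts(C, Q)                 # shape (K, M)
```

Because the posteriors sum to one over topics, the topic-wise expected counts c̄_k always add back up to the observed corpus counts.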
3.2 Extension 2: Multifaceted Modelling
We consider how SAM-LRB can be used to combine multiple facets (multi-dimensional class labels), i.e., combining per-word latent topics and document labels and pursuing a structural view of labels and topics. In the literature, multifaceted generative models have been studied in [1, 21, 23], and they incorporated latent switching variables that determine whether each term is generated from a topic or from a document label. Topic-label interactions can also be included to capture the distributions of words at the intersections. However, in this kind of model, the number of parameters becomes very large for large vocabulary sizes, many topics, and many labels. In [10], SAGE needs no switching variables and shows the advantage of model sparsity in multifaceted modeling. More recently, paper [14] employs SAGE and discovers meaningful geographical topics in the twitter stream.
Applying SAM-LRB to the multifaceted scenario, we still assume the multifaceted variations are composed of a low rank background and sparse deviations. Particularly, for each topic k ∈ [1, K], we have the topic background b_k^{(T)} and sparse deviation s_k^{(T)}; for each label j ∈ [1, J], we have the label background b_j^{(L)} and sparse deviation s_j^{(L)}; for each topic-label interaction pair (k, j), we have only the sparse deviation s_{kj}^{(I)}. Again, the background distributions B^{(T)} = [b_1^{(T)}, ..., b_K^{(T)}] and B^{(L)} = [b_1^{(L)}, ..., b_J^{(L)}] are assumed to be of low rank, capturing each single view's distribution similarity. Then for a single term w given the latent topic z_w^{(d)} and the class label y_d, its generative probability is obtained by summing the background and sparse components together:

$$p(w \mid z_w^{(d)}, y_d, \Theta) \;\propto\; \exp\big( b^{(T)}_{z_w^{(d)} w} + s^{(T)}_{z_w^{(d)} w} + b^{(L)}_{y_d w} + s^{(L)}_{y_d w} + s^{(I)}_{z_w^{(d)} y_d w} \big), \quad (10)$$
with parameters Θ = {B^{(T)}, S^{(T)}, B^{(L)}, S^{(L)}, S^{(I)}}. The log-likelihood's lower-bound involves the sum over all topic-label pairs:

$$L_{lb} = \sum_{k=1}^{K}\sum_{j=1}^{J} \big[ -\theta_{kj}^\top A_{kj}\, \theta_{kj} - \beta_{kj}^\top \theta_{kj} - \gamma_{kj} \big] + \sum_{d} \big[\langle \log p(\theta_d \mid \eta) \rangle - \langle \log Q(\theta_d) \rangle\big] + \sum_{d}\sum_{w} \big[\langle \log p(z_w^{(d)} \mid \theta_d) \rangle - \langle \log Q(z_w^{(d)}) \rangle\big], \quad (11)$$

$$\text{with}\quad \theta_{kj} \triangleq b_k^{(T)} + s_k^{(T)} + b_j^{(L)} + s_j^{(L)} + s_{kj}^{(I)}.$$

In the quadratic form, the values of A_{kj}, β_{kj} and γ_{kj} are a trivial combination of Eq. (4) and Eq. (9), i.e., weighted by both the observed labels and the posteriors of the latent topics. Details are omitted here due to space limits. The latter two sums remain the same as in Eq. (9) and standard LDA.
During the iterative estimation, every iteration includes the following steps:
- Estimate the posteriors Q(z_w^{(d)}) and Q(θ_d);
- With (B^{(T)}, S^{(T)}, S^{(I)}) fixed, solve a quadratic program over Θ*^{(L)}, which approximates the sum of B^{(L)} and S^{(L)}; put Θ*^{(L)} into Algorithm 1 to update B^{(L)} and S^{(L)};
- With (B^{(L)}, S^{(L)}, S^{(I)}) fixed, solve a quadratic program over Θ*^{(T)}, which approximates the sum of B^{(T)} and S^{(T)}; put Θ*^{(T)} into Algorithm 1 to update B^{(T)} and S^{(T)};
- With (B^{(T)}, S^{(T)}, B^{(L)}, S^{(L)}) fixed, update S^{(I)} by proximal gradient.
4 Experimental Results
In order to test SAM-LRB in different scenarios, this section considers experiments under three
tasks, namely supervised document classification, unsupervised topic modeling, and multi-faceted
modeling and classification, respectively.
4.1 Document Classification
We first test our SAM-LRB model in the supervised document modeling scenario and evaluate
the classification accuracy. Particularly, the supervised SAM-LRB is compared with the Dirichlet-Multinomial model and SAGE. The precision of the Dirichlet prior in the Dirichlet-Multinomial model is updated by Newton optimization [22]. The nonparametric Jeffreys prior [12] is adopted in SAGE as a parameter-free sparse prior. Concerning the variational variables {α_i, ξ_i}_i in the quadratic
lower-bound of SAM-LRB, both cases of fixing them and updating them are considered.
We consider the benchmark 20Newsgroups data1, and aim to classify unlabelled newsgroup postings into 20 newsgroups. No stopword filtering is performed, and we randomly pick a vocabulary of 55,000 terms. To test robustness, we vary the proportion of training data. After 5 independent runs by each algorithm, the classification accuracies on testing data are plotted in Fig. 3 as box-plots, where the horizontal axis varies the training data proportion.
Figure 3: Classification accuracy on 20Newsgroups data. The proportion of training data varies in {10%, 30%, 50%}.
1 Following [10], we use the training/testing sets from http://people.csail.mit.edu/jrennie/20Newsgroups/
One can find that SAGE outperforms the Dirichlet-Multinomial model, especially with limited training data, which is consistent with the observations in [10]. Moreover, with random and fixed variational variables, the SAM-LRB model performs even better or at least comparably well. If the variational variables are updated to tighten the lower-bound, the performance of SAM-LRB is substantially the best, with a 10%-20% relative improvement over SAGE. Table 1 also reports the average computing time of SAGE and SAM-LRB. We can see that, by avoiding the log-sum-exp calculation, SAM-LRB (fixed) runs more than 7 times faster than SAGE, while SAM-LRB (optimized) pays extra for updating the variational variables.
Table 1: Comparison on average time costs per iteration (in minutes).
method                SAGE    SAM-LRB (fixed)    SAM-LRB (optimized)
time cost (minutes)   3.8     0.6                3.3
4.2 Unsupervised Topic Modeling
We now apply our unsupervised SAM-LRB model to the benchmark NIPS data2 . Following the
same preprocessing and evaluation as in [10, 26], we have a training set of 1986 documents with
237,691 terms, and a testing set of 498 documents with 57,427 terms.
For consistency, SAM-LRB is still compared with Dirichlet-Multinomial model (variational LDA
model with symmetric Dirichlet prior) and SAGE. For all these unsupervised models, the number
of latent topics is varied from 10 to 25 and then to 50. After unsupervised training, the performance
is evaluated by perplexity, the smaller the better. The performances of 5 independent runs by each
method are illustrated in Fig. 4, again in terms of box-plots.
Figure 4: Perplexity results on NIPS data.
As shown, SAGE performs worse than LDA when the number of topics is small, perhaps mainly due to its strong equality assumption on the background, whereas SAM-LRB performs better than both LDA and SAGE in most cases. One exception happens when the topic number equals 50, where SAM-LRB (fixed) performs slightly worse than SAGE, mainly caused by inappropriate fixed values of the variational variables. If they are updated instead, SAM-LRB (optimized) again performs the best.
4.3 Multifaceted Modeling
We then proceed to test multifaceted modeling with SAM-LRB. As in [10], we choose a publicly available dataset of political blogs describing the 2008 U.S. presidential election3 [11]. Of the six political blogs in total, three are from the right and three are from the left. There are 20,827 documents and a vocabulary size of 8284. Using four blogs for training, our task is to predict the ideological perspective of the two unlabeled blogs.
On this task, Ahmed and Xing [1] used a multiview LDA model to achieve accuracy within 65.0%-69.1%, depending on the topic number setting. A support vector machine provides a comparable accuracy of 69%, while supervised LDA [3] performs undesirably on this task. In [10], SAGE is run 5 times for each of multiple topic numbers, and achieves its best median
2 http://www.cs.nyu.edu/~roweis/data.html
3 http://sailing.cs.cmu.edu/socialmedia/blog2008.html
result of 69.6% at K = 30. Using SAM-LRB (optimized), the median results over 5 runs for each topic number are shown in Table 2. Interestingly, SAM-LRB provides a similarly state-of-the-art result, while achieving it at K = 20. The different preferences on topic numbers between SAGE and SAM-LRB may mainly come from their different assumptions on the background lexical distributions.
Table 2: Classification accuracy on political blogs data by SAM-LRB (optimized).

    # topic (K)                           10     20     30     40     50
    accuracy (%), median out of 5 runs    67.3   69.8   69.1   68.3   68.1
5 Concluding Remarks
This paper studies the sparse additive model for document modeling. By employing the double majorization technique, we approximate the log-sum-exp term involved in the data log-likelihood with a quadratic lower bound. With the help of this lower bound, we are able to conveniently relax the equality constraint on the background log-space distribution of SAGE [10] into a low-rank constraint, leading to our SAM-LRB model. Then, after the constrained optimization is transformed into the form of RPCA's objective function, an algorithm based on the accelerated proximal gradient method is adopted for learning SAM-LRB. The model specification and learning algorithm are simple yet effective. Besides the supervised version, extensions of SAM-LRB to unsupervised and multifaceted scenarios are investigated. Experimental results demonstrate the effectiveness and efficiency of SAM-LRB compared with Dirichlet-Multinomial and SAGE.
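To illustrate the bounding step: one standard double-majorization ingredient is Bohning's fixed-curvature quadratic upper bound on the log-sum-exp function; upper-bounding log-sum-exp in the likelihood is what yields a quadratic lower bound on the log-likelihood. The sketch below shows the generic bound, not the paper's exact derivation:

```python
import numpy as np

def log_sum_exp(x):
    m = x.max()
    return m + np.log(np.exp(x - m).sum())

def bohning_quadratic_bound(x, psi):
    """Quadratic upper bound on log_sum_exp(x), tight at the anchor psi.

    Uses Bohning's curvature matrix A = (I - 11^T/K)/2, which dominates
    the log-sum-exp Hessian diag(p) - p p^T for any probability vector p.
    """
    K = x.shape[0]
    A = 0.5 * (np.eye(K) - np.ones((K, K)) / K)
    grad = np.exp(psi - log_sum_exp(psi))  # softmax at the anchor point
    d = x - psi
    return log_sum_exp(psi) + d @ grad + 0.5 * d @ A @ d
```

Because the surrogate is quadratic with a fixed curvature matrix, maximizing the bounded objective reduces to repeated closed-form quadratic updates instead of log-sum-exp evaluations.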
Several directions deserve investigation in the future. First, the accelerated proximal gradient updates need to compute SVD decompositions, which can be costly for very large-scale data. In this case, more efficient optimization of the nuclear norm and the l1-norm is desirable, with the semidefinite relaxation technique in [16] being one possible choice. Second, this paper uses a constrained optimization formulation, while a Bayesian treatment that adds conjugate priors to complete the generative model, similar to [8], is an alternative choice. Moreover, we may also adopt nonconjugate priors and employ nonconjugate variational inference [27]. Last but not least, discriminative learning with large margins [18, 30] might also be employed for robust classification. Since the nonzero elements of the sparse S in SAM-LRB can be regarded as selected features, one may include them among the discriminative features, rather than using only the topical distributions [3]. Additionally, the augmented Lagrangian and alternating direction methods [9, 29] could be considered as alternatives to the proximal gradient optimization.
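The SVD cost mentioned above comes from the proximal step for the nuclear norm: each APG iteration applies the singular-value thresholding operator. A generic sketch of that operator (the standard form, not the paper's exact routine):

```python
import numpy as np

def singular_value_threshold(X, tau):
    """Proximal operator of tau * nuclear norm: shrink singular values by tau."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ (np.maximum(s - tau, 0.0)[:, None] * Vt)
```

The full SVD inside this operator is the scalability bottleneck the text refers to; it also shows why the output is exactly low-rank, since small singular values are clipped to zero.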
References
[1] A. Ahmed and E. P. Xing. Staying informed: supervised and semi-supervised multi-view topical analysis of ideological perspective. In Proc. EMNLP, pages 1140-1150, 2010.
[2] A. Beck and M. Teboulle. A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM Journal on Imaging Sciences, 2(1):183-202, 2009.
[3] D. Blei and J. McAuliffe. Supervised topic models. In Advances in NIPS, pages 121-128, 2008.
[4] D. M. Blei, A. Y. Ng, and M. I. Jordan. Latent Dirichlet allocation. JMLR, 3:993-1022, 2003.
[5] D. Bohning. Multinomial logistic regression algorithm. Annals of Inst. of Stat. Math., 44:197-200, 1992.
[6] G. Bouchard. Efficient bounds for the softmax function, applications to inference in hybrid models. In Workshop for Approximate Bayesian Inference in Continuous/Hybrid Systems at NIPS'07, 2007.
[7] X. Chen, Q. Lin, S. Kim, J. G. Carbonell, and E. P. Xing. Smoothing proximal gradient method for general structured sparse regression. The Annals of Applied Statistics, 6(2):719-752, 2012.
[8] X. Ding, L. He, and L. Carin. Bayesian robust principal component analysis. IEEE Trans. Image Processing, 20(12):3419-3430, 2011.
[9] J. Eckstein. Augmented Lagrangian and alternating direction methods for convex optimization: A tutorial and some illustrative computational results. Technical report, RUTCOR Research Report RRR 32-2012, 2012.
[10] J. Eisenstein, A. Ahmed, and E. P. Xing. Sparse additive generative models of text. In Proc. ICML, 2011.
[11] J. Eisenstein and E. P. Xing. The CMU 2008 political blog corpus. Technical report, Carnegie Mellon University, School of Computer Science, Machine Learning Department, 2010.
[12] M. A. T. Figueiredo. Adaptive sparseness using Jeffreys prior. In Advances in NIPS, pages 679-704, 2002.
[13] M. R. Gormley, M. Dredze, B. Van Durme, and J. Eisner. Shared components topic models. In Proc. NAACL-HLT, pages 783-792, 2012.
[14] L. Hong, A. Ahmed, S. Gurumurthy, A. J. Smola, and K. Tsioutsiouliklis. Discovering geographical topics in the twitter stream. In Proc. 12th WWW, pages 769-778, 2012.
[15] T. Jaakkola and M. I. Jordan. A variational approach to Bayesian logistic regression problems and their extensions. In Proc. AISTATS, 1996.
[16] M. Jaggi and M. Sulovsky. A simple algorithm for nuclear norm regularized problems. In Proc. ICML, pages 471-478, 2010.
[17] Y. Jiang and A. Saxena. Discovering different types of topics: Factored topics models. In Proc. IJCAI, 2013.
[18] A. Joulin, F. Bach, and J. Ponce. Efficient optimization for discriminative latent class models. In Advances in NIPS, pages 1045-1053, 2010.
[19] J. D. Lafferty and M. D. Blei. Correlated topic models. In Advances in NIPS, pages 147-155, 2006.
[20] Z. Lin, A. Ganesh, J. Wright, L. Wu, M. Chen, and Y. Ma. Fast convex optimization algorithms for exact recovery of a corrupted low-rank matrix. Technical report, UIUC Technical Report UILU-ENG-09-2214, August 2009.
[21] Q. Mei, X. Ling, M. Wondra, H. Su, and C. X. Zhai. Topic sentiment mixture: modeling facets and opinions in weblogs. In Proc. WWW, 2007.
[22] T. P. Minka. Estimating a Dirichlet distribution. Technical report, Massachusetts Institute of Technology, 2003.
[23] M. Paul and R. Girju. A two-dimensional topic-aspect model for discovering multi-faceted topics. In Proc. AAAI, 2010.
[24] E. Richard, P.-A. Savalle, and N. Vayatis. Estimation of simultaneously sparse and low rank matrices. In Proc. ICML, pages 1351-1358, 2012.
[25] Y. S. N. A. Smith and D. A. Smith. Discovering factions in the computational linguistics community. In ACL Workshop on Rediscovering 50 Years of Discoveries, 2012.
[26] C. Wang and D. Blei. Decoupling sparsity and smoothness in the discrete hierarchical Dirichlet process. In Advances in NIPS, pages 1982-1989, 2009.
[27] C. Wang and D. M. Blei. Variational inference in nonconjugate models. To appear in JMLR.
[28] J. Wright, A. Ganesh, S. Rao, Y. Peng, and Y. Ma. Robust principal component analysis: Exact recovery of corrupted low-rank matrices via convex optimization. In Advances in NIPS, pages 2080-2088, 2009.
[29] J. Yang and X. Yuan. Linearized augmented Lagrangian and alternating direction methods for nuclear norm minimization. Math. Comp., 82:301-329, 2013.
[30] J. Zhu, A. Ahmed, and E. P. Xing. MedLDA: maximum margin supervised topic models. JMLR, 13:2237-2278, 2012.
Robert A. Jacobs
Michael I. Jordan
Department of Brain and Cognitive Sciences
Massachusetts Institute of Technology
Cambridge, MA 02139
Abstract
In this paper we present a neural network architecture that discovers a
recursive decomposition of its input space. Based on a generalization of the
modular architecture of Jacobs, Jordan, Nowlan, and Hinton (1991), the
architecture uses competition among networks to recursively split the input
space into nested regions and to learn separate associative mappings within
each region. The learning algorithm is shown to perform gradient ascent
in a log likelihood function that captures the architecture's hierarchical
structure.
1 INTRODUCTION
Neural network learning architectures such as the multilayer perceptron and adaptive radial basis function (RBF) networks are a natural nonlinear generalization
of classical statistical techniques such as linear regression, logistic regression and
additive modeling. Another class of nonlinear algorithms, exemplified by CART
(Breiman, Friedman, Olshen, & Stone, 1984) and MARS (Friedman, 1990), generalizes classical techniques by partitioning the training data into non-overlapping
regions and fitting separate models in each of the regions. These two classes of algorithms extend linear techniques in essentially independent directions, thus it seems
worthwhile to investigate algorithms that incorporate aspects of both approaches
to model estimation. Such algorithms would be related to CART and MARS as
multilayer neural networks are related to linear statistical techniques. In this paper we present a candidate for such an algorithm. The algorithm that we present
partitions its training data in the manner of CART or MARS, but it does so in a
parallel, on-line manner that can be described as the stochastic optimization of an
appropriate cost functional.
Jordan and Jacobs
Why is it sensible to partition the training data and to fit separate models within
each of the partitions? Essentially this approach enhances the flexibility of the
learner and allows the data to influence the choice between local and global representations. For example, if the data suggest a discontinuity in the function being
approximated, then it may be more sensible to fit separate models on both sides of
the discontinuity than to adapt a global model across the discontinuity. Similarly,
if the data suggest a simple functional form in some region, then it may be more
sensible to fit a global model in that region than to approximate the function locally
with a large number of local models. Although global algorithms such as backpropagation and local algorithms such as adaptive RBF networks have some degree of
flexibility in the tradeoff that they realize between global and local representation,
they do not have the flexibility of adaptive partitioning schemes such as CART and
MARS.
In a previous paper we presented a modular neural network architecture in which
a number of "expert networks" compete to learn a set of training data (Jacobs,
Jordan, Nowlan & Hinton, 1991). As a result of the competition, the architecture
adaptively splits the input space into regions, and learns separate associative mappings within each region. The architecture that we discuss here is a generalization
of the earlier work and arises from considering what would be an appropriate internal structure for the expert networks in the competing experts architecture. In our
earlier work, the expert networks were multilayer perceptrons or radial basis function networks. If the arguments in support of data partitioning are valid, however,
then they apply equally well to a region in the input space as they do to the entire input space, and therefore each expert should itself be composed of competing
sub-experts. Thus we are led to consider recursively-defined hierarchies of adaptive
experts.
2 THE ARCHITECTURE
Figure 1 shows two hierarchical levels of the architecture. (We restrict ourselves to
two levels throughout the paper to simplify the exposition; the algorithm that we
develop, however, generalizes readily to trees of arbitrary depth). The architecture
has a number of expert networks that map from the input vector x to output
vectors Yij. There are also a number of gating networks that define the hierarchical
structure of the architecture. There is a gating network for each cluster of expert
networks and a gating network that serves to combine the outputs of the clusters.
The output of the ith cluster is given by

    y_i = \sum_j g_{j|i} y_{ij}    (1)

where g_{j|i} is the activation of the jth output unit of the gating network in the ith cluster. The output of the architecture as a whole is given by

    y = \sum_i g_i y_i    (2)

where g_i is the activation of the ith output unit of the top-level gating network.
Figure 1: Two hierarchical levels of adaptive experts. All of the expert networks
and all of the gating networks have the same input vector.
We assume that the outputs of the gating networks are given by the normalizing softmax function (Bridle, 1989):

    g_i = \frac{e^{s_i}}{\sum_j e^{s_j}}    (3)

and

    g_{j|i} = \frac{e^{s_{j|i}}}{\sum_k e^{s_{k|i}}}    (4)

where s_i and s_{j|i} are the weighted sums arriving at the output units of the corresponding gating networks.
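Equations 1-4 compose into a simple forward pass; the sketch below assumes affine experts and gating networks, and all array shapes and names are illustrative rather than taken from the paper:

```python
import numpy as np

def softmax(s):
    e = np.exp(s - s.max())
    return e / e.sum()

def forward(x, top_gate_W, cluster_gate_W, expert_W):
    """Two-level mixture: top_gate_W is (I, d), cluster_gate_W is (I, J, d),
    expert_W is (I, J, p, d). Returns the blended output y of Eq. 2."""
    g = softmax(top_gate_W @ x)                    # g_i, Eq. 3
    y = np.zeros(expert_W.shape[2])
    for i, gi in enumerate(g):
        g_cond = softmax(cluster_gate_W[i] @ x)    # g_{j|i}, Eq. 4
        # y_i = sum_j g_{j|i} (W_ij x), Eq. 1
        y_i = np.einsum('j,jpd,d->p', g_cond, expert_W[i], x)
        y += gi * y_i                              # Eq. 2
    return y
```

Because each gate is a softmax, the final output is a convex combination of the expert outputs at every level of the tree.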
The gating networks in the architecture are essentially classifiers that are responsible for partitioning the input space. Their choice of partition is based on the ability
of the expert networks to model the input-output functions within their respective regions (as quantified by their posterior probabilities; see below). The nested
arrangement of gating networks in the architecture (cf. Figure 1) yields a nested
partitioning much like that found in CART or MARS. The architecture is a more
general mathematical object than a CART or MARS tree, however, given that the
gating networks have non-binary outputs and given that they may form nonlinear
decision surfaces.
3 THE LEARNING ALGORITHM
We derive a learning algorithm for our architecture by developing a probabilistic
model of a tree-structured estimation problem. The environment is assumed to be
characterized by a finite number of stochastic processes that map input vectors x
into output vectors y*. These processes are partitioned into nested collections of
processes that have commonalities in their input-output parameterizations. Data
are assumed to be generated by the model in the following way. For any given x,
collection i is chosen with probability g_i, and a particular process j is then chosen with conditional probability g_{j|i}. The selected process produces an output vector y^* according to the probability density f(y^* | x; y_{ij}), where y_{ij} is a vector of parameters. The total probability of generating y^* is:

    P(y^* \mid x) = \sum_i g_i \sum_j g_{j|i} f(y^* \mid x; y_{ij})    (5)

where g_i, g_{j|i}, and y_{ij} are unknown nonlinear functions of x.
Treating the probability P(y^* | x) as a likelihood function in the unknown parameters g_i, g_{j|i}, and y_{ij}, we obtain a learning algorithm by using gradient ascent to maximize the log likelihood. Let us assume that the probability density associated with the residual vector (y^* - y_{ij}) is the multivariate normal density, where y_{ij} is the mean of the jth process of the ith cluster (or the (i, j)th expert network) and \Sigma_{ij} is its covariance matrix. Ignoring the constant terms in the normal density, the log likelihood is:

    \ln L = \ln \sum_i g_i \sum_j g_{j|i} |\Sigma_{ij}|^{-1/2} e^{-\frac{1}{2}(y^* - y_{ij})^T \Sigma_{ij}^{-1} (y^* - y_{ij})}    (6)

We define the posterior probability:

    h_i = \frac{g_i \sum_j g_{j|i} |\Sigma_{ij}|^{-1/2} e^{-\frac{1}{2}(y^* - y_{ij})^T \Sigma_{ij}^{-1} (y^* - y_{ij})}}{\sum_i g_i \sum_j g_{j|i} |\Sigma_{ij}|^{-1/2} e^{-\frac{1}{2}(y^* - y_{ij})^T \Sigma_{ij}^{-1} (y^* - y_{ij})}}    (7)

which is the posterior probability that a process in the ith cluster generates a particular target vector y^*. We also define the conditional posterior probability:
    h_{j|i} = \frac{g_{j|i} |\Sigma_{ij}|^{-1/2} e^{-\frac{1}{2}(y^* - y_{ij})^T \Sigma_{ij}^{-1} (y^* - y_{ij})}}{\sum_j g_{j|i} |\Sigma_{ij}|^{-1/2} e^{-\frac{1}{2}(y^* - y_{ij})^T \Sigma_{ij}^{-1} (y^* - y_{ij})}}    (8)

which is the conditional posterior probability that the jth expert in the ith cluster generates a particular target vector y^*. Differentiating Equation 6, and using Equations 3, 4,
7, and 8, we obtain the partial derivative of the log likelihood with respect to the
output of the (i,j)th expert network:

    \frac{\partial \ln L}{\partial y_{ij}} = h_i h_{j|i} (y^* - y_{ij})    (9)
This partial derivative is a supervised error term modulated by the appropriate posterior probabilities. Similarly, the partial derivatives of the log likelihood with respect to the weighted sums at the output units of the gating networks are given by:

    \frac{\partial \ln L}{\partial s_i} = h_i - g_i    (10)

and

    \frac{\partial \ln L}{\partial s_{j|i}} = h_i (h_{j|i} - g_{j|i})    (11)
These derivatives move the prior probabilities associated with the gating networks
toward the corresponding posterior probabilities.
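For a single training pair (x, y*), Equations 7-11 can be evaluated directly once the gate activations and expert outputs are in hand. The sketch below assumes unit-variance spherical Gaussian experts, so the covariance factors drop out; names and shapes are illustrative:

```python
import numpy as np

def posteriors(g, g_cond, expert_out, y_star):
    """g: (I,) top gate; g_cond: (I, J) conditional gates; expert_out: (I, J, p)."""
    sq = ((expert_out - y_star) ** 2).sum(axis=-1)
    lik = np.exp(-0.5 * sq)                            # N(y*; y_ij, I) up to a constant
    joint = g[:, None] * g_cond * lik                  # g_i g_{j|i} f(y* | x; y_ij)
    h = joint.sum(axis=1) / joint.sum()                # h_i, Eq. 7
    h_cond = joint / joint.sum(axis=1, keepdims=True)  # h_{j|i}, Eq. 8
    return h, h_cond

def log_lik_gradients(g, g_cond, expert_out, y_star):
    h, h_cond = posteriors(g, g_cond, expert_out, y_star)
    d_y = h[:, None, None] * h_cond[:, :, None] * (y_star - expert_out)  # Eq. 9
    d_s_top = h - g                                                      # Eq. 10
    d_s_cond = h[:, None] * (h_cond - g_cond)                            # Eq. 11
    return d_y, d_s_top, d_s_cond
```

Note that the top-level gate gradient sums to zero (both h and g are probability vectors), so the updates only redistribute gate mass toward the better-fitting clusters.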
It is interesting to note that the posterior probability h_i appears in the gradient for
the experts in the ith cluster (Equation 9) and in the gradient for the gating network
in the ith cluster (Equation 11). This ties experts within a cluster to each other and
implies that experts within a cluster tend to learn similar mappings early in the
training process. They differentiate later in training as the probabilities associated
with the cluster to which they belong become larger. Thus the architecture tends
to acquire coarse structure before acquiring fine structure, This feature of the
architecture is significant because it implies a natural robustness to problems with
overfitting in deep hierarchies.
We have also found it useful in practice to obtain an additional degree of control over
the coarse-to-fine development of the algorithm. This is achieved with a heuristic
that adjusts the learning rate at a given level of the tree as a function of the time-averaged entropy of the gating network at the next higher level of the tree:
    \mu_{\cdot|i}(t+1) = \alpha \mu_{\cdot|i}(t) + \beta \left( M_i + \sum_j g_{j|i} \ln g_{j|i} \right)

where M_i is the maximum possible entropy at level i of the tree. This equation
has the effect that the networks at level i + 1 are less inclined to diversify if the
superordinate cluster at level i has yet to diversify (where diversification is quantified
by the entropy of the gating network).
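The heuristic can be sketched as follows; the decay constant alpha, gain beta, and the small epsilon guard are assumptions, since the paper specifies only the update rule itself:

```python
import numpy as np

def entropy_gated_lr(mu, parent_gate, alpha=0.9, beta=0.1, eps=1e-12):
    """Update the learning rate one level below a gating distribution.

    M_i + sum_j g ln g equals M_i minus the gate's entropy: it is zero for a
    uniform parent gate (the rate just decays by alpha) and grows toward M_i
    as the parent gate diversifies, increasing the subtree's learning rate.
    """
    M = np.log(len(parent_gate))                              # maximum possible entropy
    neg_entropy = float((parent_gate * np.log(parent_gate + eps)).sum())
    return alpha * mu + beta * (M + neg_entropy)
```

This realizes the coarse-to-fine behavior described above: lower levels hold back until the level above them has committed to a split.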
4 SIMULATIONS
We present simulation results from an unsupervised learning task and two supervised learning tasks.
In the unsupervised learning task, the problem was to extract regularities from a set
of measurements of leaf morphology. Two hundred examples of maple, poplar, oak,
and birch leaves were generated from the data shown in Table 1. The architecture
that we used had two hierarchical levels, two clusters of experts, and two experts
Table 1: Data used to generate examples of leaves from four types of trees. The columns correspond to the type of tree; the rows correspond to the features of a tree's leaf. The table's entries give the possible values for each feature for each type of leaf. See Preston (1976).

    Feature   Maple      Poplar             Oak          Birch
    Length    3,4,5,6    1,2,3              5,6,7,8,9    2,3,4,5
    Width     3,4,5      1,2                2,3,4,5      1,2,3
    Flare     0          0,1                0            1
    Lobes     5          1                  7,9          1
    Margin    Entire     Crenate, Serrate   Entire       Doubly-Serrate
    Apex      Acute      Acute              Rounded      Acute
    Base      Truncate   Rounded            Cuneate      Rounded
    Color     Light      Yellow             Light        Dark
within each cluster. Each expert network was an auto-associator that maps forty-eight input units into forty-eight output units through a bottleneck of two hidden
units. Within the experts, backpropagation was used to convert the derivatives
in Equation 9 into changes to the weights. The gating networks at both levels
were affine. We found that the hierarchical architecture consistently discovers the
decomposition of the data that preserves the natural classes of tree species (cf.
Preston, 1976). That is, within one cluster of expert networks, one expert learns
the maple training patterns and the other expert learns the oak patterns. Within the
other cluster, one expert learns the poplar patterns and the other expert learns the
birch patterns. Moreover, due to the use of the autoassociator experts, the hidden
unit representations within each expert are principal component decompositions
that are specific to a particular species of leaf.
We have also studied a supervised learning problem in which the learner must
predict the grayscale pixel values in noisy images of human faces based on values of
the pixels in surrounding 5x5 masks. There were 5000 masks in the training set. We
used a four-level binary tree, with affine experts (each expert mapped from twenty-five input units to a single output unit) and affine gating networks. We compared the performance of the hierarchical architecture to CART and to backpropagation.1 In the case of backpropagation and the hierarchical architecture, we utilized cross-validation (using a test set of 5000 masks) to stop the iterative training procedure.
As shown in Figure 2, the performance of the hierarchical architecture is comparable
to backpropagation and better than CART.
Finally we also studied a system identification problem involving learning the simulated forward dynamics of a four-joint, three-dimensional robot arm. The task
was to predict the joint accelerations from the joint positions, sines and cosines of
joint positions, joint velocities, and torques. There were 6000 data items in the
training set. We used a four-level tree with trinary splits at the top two levels,
and binary splits at lower levels. The tree had affine experts (each expert mapped
1 Fifty hidden units were used in the backpropagation network, making the number of parameters in the backpropagation network and the hierarchical network roughly
comparable.
Figure 2: The results on the image restoration task. The dependent measure is relative error on the test set (cf. Breiman et al., 1984).
from twenty input units to four output units) and affine gating networks. We once
again compared the performance of the hierarchical architecture to CART and to
backpropagation. In the case of backpropagation and the hierarchical architecture, we utilized a conjugate gradient technique, and halted the training process after 1000 iterations. In the case of CART, we ran the algorithm four separate times on the four output variables. Two of these runs produced 100 percent relative error, a third produced 75 percent relative error, and the fourth (the most proximal joint
acceleration) yielded 46 percent relative error, which is the value we report in Figure 3. As shown in the figure, the hierarchical architecture and backpropagation
achieve comparable levels of performance.
5 DISCUSSION
In this paper we have presented a neural network learning algorithm that captures
aspects of the recursive approach to function approximation exemplified by algorithms such as CART and MARS. The results obtained thus far suggest that the
algorithm is computationally viable, comparing favorably to backpropagation in
terms of generalization performance on a set of small and medium-sized tasks. The
algorithm also has a number of appealing theoretical properties when compared to
backpropagation: In the affine case, it is possible to show that (1) no backward
propagation of error terms is required to adjust parameters in multi-level trees (cf.
the activation-dependence of the multiplicative terms in Equations 9 and 11), (2) all of the parameters in the tree are maximum likelihood estimators. The latter
property suggests that the affine architecture may be a particularly suitable architecture in which to explore the effects of priors on the parameter space (cf. Nowlan
& Hinton, this volume).

Figure 3: The results on the system identification task.
Acknowledgements
This project was supported by grant IRI-9013991 awarded by the National Science
Foundation, by a grant from Siemens Corporation, by a grant from ATR Auditory
and Visual Perception Research Laboratories, by a grant from the Human Frontier
Science Program, and by an NSF Presidential Young Investigator Award to the first
author.
References
Breiman, L., Friedman, J .H., Olshen, R.A., & Stone, C.J. (1984) Classification and
Regression Trees. Belmont, CA: Wadsworth International Group.
Bridle, J. (1989) Probabilistic interpretation of feedforward classification network
outputs, with relationships to statistical pattern recognition. In F. Fogelman-Soulie
& J. Herault (Eds.), Neuro-computing: Algorithms, Architectures, and Applications.
New York: Springer-Verlag.
Friedman, J.H. (1990) Multivariate adaptive regression splines. The Annals of
Statistics, 19, 1-141.
Jacobs, R.A., Jordan, M.I., Nowlan, S.J., & Hinton, G.E. (1991) Adaptive mixtures
of local experts. Neural Computation, 3, 79-87.
Preston, R.J. (1976) North American Trees (Third Edition). Ames, IA: Iowa State
University Press.
4,576 | 5,140 | Documents as multiple overlapping windows into a
grid of counts
Alessandro Perina¹, Nebojsa Jojic¹, Manuele Bicego², Andrzej Turski¹
¹ Microsoft Corporation, Redmond, WA
² University of Verona, Italy
Abstract
In text analysis documents are often represented as disorganized bags of words;
models of such count features are typically based on mixing a small number of
topics [1, 2]. Recently, it has been observed that for many text corpora documents
evolve into one another in a smooth way, with some features dropping and new
ones being introduced. The counting grid [3] models this spatial metaphor literally: it is a grid of word distributions learned in such a way that a document?s own
distribution of features can be modeled as the sum of the histograms found in a
window into the grid. The major drawback of this method is that it is essentially
a mixture and all the content must be generated by a single contiguous area on
the grid. This may be problematic especially for lower dimensional grids. In this
paper, we overcome this issue by introducing the Componential Counting Grid
which brings the componential nature of topic models to the basic counting grid.
We evaluated our approach on document classification and multimodal retrieval
obtaining state of the art results on standard benchmarks.
1 Introduction
A collection of documents, each consisting of a disorganized bag of words is often modeled
compactly using mixture or admixture models, such as Latent Semantic Analysis (LSA) [4] and
Latent Dirichlet Allocation (LDA) [1]. The data is represented by a small number of semantically
tight topics, and a document is assumed to have a mix of words from an even smaller subset of these
topics. There are no strong constraints in how the topics are mixed [5].
Recently, an orthogonal approach emerged: it has been observed that for many text corpora
documents evolve into one another in a smooth way, with some words dropping and new ones
being introduced. The counting grid model (CG) [3] takes this spatial metaphor, of moving
through sources of words and dropping and picking up new words, literally: it is a multidimensional
grid of word distributions, learned in such a way that a document's own distribution of words can
be modeled as the sum of the distributions found in some window into the grid. By using large
windows to collate many grid distributions from a large grid, the CG model can be a very large mixture
without overtraining, as these distributions are highly correlated. The LDA model does not have this
benefit, and thus has to deal with a smaller number of topics to avoid overtraining.
In Fig.1a we show an excerpt of a grid learned from cooking recipes from around the world. Each
position in the grid is characterized by a distribution over the words in a vocabulary and for each
position we show the 3 words with higher probability whenever they exceed a threshold. The shaded
positions, are characterized by the presence, with a non-zero probability, of the word ?bake?1 . On
the grid we also show the windows W of size 4 ? 5 for 5 recipes. Nomi (1), an Afghan egg-based
bread, is close to the recipe of the usual pugliese bread (2), as indeed they share most of the ingredients and procedure and their windows largely overlap. Note how moving from (1) to (2) the word
¹ Which may or may not be in the top three
[Figure 1 appears here: a) an excerpt of the word grid, with windows drawn for recipes such as Noni Afghan Bread, Brown Bread, Caesar Salad, Pizza di Napoli, and Grecian Chicken Gyros Pizza; b) the per-component word lists for these recipes.]
Figure 1: a) A detail of an E = 30 × 30 componential counting grid π_i learned over a corpus
of recipes. In each cell we show the 0-3 most probable words greater than a threshold. The area
shaded in red has π('bake') > 0. b) For 6 recipes, we show how their components are mapped
onto this grid. The "mass" of each component (e.g., θ; see Sec. 2) is represented with the window
thickness. For each component c = j in position j, we show the words generated in its window,
c_z ∝ Σ_{j∈W_i} π_j(z).
'egg' is dropped. Moving to the right we encounter the basic pizza (3), whose dough is very similar to the bread's. Continuing to the right, words often associated with desserts, like sugar, almond, etc.,
emerge. It is not surprising that baked desserts such as cookies (4), and pastry in general, are mapped
here. Finally, further up we encounter other desserts which do not require baking, like tiramisu (5),
or chocolate crepes. This is an example of a "topical shift"; others appear in different portions of the
full grid, which is included in the additional material.
The major drawback of counting grids is that they are essentially a mixture model, assuming only
one source for all features in the bag, and the topology of the space highly constrains the document
mappings, resulting in local minima or suboptimal grids. For example, more structured recipes like
Grecian Chicken Gyros Pizza or Tex-Mex pizza would have very low likelihood, as words related to
meat, which is abundant in both, are hard to generate in the baking area where these recipes would
naturally go.
As a first contribution, we extend the counting grid model so that each document can be represented by multiple latent windows, rather than just one. In this way, we create a substantially
more flexible admixture model, the componential counting grid (CCG), which becomes a direct
generalization of LDA as it allows multiple sources (e.g., the windows) for each bag, in a mathematically identical way to LDA. But the equivalent of LDA topics are windows in a counting grid,
which allows the model to have a very large number of topics that are highly related, as a shift in the
grid only slightly refines any topic.
Starting from the same grid just described, we recomputed the mapping of each recipe, which now
can be described by multiple windows, if needed. Fig. 1b shows mappings for some recipes. Also
the words generated in each component are shown. The three pizzas place most of the mass in the
same area (dough), but the words related to the topping are borrowed from different areas. Another
example is the Caesar salad, which has a component in the salad/vegetable area, and borrows the
2
croutons from the bread area.
By observing Fig. 1b, one can also notice how the embedding produced by CCGs yields a similarity measure based on the grid usage of each sample. For example, words relative to the three
pizzas are generated from windows that overlap; therefore they share word usage and thus they are
"similar". As a second contribution, we exploited this fact to define a novel generative kernel, which
largely outperformed similar classification strategies based on LDA's topic usage [1, 2].
We evaluated componential counting grids, and in particular the kernel, on the 20-Newsgroup dataset
[6], on a novel dataset of recipes which we will make available to the community, and on the recent "Wikipedia Picture of the Day" dataset [7]. In all the experiments, CCGs set a new state of the
art. Finally, for the first time we explore visualization through examples and videos available in the
additional material.
2 Counting Grids and Componential Counting Grids

The basic Counting Grid π_i is a set of distributions over the vocabulary on the N-dimensional
discrete grid indexed by i, where each i_d ∈ [1 . . . E_d] and E describes the extent of the
counting grid in d dimensions. The index z indexes a particular word in the vocabulary, z =
[1 . . . Z], Z being the size of the vocabulary. For example, π_i('Pizza') is the probability of the
word 'Pizza' at the location i. Since π is a grid of distributions, Σ_z π_i(z) = 1 everywhere on
the grid. Each bag of words is represented by a list of words {w_t}, t = 1 . . . T, and each word
w_n takes a value between 1 and Z. In the rest of the paper, we will assume that all the samples
have N words.

Counting Grids assume that each bag follows a word distribution found somewhere in the
counting grid; in particular, using windows of dimensions W, a bag can be generated by first
averaging all counts in the window W_i starting at grid location i and extending in each
direction d by W_d grid positions to form the histogram h_i(z) = (1 / Π_d W_d) Σ_{j∈W_i} π_j(z),
and then generating a set of features in the bag (see Fig. 1a, where we used a 3 × 4 window).
In other words, the position of the window i in the grid is a latent variable given which we can
write the probability of the bag as

    p({w}|i) = Π_n h_i(w_n) = Π_n (1 / Π_d W_d) Σ_{j∈W_i} π_j(w_n),

[Figure 2 appears here.]
Figure 2: a) Plate notation representing the CCG model. b) CCG generative process for one word:
pick a window from θ, pick a position within the window, pick a word. c) Illustration of U^W and
θ̂^W relative to the particular θ shown in plate b).
Relaxing the terminology, E and W are referred to as, respectively, the counting grid and the window size. The ratio of the two volumes, κ, is called the capacity of the model in terms of an
equivalent number of topics, as this is how many non-overlapping windows can be fit onto the grid.
Finally, with W_i we indicate the particular window placed at location i.
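The window average h_i above is just a 2-D box filter over π, which cumulative sums compute in time independent of the window size. Below is a minimal sketch of this computation; the toroidal wrap-around at the grid edges and all function and variable names are our assumptions, not taken from the paper:

```python
import numpy as np

def window_histograms(pi, W):
    """Average the grid distributions pi (E1 x E2 x Z) over every
    W[0] x W[1] window to obtain h_i(z).  Windows that run off the
    grid wrap around (a toroidal grid is assumed here)."""
    E1, E2, Z = pi.shape
    # Extend the grid so forward windows can wrap, then build 2D
    # cumulative sums: each window sum is then 4 lookups.
    padded = np.concatenate([pi, pi[:W[0] - 1]], axis=0)
    padded = np.concatenate([padded, padded[:, :W[1] - 1]], axis=1)
    c = padded.cumsum(axis=0).cumsum(axis=1)
    c = np.pad(c, ((1, 0), (1, 0), (0, 0)))  # zero row/col in front
    # Inclusion-exclusion: sum of padded[i:i+W0, j:j+W1] for every (i, j).
    h = (c[W[0]:, W[1]:] - c[:-W[0], W[1]:]
         - c[W[0]:, :-W[1]] + c[:-W[0], :-W[1]])
    return h[:E1, :E2] / (W[0] * W[1])
```

Since every π_j sums to 1 over the vocabulary, each resulting h_i does too, which is the property the mixture likelihood relies on.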
Componential Counting Grids As seen in the previous section, counting grids generate words
from a distribution in a window W , placed at location i in the grid. Windows close in the grid
generate similar features because they share many cells: As we move the window on the grid,
some new features appear while others are dropped. On the other hand, componential models, like
[1], represent the standard way of modeling text corpora. In these models each feature can be
generated by a different source or topic, and documents are then seen as admixtures of topics.
Componential counting grids get the best of both worlds: being based on the counting grid geometry
they capture smooth shifts of topics, plus their componential nature, which allows documents to be
generated by several windows (akin to LDA?s topics). The number of windows need not be specified
a-priori.
Componential Counting Grids assume the following generative process (also illustrated by Fig. 2b)
for each document in a corpus:

1. Sample the multinomial over the locations θ ∼ Dir(α)
2. For each of the N words w_n:
   a) Choose a location l_n ∼ Multinomial(θ) for a window of size W
   b) Choose a location k_n within the window W_{l_n}
   c) Choose a word w_n from π_{k_n}

As visible, each word w_n is generated from a different window, placed at location l_n, but the choice
of the window follows the same prior distribution θ for all words. It is worth noticing that when
W = 1 × 1, l_n = k_n and the model becomes Latent Dirichlet Allocation.
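The three sampling steps above can be written out directly. The sketch below is only an illustration of the generative process, with wrap-around windows assumed and every name (`sample_document`, etc.) our own:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_document(pi, alpha, W, N):
    """Sample one bag of N words: draw theta ~ Dir(alpha), then for
    each word pick a window location l ~ Multinomial(theta), a cell
    k uniformly inside the W-sized window at l (wrapping at the grid
    edges), and finally a word from pi[k]."""
    E1, E2, Z = pi.shape
    theta = rng.dirichlet(alpha)             # prior over window locations
    words = []
    for _ in range(N):
        l = rng.choice(E1 * E2, p=theta)     # step a): window location
        l1, l2 = divmod(l, E2)
        k1 = (l1 + rng.integers(W[0])) % E1  # step b): cell in the window
        k2 = (l2 + rng.integers(W[1])) % E2
        words.append(int(rng.choice(Z, p=pi[k1, k2])))  # step c): word
    return words
```

With W = (1, 1) the cell k always equals the window location l, recovering LDA's generative process as noted above.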
The Bayesian network is shown in Fig. 2a and it defines the following joint probability distribution

    P = Π_{t,n} Σ_{l_n} Σ_{k_n} p(w_n | k_n, π) · p(k_n | l_n) · p(l_n | θ) · p(θ | α)        (1)

where p(w_n = z | k_n = i, π) = π_i(z) is a multinomial over the vocabulary, and p(k_n = i | l_n = k) =
U^W(i − k) is a distribution over the grid locations, with U^W uniform and equal to 1/|W| in the
upper left window of size W and 0 elsewhere (see Fig. 2c). Finally, p(l_n | θ) = θ(l) is the prior
distribution over the window locations, and p(θ | α) = Dir(θ; α) is a Dirichlet distribution with
parameters α.
Since the posterior distribution p(k, l, θ | w, π, α) is intractable for exact inference, we learned the
model using variational inference [8].
We first introduced the posterior distributions q, approximating the true posterior as q^t(k, l, θ) =
q^t(θ) · Π_n q^t(k_n) · q^t(l_n), with q(k_n) and q(l_n) multinomials over the locations, and q(θ) a
Dirac function centered at the optimal value θ̂.
Then, by bounding (variationally) the non-constant part of log P, we can write the negative free
energy F, and use the iterative variational EM algorithm to optimize it:

    F = Σ_t [ Σ_n Σ_{l_n, k_n} q^t(k_n) · q^t(l_n) · log( π_{k_n}(w_n) · U^W(k_n − l_n) · θ̂(l_n) ) + log p(θ̂ | α) + H(q^t) ]        (2)

where H(q) is the entropy of the distribution q.
Optimization of Eq. 2 reduces to the following update rules:

    q^t(k_n = i) ∝ π_i(w_n) · exp( Σ_j q^t(l_n = j) · log U^W(i − j) )        (3)

    q^t(l_n = i) ∝ θ̂^t(i) · exp( Σ_j q^t(k_n = j) · log U^W(j − i) )        (4)

    θ̂^t(i) ∝ α_i − 1 + Σ_n q^t(l_n = i)        (5)

    π_i(z) ∝ Σ_t Σ_n q^t(k_n = i) · [w_n = z]        (6)

where [w_n = z] is an indicator function, equal to 1 when w_n is equal to z. Finally, the parameters α
of the Dirichlet prior can be either kept fixed [9] or learned using standard techniques [10].
The minimization procedure described by Eqs. 3-6 can be carried out efficiently in O(N log N)
time using FFTs [11].
Some simple mathematical manipulations of Eq. 1 can yield a speed-up. In fact, from Eq. 1 one
can marginalize the variable l_n:

    P = Π_{t,n} Σ_{l_n = i, k_n = j} p(w_n | k_n = j) · p(k_n = j | l_n = i) · p(l_n = i | θ) · p(θ | α)
      = Π_{t,n} Σ_{l_n = i, k_n = j} π_j(w_n) · U^W(j − i) · θ(i) · p(θ(i) | α_i)
      = Π_{t,n} Σ_{k_n = j} π_j(w_n) · ( Σ_{l_n = i} U^W(j − i) · θ(i) ) · p(θ(i) | α_i) = Π_{t,n} Σ_{k_n = j} π_j(w_n) · θ̂^W(j) · p(θ̂ | α)        (7)

where θ̂^W is a distribution over the grid locations, equal to the convolution of U^W with θ. The
update for q(k) becomes

    q^t(k_n = i) ∝ π_i(w_n) · θ̂^W(i)        (8)
In the same way, we can marginalize the variable k_n:

    P = Π_{t,n} Σ_{l_n = i} θ(i) · ( Σ_{k_n = j} U^W(j − i) · π_j(w_n) ) · p(θ(i) | α_i) = Π_{t,n} Σ_{l_n = i} θ(i) · h_i(w_n) · p(θ(i) | α_i)        (9)

to obtain the new update for q^t(l_n):

    q^t(l_n = i) ∝ h_i(w_n) · θ̂^t(i)        (10)
where h_i is the feature distribution in a window centered at location i, which can be efficiently
computed in linear time using cumulative sums [3]. Eq. 10 highlights further relationships between
CCGs and LDA: CCGs can be thought of as an LDA model whose topics live on the space defined
by the counting grid's geometry. The new updates for the cell distribution q(k) and the window
distribution q(l) require only a single convolution and, more importantly, they don't directly depend
on each other. The model becomes more efficient and converges faster. This is critical
especially when we are analyzing big text corpora.
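The decoupled updates can be sketched as follows for a single document on a 2-D grid. Both h (Eq. 10) and the spreading of θ (Eq. 8) reduce to box sums; wrap-around windows and all names here are our assumptions rather than the paper's implementation:

```python
import numpy as np

def box_filter(x, W):
    """Sum x (E1 x E2 x ...) over every forward W[0] x W[1] window
    with toroidal wrap-around: the building block for both h and
    the spreading of theta."""
    for axis, w in zip((0, 1), W):
        n = x.shape[axis]
        idx = (np.arange(n)[:, None] + np.arange(w)) % n
        x = x.take(idx, axis=axis).sum(axis=axis + 1)
    return x

def e_step(pi, theta, words, W):
    """One decoupled E-step for a single document (Eqs. 8 and 10)."""
    area = W[0] * W[1]
    h = box_filter(pi, W) / area                   # h_i(z), Eq. 10
    # theta_hat_W: theta spread over windows (convolution with U^W).
    # The window at l covers cells l..l+W-1, so the spread at cell i
    # collects theta over the *backward* window: flip, sum, flip back.
    theta_hat_W = box_filter(theta[::-1, ::-1], W)[::-1, ::-1] / area
    qk = pi[:, :, words] * theta_hat_W[:, :, None]  # Eq. 8, unnormalized
    qk /= qk.sum(axis=(0, 1), keepdims=True)
    ql = h[:, :, words] * theta[:, :, None]         # Eq. 10, unnormalized
    ql /= ql.sum(axis=(0, 1), keepdims=True)
    return qk, ql
```

Note that q(k) depends only on θ̂^W and q(l) only on h, so neither update waits on the other, which is exactly the decoupling the text describes.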
The most similar generative model to CCG comes from the statistics community. Dunson et al. [12]
worked on sources positioned in a plane at real-valued locations, with the idea that sources within
a radius would be combined to produce topics in an LDA-like model. They used an expensive
sampling algorithm that aimed at moving the sources in the plane and determining the circular
window size. The grid placement of sources of CCG yields much more efficient algorithms and
denser packing.
2.1 A Kernel based on CCG embedding
Hybrid generative-discriminative classification paradigms have been shown to be a practical and
effective way to get the best of both worlds in approaching classification [13-15]. In the context of
topic models, a simple but effective kernel is defined as the product of the topic proportions of each
document. This kernel measures the similarity between the topic usage of each sample and it proved
to be effective on several tasks [15-17]. Although CCG's θs, the location proportions, can be thought
of as analogous to LDA's, we propose another kernel, which exploits exactly the same geometric
reasoning as the underlying generative model. We observe in fact that, by construction, each point
in the grid depends on its neighborhood, defined by W, and this information is not captured using θ,
but using θ̂^W, which is defined by spreading θ in the appropriate window (Eq. 7).
More formally, given two samples t and u, we define a kernel based on CCG embedding as

    K(t, u) = Σ_i S( θ̂^W_t(i), θ̂^W_u(i) )    where    θ̂^W(i) = Σ_j U^W(i − j) · θ(j)        (11)

where S(·, ·) is any similarity measure which defines a kernel.
In our experiments we considered the simple product, even if other measures, such as histogram
intersection, can be used. The final kernel turns out to be (· is the dot-product)

    K_LN(t, u) = Σ_i θ̂^W_t(i) · θ̂^W_u(i) = Tr( θ̂^W_t · θ̂^W_u )        (12)
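In this form, computing K_LN amounts to spreading each document's θ over its windows and taking dot products. A sketch under our own naming assumptions (wrap-around windows again assumed):

```python
import numpy as np

def spread_theta(theta, W):
    """theta_hat_W(i) = sum_j U^W(i - j) * theta(j): average theta
    over the backward window of size W (toroidal wrap assumed)."""
    for axis, w in zip((0, 1), W):
        n = theta.shape[axis]
        idx = (np.arange(n)[:, None] - np.arange(w)) % n
        theta = theta.take(idx, axis=axis).mean(axis=axis + 1)
    return theta

def ccg_kernel(thetas, W):
    """Gram matrix K_LN of Eq. 12 for a list of per-document thetas."""
    feats = np.stack([spread_theta(t, W).ravel() for t in thetas])
    return feats @ feats.T
```

Because K_LN is an explicit inner product of spread features, the resulting Gram matrix is symmetric and positive semi-definite, so it can be fed directly to any kernel classifier.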
3 Experiments
Although our model is fairly simple, it still has multiple aspects that can be evaluated. As a
generative model, it can be evaluated in left-out likelihood tests. Its latent structure, as in other generative models, can be evaluated as input to classification algorithms. Finally, as both its parameters
and the latent variables live in a compact space of dimensionality and size chosen by the user, our
learning algorithm can be evaluated as an embedding method that lends itself to data visualization
applications. As the latter two have been by far the more important sets of metrics when it comes to
real-world applications, our experiments focus on them.
In all the tests we considered square grids of size E = [40 × 40, 50 × 50, . . . , 90 × 90] and windows of size W = [2 × 2, 4 × 4, . . . , 8 × 8]. A variety of other methods are occasionally compared
to, with slightly different evaluation methods described in individual subsections, when appropriate.
[Figure 3 appears here: classification accuracy vs. capacity κ for a) the "same" 20-Newsgroup subset and b) the Mastercook recipes, comparing CCG (θ̂^W and θ), LDA (θ), and CG (q(l)); c) retrieval error-rate curves for Correspondence LDA, LDA + discriminative classifier, the Multimodal Random Field model, and the Componential Counting Grid.]
Figure 3: a-b) Results for the text classification tasks. The Mastercook recipes dataset is available
on www.alessandroperina.com. We represented the grid size E using gray levels (see the
text). c) Wikipedia Picture of the Day result: average error rate as a function of the percentage of
the ranked list considered for retrieval. Curves closer to the axes represent better performance.
Document Classification We compared componential counting grids (CCGs) with counting grids
[3] (CGs), latent Dirichlet allocation [1] (LDA) and the spherical admixture model [2] (SAM), following the validation paradigm previously used in [2, 3].
Each data sample consists of a bag of words and a label. The bags were used without labels to train
a model that captures covariation in word occurrences, with CGs mostly modeling thematic shifts,
LDA and SAM modeling topic mixing, and CCGs both aspects. Then, the label prediction task is
performed in a 10-fold cross-validation setting, using the linear kernel presented in Eq. 12, which
for LDA reduces to using a linear kernel on the topic proportions. To show the effectiveness of the
spreading in the kernel definition, we also report results employing CCG's θs instead of θ̂^W. For
CGs we used the original strategy [3], Nearest Neighbor in the embedding space, while for SAM
we reported the results from the original paper. To the best of our knowledge, the strategies just described, based on [3] and [2], are two of the most effective methods to classify text documents. SAM
is characterized by the same hierarchical nature as LDA, but it represents bags using directional distributions on a spherical manifold, modeling feature frequency, presence and absence. The model
captures fine-grained semantic structure and performs better when small semantic distinctions are
important. CCGs map documents on a probabilistic simplex (e.g., θ) and for W > [1 × 1] can be
thought of as an LDA model whose topics, h_i, are much finer as they are computed from overlapping windows
(see also Eq. 10); a comparison is therefore natural.
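For the label-prediction step, a minimal kernel-based classifier sketch; we use 1-nearest-neighbor on a precomputed Gram matrix for illustration, which is an assumption on our part — the discriminative classifier actually used may differ:

```python
import numpy as np

def knn_predict(K, train_idx, train_labels, test_idx):
    """Predict each test document's label as that of the training
    document with the highest kernel similarity.  K is a full Gram
    matrix over all documents (e.g., K_LN of Eq. 12)."""
    sims = K[np.ix_(test_idx, train_idx)]   # test-vs-train similarities
    nearest = sims.argmax(axis=1)           # most similar training doc
    return [train_labels[j] for j in nearest]
```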
As a first dataset we considered the CMU newsgroup dataset.² Following previous work [2, 3, 6]
we reduced the dataset into subsets with varying similarities among the news groups: news-20-different, with posts from rec.sport.baseball, sci.space and alt.atheism,
news-20-similar, with posts from rec.talk.baseball, talk.politics.gun and
talk.politics.misc, and news-20-same, with posts from comp.os.ms-windows,
comp.windows.x and comp.graphics. For the news-20-same subset (the hardest), in Fig. 3a
we show the accuracies of CCGs and LDA across the complexities. On the x-axis we have the different model sizes, in terms of capacity κ, whereas on the y-axis we report the accuracy. The same
κ can be obtained with different choices of E and W; therefore we represented the grid size E using
gray levels, the lighter the marker the bigger the grid. The capacity κ is roughly equivalent to the
number of LDA topics, as it represents the number of independent windows that can be fit in the grid,
and we compared with LDA using this parallelism [18].
Componential counting grids outperform Latent Dirichlet Allocation across the whole spectrum, and the
accuracy regularly rises with κ, independently of the grid size.³ The priors helped to prevent
overtraining for big capacities κ. When using CCG's θs to define the kernel, as expected the accu-
² http://www.cs.cmu.edu/afs/cs.cmu.edu/project/theo-20/www/data/news20.html
³ This happens for "reasonable" window sizes. For small windows (e.g., 2 × 2), the model doesn't have
enough overlapping power and performs similarly to a mixture model.
Table 1: Document classification. The improvements on Similar and Same are statistically significant. The accuracies for SAM are taken from [2] and they represent the best results obtained
across the choice of the number of topics. BOW stands for classification with a linear SVM on the
count matrices.

Dataset     CCG      2D CG [3]   3D CG [3]   LDA [1]   BOW      SAM [2]
Different   96.49%   96.51%      96.34%      91.8%     91.43%   94.1%
Similar     92.81%   89.72%      90.11%      85.7%     81.52%   88.1%
Same        83.63%   81.28%      81.03%      75.6%     71.81%   78.1%
[Figure 4 appears here: a) another excerpt of the word grid; b) the interface view of that excerpt; c) a zoomed detail.]
Figure 4: A simple interface built upon the word embedding π.
racy dropped (blue dots in Fig. 3).
Results for all the datasets and for a variety of methods are reported in Tab. 1, where we employed
10% of the training data as a validation set to pick a complexity (a different complexity has been
chosen for each fold). As visible, CCG outperforms the other models, with a larger margin on the more
challenging same and similar datasets, where we would indeed expect that quilting the topics to
capture fine-grained similarities and differences would be most helpful.
As a second dataset, we downloaded 10K Mastercook recipes, which are freely available on the web
in plain-text format. Then we extracted the words of each recipe from its ingredients and cooking
instructions, and we used the origin of the recipe to divide the dataset into 15 classes.⁴ The resulting
dataset has a vocabulary size of 12538 unique words and a total of ~1M tokens.
To classify the recipes we used 10-fold cross-validation with 5 repetitions, picking 80 random recipes
per class for each repetition. Classification results are illustrated in Fig. 3b. As in the previous test,
CCG classification accuracy grows regularly with κ, independently of the grid size E. Componential models (e.g., LDA and CCGs) performed significantly better since, to correctly classify the
origin of a recipe, spice palettes, cooking styles and procedures must be identified. For example, while
most Asian cuisines use similar ingredients and cooking procedures, they definitely have different
spice palettes. Counting Grids, being mixtures, cannot capture that, as they map a recipe to a single
location which heavily depends on the ingredients used. Among componential models, CCGs work
best.
Multimodal Retrieval We considered the Wikipedia Picture of the Day dataset [7], where the task is multi-modal image retrieval: given a text query, we aim to find the images that are most relevant to it. To accomplish this, we first learned a model using the visual words of the training data {wt,V }, obtaining ?t and ?iV . Then, keeping ?t fixed and iterating the M-step, we embedded the textual words {wt,T }, obtaining ?iW . For each test sample we inferred the values of ?t,V and ?t,W from ?iV and ?iW respectively, and used Eq. 12 to compute the retrieval scores. As in [7] we split the data in 10
4 We considered the following cuisines: Afghan, Cajun, Chinese, English, French, German, Greek, Indian, Indonesian, Italian, Japanese, Mexican, Middle Eastern, Spanish and Thai.
folds and we used a validation set to pick a complexity. Results are illustrated in Fig. 3c. Although we used this simple procedure, without directly training a multimodal model, CCGs outperform LDA, CorrLDA [19] and the multimodal document random field model presented in [7], setting a new state of the art. The area under the curve (AUC) for our method is 21.92±0.6, while for [7] it is 23.14±1.49 (smaller values indicate better performance). Counting Grids and LDA both fail, with AUCs around 40.
Visualization Important benefits of CCGs are that 1) they lay down the sources ?i on a 2-D grid, which is ready for visualization, and 2) they enforce that close locations generate similar topics, which leads to smooth thematic shifts that provide connections among distant topics on the grid. This is very useful for sensemaking [20]. To demonstrate this we developed a simple interface. A detail is shown in Fig. 4b, relative to the extract of the counting grid shown in Fig. 4a. The interface is pannable and zoomable and, at any moment, only the top N = 500 words are shown on screen. To define the importance of each word in each position we weighted ?i (z) with the inverse document frequency. Fig. 4b shows the lowest level of zoom: only words from a few cells are visible, and the font size reflects their weight. A user can zoom in to see the content of particular cells/areas, until reaching the highest level of zoom, where most of the words generated in a position are visible (Fig. 4c).
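The per-position word weighting just described (source probabilities rescaled by inverse document frequency, with only the top N = 500 pairs displayed) can be sketched as follows. The names `pi`, `df`, and `n_docs` are our own placeholders for the grid sources, per-word document frequencies, and corpus size, and the idf formula is a standard choice rather than necessarily the one used by the authors.

```python
import numpy as np

def word_weights(pi, df, n_docs):
    """Weight each word's probability at each grid position by its
    inverse document frequency, de-emphasizing ubiquitous words.

    pi     : (positions, vocab) array of grid sources, rows on the simplex
    df     : (vocab,) document frequency of each word
    n_docs : total number of documents in the corpus
    """
    idf = np.log(n_docs / np.maximum(df, 1.0))  # standard idf
    return pi * idf[None, :]

def top_words(weights, n=500):
    """Return (position, word) index pairs of the n largest weights,
    mirroring the interface that only shows the top-N words on screen."""
    flat = np.argsort(weights, axis=None)[::-1][:n]
    return [np.unravel_index(k, weights.shape) for k in flat]
```

A word that appears in every document gets idf 0 and vanishes from the display regardless of its grid probability.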
We also propose a simple search strategy: once a keyword z̄ is selected, each word z in each position j is weighted with a word-dependent and a position-dependent weight. The first is equal to 1 if z co-occurs with z̄ in some document, and 0 otherwise, while the latter is the sum of ?i (z̄) over all positions i such that there exists a window Wk that contains both i and j. Other strategies are of course possible. As a result, this strategy highlights some areas and words related to z̄ on the grid, and in each area words related (similar in topic) to z̄ appear. Interestingly, if a search term is used in different contexts, a few islands may appear on the grid. For example, Fig. 5 shows the result of the search for z̄ = "fry": the general frying region is well separated from "deep frying" and "stir frying", which appear at the extremes of the same island. Presenting search results as islands on a 2-dimensional grid apparently improves on the standard strategy of a linear list of hits, in which recipes relative to the three frying styles would have been mixed, while tempura has little to do with pan-fried noodles.
Figure 5: Search result for the word "fry".
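As a rough sketch, the two-factor search weighting could be implemented as below. The names `pi` and `cooc` and the 1-D window geometry are illustrative assumptions (the actual grids are 2-D and the co-occurrence statistics come from the corpus); this is not the authors' implementation.

```python
import numpy as np

def search_scores(pi, cooc, query, window):
    """Score each (position, word) pair for a query word, combining a
    word factor (co-occurrence indicator) with a position factor
    (mass the query places near that position).

    pi     : (P, V) grid sources pi_i(z) on a 1-D grid of P positions
    cooc   : (V, V) boolean word co-occurrence within some document
    query  : index of the query word z_bar
    window : window length; positions i, j interact when a length-window
             window can contain both of them
    """
    P, V = pi.shape
    # position factor: sum of pi_i(query) over positions i near j
    pos_w = np.array([pi[max(0, j - window + 1): j + window, query].sum()
                      for j in range(P)])
    # word factor: 1 if the word co-occurs with the query, else 0
    word_w = cooc[query].astype(float)
    return pos_w[:, None] * word_w[None, :]
```

Words that never co-occur with the query are zeroed out everywhere, so only query-related islands remain highlighted.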
4 Conclusion
In this paper we presented the componential counting grid model, which bridges the topic model and counting grid worlds, together with a similarity measure based on it. We demonstrated that the hidden mapping variables associated with each document can naturally be used in classification tasks, leading to state-of-the-art performance on a couple of datasets.
By proposing a simple interface, we have also shown the great potential of CCGs for visualizing a corpus. Although the same holds for CGs, this is the first paper that investigates this aspect. Moreover, CCGs subsume CGs, as the components are used only when needed. For every restart, the grids qualitatively appeared very similar, and some of the more salient similarity relationships were captured by all the runs. The word embedding produced by CCG also has advantages w.r.t. other Euclidean embedding methods such as ISOMAP [21], CODE [22] or LLE [23], which are often used for data visualization. In fact, CCG's computational complexity is linear in the dataset size, as opposed to the quadratic complexity of [21-23], which are all based on pairwise distances. Moreover, [21, 23] only embed documents or words, while CG/CCGs provide both embeddings. Finally, as opposed to previous co-occurrence embedding methods that consider all pairs of words, our representation naturally captures the same word appearing in multiple locations where it has a different meaning based on context. The word "memory" in the Science magazine corpus is a striking example (memory in neuroscience, memory in electronic devices, immunologic memory).
References
[1] Blei, D., Ng, A., Jordan, M.: Latent Dirichlet allocation. Journal of Machine Learning Research 3 (2003) 993-1022
[2] Reisinger, J., Waters, A., Silverthorn, B., Mooney, R.J.: Spherical topic models. In: ICML '10: Proceedings of the 27th International Conference on Machine Learning. (2010)
[3] Jojic, N., Perina, A.: Multidimensional counting grids: Inferring word order from disordered bags of words. In: Proceedings of the Conference on Uncertainty in Artificial Intelligence (UAI). (2011) 547-556
[4] Hofmann, T.: Unsupervised learning by probabilistic latent semantic analysis. Machine Learning Journal 42 (2001) 177-196
[5] Blei, D.M., Lafferty, J.D.: Correlated topic models. In: NIPS. (2005)
[6] Banerjee, A., Basu, S.: Topic models over text streams: a study of batch and online unsupervised learning. In: Proc. 7th SIAM Intl. Conf. on Data Mining. (2007)
[7] Jia, Y., Salzmann, M., Darrell, T.: Learning cross-modality similarity for multinomial data. In: Proceedings of the 2011 International Conference on Computer Vision. ICCV '11, Washington, DC, USA, IEEE Computer Society (2011) 2407-2414
[8] Neal, R.M., Hinton, G.E.: A view of the EM algorithm that justifies incremental, sparse, and other variants. Learning in Graphical Models (1999) 355-368
[9] Asuncion, A., Welling, M., Smyth, P., Teh, Y.W.: On smoothing and inference for topic models. In: Proceedings of Uncertainty in Artificial Intelligence. (2009)
[10] Minka, T.P.: Estimating a Dirichlet distribution. Technical report, Microsoft Research (2012)
[11] Frey, B.J., Jojic, N.: Transformation-invariant clustering using the EM algorithm. IEEE Trans. Pattern Anal. Mach. Intell. 25 (2003) 1-17
[12] Dunson, D.B., Park, J.H.: Kernel stick-breaking processes. Biometrika 95 (2008) 307-323
[13] Perina, A., Cristani, M., Castellani, U., Murino, V., Jojic, N.: Free energy score spaces: Using generative information in discriminative classifiers. IEEE Trans. Pattern Anal. Mach. Intell. 34 (2012) 1249-1262
[14] Raina, R., Shen, Y., Ng, A.Y., McCallum, A.: Classification with hybrid generative/discriminative models. In: Advances in Neural Information Processing Systems 16, MIT Press (2003)
[15] Jebara, T., Kondor, R., Howard, A.: Probability product kernels. J. Mach. Learn. Res. 5 (2004) 819-844
[16] Bosch, A., Zisserman, A., Muñoz, X.: Scene classification using a hybrid generative/discriminative approach. IEEE Trans. Pattern Anal. Mach. Intell. 30 (2008) 712-727
[17] Bicego, M., Lovato, P., Perina, A., Fasoli, M., Delledonne, M., Pezzotti, M., Polverari, A., Murino, V.: Investigating topic models' capabilities in expression microarray data classification. IEEE/ACM Trans. Comput. Biology Bioinform. 9 (2012) 1831-1836
[18] Perina, A., Jojic, N.: Image analysis by counting on a grid. In: Proceedings of IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR). (2011) 1985-1992
[19] Blei, D.M., Jordan, M.I.: Modeling annotated data. In: Proceedings of the 26th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval. SIGIR '03 (2003) 127-134
[20] Thomas, J., Cook, K.: Illuminating the Path: The Research and Development Agenda for Visual Analytics. IEEE Press (2005)
[21] Tenenbaum, J.B., de Silva, V., Langford, J.C.: A global geometric framework for nonlinear dimensionality reduction. Science 290 (2000) 2319-2323
[22] Globerson, A., Chechik, G., Pereira, F., Tishby, N.: Euclidean embedding of co-occurrence data. Journal of Machine Learning Research 8 (2007) 2265-2295
[23] Roweis, S.T., Saul, L.K.: Nonlinear dimensionality reduction by locally linear embedding. Science 290 (2000) 2323-2326
On Algorithms for Sparse Multi-factor NMF
Siwei Lyu
Xin Wang
Computer Science Department
University at Albany, SUNY
Albany, NY 12222
{slyu,xwang26}@albany.edu
Abstract
Nonnegative matrix factorization (NMF) is a popular data analysis method, the
objective of which is to approximate a matrix with all nonnegative components
into the product of two nonnegative matrices. In this work, we describe a new
simple and efficient algorithm for multi-factor nonnegative matrix factorization
(mfNMF) problem that generalizes the original NMF problem to more than two
factors. Furthermore, we extend the mfNMF algorithm to incorporate a regularizer
based on the Dirichlet distribution to encourage the sparsity of the components of
the obtained factors. Our sparse mfNMF algorithm affords a closed form and an
intuitive interpretation, and is more efficient in comparison with previous works
that use fixed-point iterations. We demonstrate the effectiveness and efficiency of
our algorithms on both synthetic and real data sets.
1 Introduction
The goal of nonnegative matrix factorization (NMF) is to approximate a nonnegative matrix V with the product of two nonnegative matrices, as V ≈ W_1 W_2. Since the seminal work of [1] that introduced simple and efficient multiplicative update algorithms for solving the NMF problem, it has become a popular data analysis tool for applications where nonnegativity constraints are natural [2]. In this work, we address the multi-factor NMF (mfNMF) problem, where a nonnegative matrix V is approximated by the product of K ≥ 2 nonnegative matrices, V ≈ \prod_{k=1}^{K} W_k. It has been argued
that using more factors in NMF can improve the algorithm's stability, especially for ill-conditioned
and badly scaled data [2]. Introducing multiple factors into the NMF formulation can also find
practical applications, for instance, extracting hierarchies of topics representing different levels of
abstract concepts in document analysis or image representations [2].
Many practical applications also require the obtained nonnegative factors to be sparse, i.e., having
many zero components. Most early works focus on the matrix ℓ1 norm [6], but it has been pointed out that the ℓ1 norm becomes completely toothless for factors that have constant ℓ1 norms, as in the case
of stochastic matrices [7, 8]. Regularizers based on the entropic prior [7] or the Dirichlet prior [8]
have been shown to be more effective but do not afford closed-form solutions.
The main contribution of this work is therefore two-fold. First, we describe a new algorithm for
the mfNMF problem. Our solution to mfNMF seeks optimal factors that minimize the total difference between V and \prod_{k=1}^{K} W_k, and it is based on the solution of a special matrix optimization
problem that we term the stochastic matrix sandwich (SMS) problem. We show that the SMS
problem affords a simple and efficient algorithm consisting of only multiplicative update and normalization (Lemma 2). The second contribution of this work is a new algorithm to incorporate the
Dirichlet sparsity regularizer in mfNMF. Our formulation of the sparse mfNMF problem leads to
a new closed-form solution, and the resulting algorithm naturally embeds sparsity control into the
mfNMF algorithm without iteration (Lemma 3). We further show that the update steps of our sparse
mfNMF algorithm afford a simple and intuitive interpretation. We demonstrate the effectiveness and
efficiency of our sparse mfNMF algorithm on both synthetic and real data.
2 Related Works
Most existing works generalizing the original NMF problem to more than two factors are based on the multi-layer NMF formulation, in which we solve a sequence of two-factor NMF problems, as V ≈ W_1 H_1, H_1 ≈ W_2 H_2, ..., and H_{K-2} ≈ W_{K-1} W_K [3-5]. Though simple and efficient, such greedy algorithms are not associated with a consistent objective function involving all factors simultaneously. Because of this, the obtained factors may be suboptimal as measured by the difference between the target matrix V and their product. On the other hand, there exist far fewer works directly solving the mfNMF problem; one example is a multiplicative algorithm based on the general Bregman divergence [9]. In this work, we focus on the generalized Kullback-Leibler divergence (a special case of the Bregman divergence), and use its decomposing property to simplify the mfNMF objective and remove the undesirable multitude of equivalent solutions in the general formulation. These changes lead to a more efficient algorithm that usually converges to a better local minimum of the objective function in comparison with the work in [9] (see Section 6).
As a common means of encouraging sparsity in machine learning, the ℓ1 norm has been incorporated into the NMF objective function [6] as a sparsity regularizer. However, the ℓ1 norm may be less effective for nonnegative matrices, for which it reduces to the sum of all elements and can be decreased trivially by scaling down all factors without affecting the number of zero components. Furthermore, the ℓ1 norm becomes completely toothless in cases when the nonnegative factors are constrained to have constant column or row sums (as in the case of stochastic matrices).
An alternative solution is to use the Shannon entropy of each column of the matrix factor as a sparsity regularizer [7], since a vector with unit ℓ1 norm and low entropy implies that only a few of its components are significant. However, the entropic-prior-based regularizer does not afford a closed-form solution, and an iterative fixed-point algorithm is described based on the Lambert W function in [7]. Another regularizer is based on the symmetric Dirichlet distribution with concentration parameter α < 1, as such a model allocates more probability weight to sparse vectors on a probability simplex [8, 10, 11]. However, using the Dirichlet regularizer has a practical problem, as it can become unbounded when there is a zero element in the factor. Simply ignoring such cases as in [11] can lead to an unstable algorithm (see Section 5.2). Two approaches have been described to solve this problem: one is based on the constrained concave-convex procedure (CCCP) [10], while the other uses the pseudo-Dirichlet regularizer [8], which is a bounded perturbation of the original Dirichlet model. A drawback common to these methods is that they require extra iterations for the fixed-point algorithm. Furthermore, the effects of the updating steps on the sparsity of the resulting factors are obscured by the iterative steps. In contrast, our sparse mfNMF algorithm uses the original Dirichlet model and does not require extra fixed-point iterations. More interestingly, the update steps of our sparse mfNMF algorithm afford a simple and intuitive interpretation.
3 Basic Definitions
We denote by 1_m the all-one column vector of dimension m and by I_m the m-dimensional identity matrix, and use V ≥ 0 or v ≥ 0 to indicate that all elements of a matrix V or a vector v are nonnegative. Throughout the paper, we assume a matrix does not have all-zero columns or rows. An m × n nonnegative matrix V is stochastic if V^T 1_m = 1_n, i.e., each column sums to one. Also, for stochastic matrices W_1 and W_2, their product W_1 W_2 is also stochastic. Furthermore, an m × n nonnegative matrix V can be uniquely represented as V = SD, with an n × n nonnegative diagonal matrix D = diag(V^T 1_m) and an m × n stochastic matrix S = V D^{-1}.
For nonnegative matrices V and W, their generalized Kullback-Leibler (KL) divergence [1] is defined as

d(V, W) = \sum_{i=1}^{m} \sum_{j=1}^{n} \left( V_{ij} \log \frac{V_{ij}}{W_{ij}} - V_{ij} + W_{ij} \right).   (1)
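A direct implementation of Eq. (1), using the paper's 0 log 0 = 0 convention for zero entries of V, might look like this sketch:

```python
import numpy as np

def gen_kl(V, W):
    """Generalized KL divergence d(V, W) of Eq. (1), with the
    convention 0 * log 0 = 0 for zero entries of V."""
    V = np.asarray(V, dtype=float)
    W = np.asarray(W, dtype=float)
    log_term = np.zeros_like(V)
    mask = V > 0
    log_term[mask] = V[mask] * np.log(V[mask] / W[mask])
    return float(np.sum(log_term - V + W))
```

As the text states, gen_kl(V, W) is nonnegative and vanishes exactly when V = W.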
We have d(V, W) ≥ 0, and the equality holds if and only if V = W.¹ We emphasize the following decomposition of the generalized KL divergence: representing V and W as products of stochastic matrices and diagonal matrices, V = S^{(V)} D^{(V)} and W = S^{(W)} D^{(W)}, we can decompose d(V, W) into two terms involving only stochastic matrices or diagonal matrices, as

d(V, W) = d\left(V, S^{(W)} D^{(V)}\right) + d\left(D^{(V)}, D^{(W)}\right) = \sum_{i=1}^{m} \sum_{j=1}^{n} V_{ij} \log \frac{S^{(V)}_{ij}}{S^{(W)}_{ij}} + d\left(D^{(V)}, D^{(W)}\right).   (2)

Due to space limits, the proof of Eq. (2) is deferred to the supplementary materials.
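The stochastic/diagonal split used throughout this section is straightforward to compute; a minimal sketch (the function name is ours):

```python
import numpy as np

def stochastic_split(V):
    """Uniquely factor a nonnegative matrix V (with no all-zero columns)
    as V = S @ D, where S is column-stochastic and D = diag(V^T 1_m)."""
    col_sums = V.sum(axis=0)
    S = V / col_sums[None, :]
    D = np.diag(col_sums)
    return S, D
```

The reconstruction S @ D recovers V exactly, and each column of S sums to one, matching the definition in Section 3.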
4 Multi-Factor NMF
In this work, we study the multi-factor NMF problem based on the generalized KL divergence. Specifically, given an m × n nonnegative matrix V, we seek K ≥ 2 matrices W_k of dimensions l_{k-1} × l_k for k = 1, ..., K, with l_0 = m and l_K = n, that minimize d(V, \prod_{k=1}^{K} W_k), s.t. W_k ≥ 0. This simple formulation has a drawback, as it is invariant to relative scalings between the factors: for any λ > 0, we have d(V, W_1 ⋯ W_i ⋯ W_j ⋯ W_K) = d(V, W_1 ⋯ (λ W_i) ⋯ (λ^{-1} W_j) ⋯ W_K). In other words, there exist an infinite number of equivalent solutions, which gives rise to the intrinsically ill-posed nature of the mfNMF problem.
To alleviate this problem, we constrain the first K - 1 factors, W_1, ..., W_{K-1}, to be stochastic matrices, and denote them by X_1, ..., X_{K-1}. Using the property of nonnegative matrices, we represent the last nonnegative factor W_K as the product of a stochastic matrix X_K and a nonnegative diagonal matrix D^{(W)}. As such, we represent the nonnegative matrix \prod_{k=1}^{K} W_k as the product of a stochastic matrix S^{(W)} = \prod_{k=1}^{K} X_k and a diagonal matrix D^{(W)}. Similarly, we also decompose the target nonnegative matrix V into the product of a stochastic matrix S^{(V)} and a nonnegative diagonal matrix D^{(V)}. It is not difficult to see that any solution of this more constrained formulation leads to a solution to the original problem, and vice versa.
Applying the decomposition in Eq. (2), the mfNMF optimization problem can be re-expressed as

\min_{X_1, ..., X_K, D^{(W)}}  d\left(V, \left(\prod_{k=1}^{K} X_k\right) D^{(V)}\right) + d\left(D^{(V)}, D^{(W)}\right)
s.t.  X_k^T 1_{l_{k-1}} = 1_{l_k},  X_k ≥ 0,  k = 1, ..., K,  D^{(W)} ≥ 0.

As such, the original problem is solved via two sub-problems, each over a different set of variables. The first sub-problem solves for the diagonal matrix D^{(W)}, as

\min_{D^{(W)}}  d\left(D^{(V)}, D^{(W)}\right),  s.t.  D^{(W)} ≥ 0.

Per the property of the generalized KL divergence, its solution is trivially given by D^{(W)} = D^{(V)}.
The second sub-problem optimizes the K stochastic factors X_1, ..., X_K, which, after dropping irrelevant constants and rearranging terms, becomes

\max_{X_1, ..., X_K}  \sum_{i=1}^{m} \sum_{j=1}^{n} V_{ij} \log \left( \prod_{k=1}^{K} X_k \right)_{ij},  s.t.  X_k^T 1_{l_{k-1}} = 1_{l_k},  X_k ≥ 0,  k = 1, ..., K.   (3)

Note that Eq. (3) is essentially the maximization of the similarity between the stochastic part of V, S^{(V)}, and the stochastic matrix formed as the product of the K stochastic matrices X_1, ..., X_K, weighted by D^{(V)}.
4.1 Stochastic Matrix Sandwich Problem
Before describing the algorithm solving (3), we first derive the solution to another related problem that we term the stochastic matrix sandwich (SMS) problem, from which we can construct a solution to (3). Specifically, in an SMS problem one maximizes the following objective function with regard to an m_0 × n_0 stochastic matrix X, as
¹ In computing the generalized KL divergence, we define 0 log 0 = 0 and 0/0 = 0.
\max_X  \sum_{i=1}^{m} \sum_{j=1}^{n} C_{ij} \log (AXB)_{ij},  s.t.  X^T 1_{m_0} = 1_{n_0},  X ≥ 0,   (4)

where A and B are two known stochastic matrices of dimensions m × m_0 and n_0 × n, respectively, and C is an m × n nonnegative matrix.
We note that (4) is a convex optimization problem [12], which can be solved with general numerical procedures such as interior-point methods. However, we present here a new simple solution based on multiplicative updates and normalization, which completely obviates control parameters such as step sizes. We first show that there exists an "auxiliary function" for log(AXB)_{ij}.
Lemma 1. Let us define

F_{ij}(X; \tilde{X}) = \sum_{k=1}^{m_0} \sum_{l=1}^{n_0} \frac{A_{ik} \tilde{X}_{kl} B_{lj}}{(A \tilde{X} B)_{ij}} \log \left( \frac{X_{kl}}{\tilde{X}_{kl}} (A \tilde{X} B)_{ij} \right),

then we have F_{ij}(X; \tilde{X}) ≤ \log (AXB)_{ij} and F_{ij}(\tilde{X}; \tilde{X}) = \log (A \tilde{X} B)_{ij}.
Proof of Lemma 1 can be found in the supplementary materials.
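Lemma 1 can also be checked numerically. The sketch below implements F_ij(X; X̃) directly from the definition (it is a Jensen-type lower bound) and is meant for verification rather than efficiency:

```python
import numpy as np

def aux_F(X, Xt, A, B):
    """Auxiliary function F_ij(X; Xt) of Lemma 1: a lower bound on
    log(AXB)_ij that is tight at X = Xt. Returns an (m, n) matrix."""
    AXtB = A @ Xt @ B
    m, n = AXtB.shape
    F = np.zeros((m, n))
    for i in range(m):
        for j in range(n):
            # posterior weights w_kl = A_ik * Xt_kl * B_lj / (A Xt B)_ij
            w = np.outer(A[i, :], B[:, j]) * Xt / AXtB[i, j]
            F[i, j] = np.sum(w * np.log((X / Xt) * AXtB[i, j]))
    return F
```

On random strictly positive inputs, aux_F(Xt, Xt, A, B) matches log(A Xt B) elementwise, and aux_F(X, Xt, A, B) never exceeds log(A X B), exactly as the lemma states.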
Based on Lemma 1, we can develop an EM-style iterative algorithm to optimize (4), in which, starting with an initial value X = X_0, we iteratively solve the following optimization problem:

X_{t+1} ← \arg\max_{X : X^T 1_{m_0} = 1_{n_0}, X ≥ 0} \sum_{i=1}^{m} \sum_{j=1}^{n} C_{ij} F_{ij}(X; X_t),  and  t ← t + 1.   (5)

Using the relations given in Lemma 1, we have

\sum_{i,j} C_{ij} \log (A X_t B)_{ij} = \sum_{i,j} C_{ij} F_{ij}(X_t; X_t) ≤ \sum_{i,j} C_{ij} F_{ij}(X_{t+1}; X_t) ≤ \sum_{i,j} C_{ij} \log (A X_{t+1} B)_{ij},
which shows that each iteration of (5) leads to feasible X and does not decrease the objective function of (4). Rearranging terms and expressing results using matrix operations, we can simplify the
objective function of (5) as
$$\sum_{i,j} C_{ij}\,F_{ij}(X; \tilde X) = \sum_{k=1}^{m'}\sum_{l=1}^{n'} M_{kl}\log X_{kl} + \text{const}, \qquad (6)$$
where
$$M = \tilde X \circ \left[A^T \left(C \oslash (A\tilde XB)\right) B^T\right], \qquad (7)$$
where $\circ$ and $\oslash$ correspond to element-wise matrix multiplication and division, respectively. A
detailed derivation of (6) and (7) is given in the supplemental materials. The following result shows
that the resulting optimization has a closed-form solution.
Lemma 2. The global optimal solution to the following optimization problem,
$$\max_X \sum_{k=1}^{m'}\sum_{l=1}^{n'} M_{kl}\log X_{kl}, \quad \text{s.t.}\ X^T\mathbf{1}_{m'} = \mathbf{1}_{n'},\ X \ge 0, \qquad (8)$$
is given by
$$X_{kl} = \frac{M_{kl}}{\sum_{k'} M_{k'l}}.$$
Proof of Lemma 2 can be found in the supplementary materials.
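Combining Eq. (7) with the column normalization of Lemma 2 gives the complete multiplicative SMS update. The following is a minimal NumPy sketch of this update; the function name, signature, and iteration count are our own illustrative choices:

```python
import numpy as np

def sms_update(X, A, B, C, n_iter=100):
    """Multiplicative updates for the SMS problem (4).

    A (m x m') and B (n' x n) are column-stochastic, C (m x n) is
    nonnegative, and X (m' x n') stays column-stochastic throughout.
    Each iteration computes M via Eq. (7) and renormalizes its columns
    as in Lemma 2; the objective of (4) never decreases.
    """
    for _ in range(n_iter):
        AXB = A @ X @ B
        M = X * (A.T @ (C / AXB) @ B.T)        # Eq. (7)
        X = M / M.sum(axis=0, keepdims=True)   # Lemma 2: column normalization
    return X
```

Because (4) is a convex problem, these parameter-free updates converge to the global optimum from any strictly positive initialization, as observed empirically in Fig. 2.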
Next, we can construct a coordinate-wise optimization solution to the mfNMF problem (3) that
iteratively optimizes each Xk while fixing the others, based on the solution to the SMS problem
given in Lemma 2. In particular, it is easy to see that for C = V:
• solving for X_1 with fixed X_2, ..., X_K is an SMS problem with A = I_m, X = X_1, and B = \prod_{k=2}^{K} X_k;
• solving for X_k, k = 2, ..., K-1, with fixed X_1, ..., X_{k-1}, X_{k+1}, ..., X_K is an SMS problem with A = \prod_{k'=1}^{k-1} X_{k'}, X = X_k, and B = \prod_{k'=k+1}^{K} X_{k'};
• and solving for X_K with fixed X_1, ..., X_{K-1} is an SMS problem with A = \prod_{k=1}^{K-1} X_k, X = X_K, and B = I_n.
In practice, we do not need to run each SMS optimization to converge, and the algorithm can be
implemented with a few fixed steps updating each factor in order.
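The coordinate scheme above can be sketched as follows: each factor is refreshed with a few SMS steps while the others stay fixed. The function name, iteration counts, and random initialization are illustrative assumptions, not prescriptions from the paper:

```python
import numpy as np
from functools import reduce

def mfnmf(V, dims, n_outer=50, n_inner=5, seed=0):
    """Coordinate-update mfNMF: V ~ X_1 X_2 ... X_K, each X_k column-stochastic.

    dims = [l_0, l_1, ..., l_K], where X_k has shape (l_{k-1}, l_k).
    Solving for X_k with the other factors fixed is an SMS problem with
    A = X_1 ... X_{k-1}, B = X_{k+1} ... X_K, and C = V.
    """
    rng = np.random.default_rng(seed)
    K = len(dims) - 1
    Xs = [rng.random((dims[k], dims[k + 1])) for k in range(K)]
    Xs = [X / X.sum(axis=0, keepdims=True) for X in Xs]
    for _ in range(n_outer):
        for k in range(K):
            A = reduce(np.matmul, Xs[:k], np.eye(dims[0]))
            B = reduce(np.matmul, Xs[k + 1:], np.eye(dims[k + 1]))
            for _ in range(n_inner):               # a few SMS steps per factor
                M = Xs[k] * (A.T @ (V / (A @ Xs[k] @ B)) @ B.T)
                Xs[k] = M / M.sum(axis=0, keepdims=True)
    return Xs
```

Since every inner SMS step is a monotone update of the shared objective, the overall objective is non-decreasing across sweeps, although the mfNMF problem itself is non-convex and only a local optimum is guaranteed.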
It should be pointed out that even though SMS is a convex optimization problem guaranteed with
a global optimal solution, this is not the case for the general mfNMF problem (3), the objective
function of which is non-convex (it is an example of the multi-convex function [13]).
5 Sparse Multi-Factor NMF
Next, we describe incorporating sparsity regularization in the mfNMF formulation. We assume that
the sparsity requirement is applied to each individual factor in the mfNMF objective function (3), as
$$\max_{X_1,\dots,X_K} \sum_{i,j} V_{ij}\,\log\!\left(\prod_{k=1}^{K} X_k\right)_{ij} + \sum_{k=1}^{K} \ell(X_k), \quad \text{s.t.}\ X_k^T\mathbf{1}_{l_{k-1}} = \mathbf{1}_{l_k},\ X_k \ge 0, \qquad (9)$$
where $\ell(X)$ is the sparsity regularizer that is larger for a stochastic matrix X with more zero entries.
As the overall mfNMF can be solved by optimizing each individual factor in an SMS problem, we
focus here on the case where the sparsity regularizer of each factor is introduced in (4), to solve
$$\max_X \sum_{i,j} C_{ij}\log(AXB)_{ij} + \ell(X), \quad \text{s.t.}\ X^T\mathbf{1}_{m'} = \mathbf{1}_{n'},\ X \ge 0. \qquad (10)$$
5.1 Dirichlet Sparsity Regularizer
As we have mentioned, the typical choice of $\ell(\cdot)$ as the matrix $\ell_1$ norm is problematic in (10), as
$\|X\|_1 = \sum_{ij} X_{ij} = n'$ is a constant. On the other hand, if we treat each column of X as a point on
a probability simplex, as its elements are nonnegative and sum to one, then we can induce a sparse
regularizer from the Dirichlet distribution. Specifically, a Dirichlet distribution of d-dimensional
vectors $v: v \ge 0,\ \mathbf{1}^T v = 1$ is defined as $\mathrm{Dir}(v;\alpha) = \frac{\Gamma(d\alpha)}{\Gamma(\alpha)^d}\prod_{k=1}^{d} v_k^{\alpha-1}$, where $\Gamma(\cdot)$ is the standard
Gamma function (see footnote 2). The parameter $\alpha \in [0,1]$ controls the sparsity of samples:
smaller $\alpha$ corresponds to a higher likelihood of sparse v under $\mathrm{Dir}(v;\alpha)$.
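This sparsity-inducing behavior of small α is easy to see empirically. The sketch below, with a hypothetical helper of our own naming, measures the fraction of near-zero coordinates in symmetric Dirichlet samples for a given α:

```python
import numpy as np

rng = np.random.default_rng(0)

def near_zero_fraction(alpha, d=20, n=2000, tol=1e-3):
    """Fraction of coordinates below tol in n samples from a symmetric
    d-dimensional Dirichlet with concentration alpha."""
    v = rng.dirichlet([alpha] * d, size=n)
    return (v < tol).mean()
```

With α well below 1 the samples concentrate near the corners and edges of the simplex, so many coordinates fall essentially to zero; at α = 1 (uniform on the simplex) far fewer do.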
Incorporating a Dirichlet regularizer with parameter ?l to each column of X and dropping irrelevant
constant terms, (10) reduces to (see footnote 3)
$$\max_X \sum_{i=1}^{m}\sum_{j=1}^{n} C_{ij}\log(AXB)_{ij} + \sum_{k=1}^{m'}\sum_{l=1}^{n'} (\alpha_l - 1)\log X_{kl}, \quad \text{s.t.}\ X^T\mathbf{1}_{m'} = \mathbf{1}_{n'},\ X \ge 0. \qquad (11)$$
As in the case of mfNMF, we introduce the auxiliary function of log(AXB)_{ij} to form an upper bound of (11), and use an EM-style algorithm to optimize (11). Using the result given in Eqs. (6) and
(7), the optimization problem can be further simplified as:
$$\max_X \sum_{k=1}^{m'}\sum_{l=1}^{n'} (M_{kl} + \alpha_l - 1)\log X_{kl}, \quad \text{s.t.}\ X^T\mathbf{1}_{m'} = \mathbf{1}_{n'},\ X \ge 0. \qquad (12)$$
5.2 Solution to SMS with Dirichlet Sparse Regularizer
However, a direct optimization of (12) is problematic when α_l < 1: if there exists M_kl < 1 − α_l, the
objective function of (12) becomes non-convex and unbounded, as the term (M_kl + α_l − 1) log X_kl
approaches ∞ as X_kl → 0. This problem is addressed in [8] by modifying the definition of the
Dirichlet regularizer in (11) to (α_l − 1) log(X_kl + ε), where ε > 0 is a predefined parameter. This
avoids the problem of taking the logarithm of zero, but it leads to a less efficient algorithm based on an
iterative fixed-point procedure. In addition, the fixed-point algorithm is difficult to interpret, as its effect
on the sparsity of the obtained factors is obscured by the iterative steps.
On the other hand, notice that if we tighten the nonnegativity constraint to X_kl ≥ ε, the objective
function of (12) will always be finite. Therefore, we can simply modify the constraints of (12) to
prevent the objective function from becoming infinite, as:
$$\max_X \sum_{k=1}^{m'}\sum_{l=1}^{n'} (M_{kl} + \alpha_l - 1)\log X_{kl}, \quad \text{s.t.}\ X^T\mathbf{1}_{m'} = \mathbf{1}_{n'},\ X_{kl} \ge \varepsilon,\ \forall k,l. \qquad (13)$$
The following result shows that with a sufficiently small ε, the constrained optimization problem in
(13) has a unique global optimal solution that affords a closed-form and intuitive interpretation.
Footnote 2: For simplicity, we only discuss the symmetric Dirichlet model, but the method can be easily extended to
the non-symmetric Dirichlet model, with a different α value for each dimension.
Footnote 3: Alternatively, this special case of NMF can be formulated as C = AXB + E, where E contains independent Poisson samples [14], and (11) can be viewed as a (log) maximum a posteriori estimation of the column
vectors of X with a Poisson likelihood and symmetric Dirichlet prior.
Figure 1: Sparsification effects on the updated vectors before (left) and after (right) applying the algorithm
given in Lemma 3, with each column illustrating one of the three cases.
Lemma 3. Without loss of generality, we assume $M_{kl} \ne 1 - \alpha_l,\ \forall k,l$ (see footnote 4). If we choose a constant
$\varepsilon \in \left(0,\ \frac{\min_{kl}\{|M_{kl}+\alpha_l-1|\}}{m'\max_{kl}\{|M_{kl}+\alpha_l-1|\}}\right)$, and for each column l define $N_l = \{k \mid M_{kl} < 1-\alpha_l\}$ as the set of
elements with $M_{kl} + \alpha_l - 1 < 0$, then the following is the global optimal solution to (13):
• Case 1. $|N_l| = 0$, i.e., all constant coefficients of (13) are positive:
$$\hat X_{kl} = \frac{M_{kl} + \alpha_l - 1}{\sum_{k'} \left[M_{k'l} + \alpha_l - 1\right]}, \qquad (14)$$
• Case 2. $0 < |N_l| < m'$, i.e., the constant coefficients of (13) have mixed signs:
$$\hat X_{kl} = \varepsilon\,\delta[k \in N_l] + \frac{(1 - \varepsilon|N_l|)\left[M_{kl} + \alpha_l - 1\right]}{\sum_{k'} \left\{\left[M_{k'l} + \alpha_l - 1\right]\delta[k' \notin N_l]\right\}}\,\delta[k \notin N_l], \qquad (15)$$
where $\delta(c)$ is the Kronecker function that takes value 1 if c is true and 0 otherwise.
• Case 3. $|N_l| = m'$, i.e., all constant coefficients of (13) are negative:
$$\hat X_{kl} = \left(1 - (m'-1)\varepsilon\right)\delta\!\left[k = \arg\max_{k'\in\{1,\dots,m'\}} M_{k'l}\right] + \varepsilon\,\delta\!\left[k \ne \arg\max_{k'\in\{1,\dots,m'\}} M_{k'l}\right]. \qquad (16)$$
Proof of Lemma 3 can be found in the supplementary materials. Note that the algorithm provided in
Lemma 3 is still valid when ε = 0, but the theoretical result of it attaining the global optimum of a
finite optimization problem only holds for ε satisfying the condition in Lemma 3.
We can provide an intuitive interpretation of Lemma 3, which is schematically illustrated in Fig. 1
for a toy example. In the first case (first column of Fig. 1), when all constant coefficients of (13) are
positive, the update simply reduces to first decreasing every M_kl by 1 − α_l and then renormalizing each column to
sum to one, Eq. (14). This operation of reducing the same amount from all elements in one column
of M has the effect of making "the rich get richer and the poor get poorer" (known as Dalton's 3rd
law), which increases the imbalance of the elements and improves the chances of small elements
being reduced to zero in the subsequent steps [15] (see footnote 5). In the second case (second column of Fig. 1),
when the coefficients of (13) have mixed signs, the effect of the updating step in (15) is two-fold.
Elements with M_kl < 1 − α_l (first term in Eq. (15)) are all reduced to ε, which is the de facto zero. In
other words, components below the threshold 1 − α_l are eliminated. On the other hand,
terms with M_kl > 1 − α_l (second term in Eq. (15)) are redistributed by the operation of reduction by
1 − α_l followed by renormalization. In the last case, when all coefficients of (13) are negative (third
column of Fig. 1), only the element corresponding to the M_kl closest to the threshold 1 − α_l, or
equivalently, the largest of all M_kl, survives with a non-zero value that is essentially 1 (first term
in Eq. (16)), while the rest of the elements all become extinct (second term in Eq. (16)), analogous to
a scenario of "survival of the fittest". Note that it is the last two cases that actually generate zero entries
in the factors, but the first case makes more entries suitable for being set to zero. The thresholding
and renormalization steps resemble algorithms in sparse coding [16].
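As a concrete illustration of the three cases, here is a small per-column sketch of the closed-form update of Lemma 3; the function name and argument layout are ours, and the paper itself only gives the closed forms in Eqs. (14)-(16):

```python
import numpy as np

def sparse_sms_column(m_col, alpha, eps):
    """Closed-form sparse update of one column (Lemma 3, Eqs. (14)-(16)).

    m_col: the column M_{.l}; alpha: Dirichlet parameter alpha_l in [0, 1];
    eps: the small positive floor from (13).
    """
    c = m_col + alpha - 1.0          # constant coefficients of (13)
    neg = c < 0                      # N_l: elements driven toward zero
    x = np.empty_like(c)
    if not neg.any():                # case 1: reduce by 1 - alpha, renormalize
        x = c / c.sum()
    elif neg.all():                  # case 3: "survival of the fittest"
        x[:] = eps
        x[np.argmax(m_col)] = 1.0 - (len(c) - 1) * eps
    else:                            # case 2: threshold, then redistribute
        x[neg] = eps
        pos = ~neg
        x[pos] = (1.0 - eps * neg.sum()) * c[pos] / c[pos].sum()
    return x
```

In every case the returned column sums to one, and in cases 2 and 3 the suppressed entries sit exactly at the floor ε.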
6 Experimental Evaluations
We perform experimental evaluations of the sparse multi-factor NMF algorithm using synthetic and
real data sets. In the first set of experiments, we study empirically the convergence of the multiplicative algorithm for the SMS problem (Lemma 2). Specifically, with several different choices of
Footnote 4: It is easy to show that the optimal solution in this case is X_kl = 0, i.e., setting the corresponding component
in X to zero. So we can technically ignore such elements for each column index l.
Footnote 5: Some early works (e.g., [11]) obtain a simpler solution by setting negative M_{k'l} + α_l − 1 to zero, followed
by normalization. Our result shows that such a solution is not optimal.
Figure 2: Convergence of the SMS objective function with the multiplicative update algorithm (mult, solid curve)
and the projected gradient ascent method (pgd, dashed curve), for problem sizes (m, n, m', n') =
(20, 20, 10, 10), (90, 50, 20, 5), (200, 100, 50, 25), and (1000, 200, 35, 15).
(m, n, m', n'), we randomly generate stochastic matrices A (m × m') and B (n' × n), and a nonnegative matrix C (m × n). We then apply the SMS algorithm to solve for the optimal X. We
compare our algorithm with a projected gradient ascent optimization of the SMS problem, which
updates X using the gradient of the SMS objective function and chooses a step size to satisfy the
nonnegativity and normalization constraints. We do not consider methods that use the Hessian matrix of the objective function, as constructing a general Hessian matrix in this case has prohibitive
memory requirements even for a medium-sized problem. Shown in Fig. 2 are several runs of the two
algorithms starting at the same initial values, plotting the objective function of SMS vs. the number of
updates of X. Because of the convex nature of the SMS problem, both algorithms converge to the
same optimal value regardless of the initial values. On the other hand, the multiplicative updates for
SMS usually achieve a two-order-of-magnitude speedup in the number of iterations, and are typically about 10
times faster in running time, when compared to the gradient-based algorithm.
In the second set of experiments, we evaluate the performance of the coordinate-update mfNMF
algorithm based on the multiplicative updating algorithm of the SMS problem (Section 4.1). Specifically, we consider the mfNMF problem that approximates a randomly generated target nonnegative
matrix V of dimension m × n with the product of three stochastic factors, W1 (m × m'), W2 (m' × n'),
and W3 (n' × n). The performance of the algorithm is evaluated by the logarithm of the generalized
KL divergence between V and W1 W2 W3, for which lower numerical values indicate better performance. As a comparison, we also implemented a multi-layer NMF algorithm [5], which solves
two NMF problems in sequence, as V ≈ W1 Ṽ and Ṽ ≈ W2 W3, and the multiplicative update
algorithm for mfNMF of [9]; both are based on the generalized KL divergence. To make
the comparison fair, we start all three algorithms with the same initial values.
m, n, m', n'                    50,40,30,10   200,100,60,30   1000,400,200,50   5000,2000,100,20
multi-layer NMF [5]                   1.733           2.595            70.526            183.617
multi-factor NMF [9]                  1.431           2.478            66.614            174.291
multi-factor NMF (this work)          1.325           2.340            62.086            161.338

Table 1: Comparison of the multi-layer NMF method and two mfNMF methods for three factors with different
problem sizes. The values correspond to the logarithm of the generalized KL divergence, log d(V, W1 W2 W3);
lower numerical values indicate better performance.
The results of several runs of these algorithms for different problem sizes are summarized in Table
1, which show that in general, mfNMF algorithms lead to better solutions corresponding to lower
generalized KL divergences between the target matrix and the product of the three estimated factors.
This is likely due to the fact that these algorithms optimize the generalized KL divergence directly,
while multi-layer NMF is a greedy algorithm with sub-optimal solutions. On the other hand, our
mfNMF algorithm consistently outperforms the method of [9] by a significant margin, with on
average 40% fewer iterations. We think the improved performance and running efficiency are due
to our formulation of the mfNMF problem based on stochastic matrices, which reduces the solution
space and encourages convergence to a better local minimum of the objective function.
We apply the sparse mfNMF algorithm to data converted from grayscale images from the MNIST
Handwritten Digits data set [17] that are vectorized to column vectors and normalized to have total
sum of one. All vectorized and normalized images are collected to form the target stochastic matrix V, which is decomposed into the product of three factors W1 W2 W3. We also incorporate the
Dirichlet sparsity regularizers with different configurations. For simplicity, we use the same parameter for all column vectors in one factor. The threshold is set as ε = 10^{-8}/n, where n is the total
number of images. Shown in Fig. 3 are the decomposition results corresponding to 500 vectorized
20 × 20 images of the handwritten digit 3, which are decomposed into three factors of size 400 × 196,
Figure 3: Sparse mfNMF algorithm on the handwritten digit images (panels show W1, W2, W1W2, W3, and the reconstruction W1W2W3). The three rows correspond to three cases
as: α1 = 1, α2 = 1, α3 = 1; α1 = 1, α2 = 1, α3 = 0.99; α1 = 1, α2 = 0.99, α3 = 0.99, respectively. See
text for more details.
196 × 100, and 100 × 500. The columns of the factors are reshaped to be shown as images, where the
brightness of each pixel in the figure is proportional to the nonnegative values in the corresponding
factors. Due to space limits, we only show the first 25 columns in each factor. All three factorization
results can reconstruct the target matrix (last column), but they put different constraints on the obtained factors. The factors are also visually meaningful: factor W1 contains low-level components
of the images that, when combined with factor W2, form more complex structures. The first row
corresponds to running mfNMF without a sparsity regularizer. The two rows below correspond
to the cases when the Dirichlet sparsity regularizer is applied to the third factor, and to the second
and third factors simultaneously. Compared with the corresponding results in the non-sparse case,
the obtained factors contain more zeros. As a comparison, we also implemented the mfNMF algorithm
using a pseudo-Dirichlet sparse regularizer [8]. With similar decomposition results, our algorithm
is typically 3 to 5 times faster, as it does not require the extra iterations of a fixed-point algorithm.
7 Conclusion
We describe in this work a simple and efficient algorithm for the sparse multi-factor nonnegative
matrix factorization (mfNMF) problem, involving only multiplicative updates and normalization. Our
solution for incorporating the Dirichlet sparse regularizer leads to a closed-form update, and the resulting
algorithm is more efficient than previous works based on fixed-point iterations. The effectiveness and
efficiency of our algorithms are demonstrated on both synthetic and real data sets.
There are several directions we would like to further explore. First, we are studying whether similar
multiplicative update algorithms also exist for mfNMF with more general similarity norms, such
as Csiszár's divergence [18], the Itakura-Saito divergence [19], the alpha-beta divergence [20], or the Bregman
divergence [9]. We will also study incorporating other constraints (e.g., value ranges) over the
factors into the mfNMF algorithm. Last, we would like to further study applications of mfNMF
in problems such as co-clustering or hierarchical document topic analysis, exploiting its ability to
recover hierarchical decompositions of nonnegative matrices.
Acknowledgement
This work is supported by the National Science Foundation under Grant Nos. IIS-0953373, IIS-1208463, and CCF-1319800.
References
[1] Daniel D. Lee and H. Sebastian Seung. Algorithms for non-negative matrix factorization. In Advances in Neural Information Processing Systems (NIPS 13), 2001.
[2] A. Cichocki, R. Zdunek, A. H. Phan, and S. Amari. Nonnegative Matrix and Tensor Factorizations: Applications to Exploratory Multi-way Data Analysis and Blind Source Separation. Wiley, 2009.
[3] Jong-Hoon Ahn, Seungjin Choi, and Jong-Hoon Oh. A multiplicative up-propagation algorithm. In ICML, 2004.
[4] Nicolas Gillis and François Glineur. A multilevel approach for nonnegative matrix factorization. Journal of Computational and Applied Mathematics, 236(7):1708-1723, 2012.
[5] A. Cichocki and R. Zdunek. Multilayer nonnegative matrix factorisation. Electronics Letters, 42(16):947-948, 2006.
[6] Patrik O. Hoyer and Peter Dayan. Non-negative matrix factorization with sparseness constraints. Journal of Machine Learning Research, 5:1457-1469, 2004.
[7] Madhusudana Shashanka, Bhiksha Raj, and Paris Smaragdis. Sparse overcomplete latent variable decomposition of counts data. In NIPS, 2007.
[8] Martin Larsson and Johan Ugander. A concave regularization technique for sparse mixture models. In NIPS, 2011.
[9] Suvrit Sra and Inderjit S. Dhillon. Nonnegative matrix approximation: Algorithms and applications. Technical report, Computer Science Department, University of Texas at Austin, 2006.
[10] Jussi Kujala. Sparse topic modeling with concave-convex procedure: EMish algorithm for latent Dirichlet allocation. Technical report, 2004.
[11] Jagannadan Varadarajan, Rémi Emonet, and Jean-Marc Odobez. A sequential topic model for mining recurrent activities from long term video logs. International Journal of Computer Vision, 103(1):100-126, 2013.
[12] S. Boyd and L. Vandenberghe. Convex Optimization. Cambridge University Press, 2005.
[13] P. Gahinet, P. Apkarian, and M. Chilali. Affine parameter-dependent Lyapunov functions and real parametric uncertainty. IEEE Transactions on Automatic Control, 41(3):436-442, 1996.
[14] Wray Buntine and Aleks Jakulin. Discrete component analysis. In Subspace, Latent Structure and Feature Selection Techniques. Springer-Verlag, 2006.
[15] N. Hurley and Scott Rickard. Comparing measures of sparsity. IEEE Transactions on Information Theory, 55(10):4723-4741, 2009.
[16] Misha Denil and Nando de Freitas. Recklessly approximate sparse coding. CoRR, abs/1208.0959, 2012.
[17] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278-2324, 1998.
[18] Andrzej Cichocki, Rafal Zdunek, and Shun-ichi Amari. Csiszár's divergences for non-negative matrix factorization: Family of new algorithms. In Independent Component Analysis and Blind Signal Separation, pages 32-39. Springer, 2006.
[19] Cédric Févotte, Nancy Bertin, and Jean-Louis Durrieu. Nonnegative matrix factorization with the Itakura-Saito divergence: With application to music analysis. Neural Computation, 21(3):793-830, 2009.
[20] Andrzej Cichocki, Rafal Zdunek, Seungjin Choi, Robert Plemmons, and Shun-ichi Amari. Non-negative tensor factorization using alpha and beta divergences. In Acoustics, Speech and Signal Processing (ICASSP 2007), volume 3, pages III-1393 ff. IEEE, 2007.
[21] V. Chvátal. Linear Programming. W. H. Freeman and Company, New York, 1983.
Learning Adaptive Value of Information
for Structured Prediction
Ben Taskar
University of Washington
Seattle, WA
[email protected]
David Weiss
University of Pennsylvania
Philadelphia, PA
[email protected]
Abstract
Discriminative methods for learning structured models have enabled wide-spread
use of very rich feature representations. However, the computational cost of feature extraction is prohibitive for large-scale or time-sensitive applications, often
dominating the cost of inference in the models. Significant efforts have been devoted to sparsity-based model selection to decrease this cost. Such feature selection methods control computation statically and miss the opportunity to finetune feature extraction to each input at run-time. We address the key challenge
of learning to control fine-grained feature extraction adaptively, exploiting nonhomogeneity of the data. We propose an architecture that uses a rich feedback
loop between extraction and prediction. The run-time control policy is learned using efficient value-function approximation, which adaptively determines the value
of information of features at the level of individual variables for each input. We
demonstrate significant speedups over state-of-the-art methods on two challenging datasets. For articulated pose estimation in video, we achieve a more accurate
state-of-the-art model that is also faster, with similar results on an OCR task.
1 Introduction
Effective models in complex computer vision and natural language problems try to strike a favorable
balance between accuracy and speed of prediction. One source of computational cost is inference in
the model, which can be addressed with a variety of approximate inference methods. However, in
many applications, computing the scores of the constituent parts of the structured model, i.e., feature
computation, is the primary bottleneck. For example, when tracking articulated objects in video,
optical flow is a very informative feature that often requires many seconds of computation time per
frame, whereas inference for an entire sequence typically requires only fractions of a second [16];
in natural language parsing, feature computation may take up to 80% of the computation time [7].
In this work, we show that large gains in the speed/accuracy trade-off can be obtained by departing
from the traditional method of "one-size-fits-all" model and feature selection, in which a static set
of features are computed for all inputs uniformly. Instead, we employ an adaptive approach: the
parts of the structured model are constructed specifically at test-time for each particular instance, for
example, at the level of individual video frames. There are several key distinctions of our approach:
• No generative model. One approach is to assume a joint probabilistic model of the input
and output variables and a utility function measuring payoffs. The expected value of information measures the increase in expected utility after observing a given variable [12, 8].
Unfortunately, the problem of computing optimal conditional observation plans is computationally intractable even for simple graphical models like Naive Bayes [9]. Moreover,
joint models of input and output are typically quite inferior in accuracy to discriminative
models of output given input [10, 3, 19, 1].
• Richly parametrized, conditional value function. The central component of our method
is an approximate value function that utilizes a set of meta-features to estimate future
changes in value of information given a predictive model and a proposed feature set as input. The critical advantage here is that the meta-features can incorporate valuable properties
beyond confidence scores from the predictive model, such as long-range input-dependent
cues that convey information about the self-consistency of a proposed output.
• Non-myopic reinforcement learning. We frame the control problem in terms of finding a feature extraction policy that sequentially adds features to the models until a budget
limit is reached, and we show how to learn approximate policies that result in accurate
structured models that are dramatically more efficient. Specifically, we learn to weigh the
meta-features for the value function using linear function approximation techniques from
reinforcement learning, where we utilize a deterministic model that can be approximately
solved with a simple and effective sampling scheme.
In summary, we provide a discriminative, practical architecture that solves the value of information
problem for structured prediction problems. We first learn a prediction model that is trained to use
subsets of features computed sparsely across the structure of the input. These feature combinations
factorize over the graph structure, and we allow for sparsely computed features such that different
vertices and edges may utilize different features of the input. We then use reinforcement learning to
estimate a value function that adaptively computes an approximately optimal set of features given a
budget constraint. Because of the particular structure of our problem, we can apply value function
estimation in a batch setting using standard least-squares solvers. Finally, we apply our method to
two sequential prediction domains: articulated human pose estimation and handwriting recognition.
In both domains, we achieve more accurate prediction models that utilize fewer features than the
traditional monolithic approach.
2 Related Work
There is a significant amount of prior work on the issue of controlling test-time complexity. However, much of this work has focused on the issue of feature extraction for standard classification
problems, e.g. through cascades or ensembles of classifiers that use different subsets of features at
different stages of processing. More recently, feature computation cost has been incorporated explicitly into the learning procedure (e.g., [6, 14, 2, 5]). The most related recent work of this
type is [20], who define a reward function for multi-class classification with a series of increasingly
complex models, or [6], who define a feature acquisition model similar in spirit to ours, but with
a different reward function and modeling a variable trade-off rather than a fixed budget. We also
note that [4] propose explicitly modeling the value of evaluating a classifier, but their approach uses
ensembles of pre-trained models (rather than the adaptive model we propose). And while the goals
of these works are similar to ours (explicitly controlling feature computation at test time), none of the
classifier cascade literature addresses the structured prediction setting or the batch setting.
Most work that addresses learning the accuracy/efficiency trade-off in a structured setting applies
primarily to inference, not feature extraction. E.g., [23] extend the idea of a classifier cascade to
the structured prediction setting, with the objective defined in terms of obtaining accurate inference
in models with large state spaces after coarse-to-fine pruning. More similar to this work, [7] incrementally prune the edge space of a parsing model using a meta-feature-based classifier, reducing
the total number of features that need to be extracted. However, both of these prior efforts rely
entirely on the marginal scores of the predictive model in order to make their pruning decisions, and
do not allow future feature computations to rectify past mistakes, as in the case of our work.
Most related is the prior work of [22], in which one of an ensemble of structured models is selected
on a per-example basis. This idea is essentially a coarse sub-case of the framework presented in this
work, without the adaptive predictive model that allows for composite features that vary across the
input, without any reinforcement learning to model the future value of taking a decision (which is
critical to the success of our method), and without the local inference method proposed in Section 4.
In our experiments (Section 5), the "Greedy (Example)" baseline is representative of the limitations
of this earlier approach.
[Figure 1, left panel: pipeline diagram — INPUT → EXTRACT FEATURES → INFERENCE → OUTPUT, with a feedback loop through EXTRACT META-FEATURES → POLICY.]
Algorithm 1: Inference for x and budget B.
    define an action a as a pair ⟨ρ ∈ G, t ∈ {1, . . . , T}⟩;
    initialize B′ ← 0, z ← 0, y ← h(x, z);
    initialize action space (first tier) A = {(ρ, 1) | ρ ∈ G};
    while B′ < B and |A| > 0 do
        a ← argmax_{a ∈ A} θᵀφ(x, z, a);
        A ← A \ a;
        if c_a ≤ (B − B′) then
            z ← z + a, B′ ← B′ + c_a, y ← h(x, z);
            A ← A ∪ (ρ, t + 1);
        end
    end
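The pseudocode above can be transcribed into Python directly. In the minimal sketch below, `value`, `cost`, and `predict` are hypothetical callables standing in for θᵀφ(x, z, a), c_a, and the predictor h(x, z); the extraction state z is kept as a set of (part, tier) pairs rather than a binary vector:

```python
def budgeted_inference(x, parts, T, B, cost, value, predict):
    """Sketch of Algorithm 1: greedily extract the highest-valued
    (part, tier) feature sets until the budget B is exhausted.
    `value(x, z, a)` stands in for theta^T phi(x, z, a); `cost(a)` is c_a;
    `predict(x, z)` stands in for the inference step y = h(x, z)."""
    spent = 0.0
    z = set()                           # extracted (part, tier) pairs
    y = predict(x, z)
    actions = {(p, 1) for p in parts}   # first tier for every part of G
    while spent < B and actions:
        a = max(actions, key=lambda act: value(x, z, act))
        actions.remove(a)
        if cost(a) <= B - spent:        # only take affordable extractions
            z.add(a)
            spent += cost(a)
            y = predict(x, z)
            part, t = a
            if t < T:                   # unlock the next tier for this part
                actions.add((part, t + 1))
    return y
```

Note the tiered structure: extracting tier t for a part only makes tier t + 1 of that same part available, mirroring the A ← A ∪ (ρ, t + 1) step.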
Figure 1: Overview of our approach. (Left) A high-level summary of the processing pipeline: as in standard
structured prediction, features are extracted and inference is run to produce an output. However, information
may optionally feed back in the form of extracted meta-features that are used by a control policy to determine
another set of features to be extracted. Note that we use stochastic subgradient descent to learn the inference model w
first and reinforcement learning to learn the control model θ given w. (Right) Detailed algorithm for factor-wise inference for an example x given a graph structure G and budget B. The policy repeatedly selects the
highest-valued action from an action space A that represents extracting features for each constituent part of the
graph structure G.
3 Learning Adaptive Value of Information for Structured Prediction
Setup. We consider the setting of structured prediction, in which our goal is to learn a hypothesis
mapping inputs x ∈ X to outputs y ∈ Y(x), where |x| = L and y is an L-vector of K-valued
variables, i.e. Y(x) = Y₁ × · · · × Y_L and each Y_i = {1, . . . , K}. We follow the standard max-margin
structured learning approach [18] and consider linear predictive models of the form wᵀf(x, y).
However, we introduce an additional explicit feature extraction state vector z:

    h(x, z) = argmax_{y ∈ Y(x)} wᵀf(x, y, z).        (1)
Above, f(x, y, z) is a sparse vector of D features that takes time cᵀz to compute for a non-negative
cost vector c and binary indicator vector z of length |z| = F. Intuitively, z indicates which of F sets
of features are extracted when computing f; z = 1 means every possible feature is extracted, while
z = 0 means that only a minimum set of features is extracted.
Note that by incorporating z into the feature function, the predictor h can learn to use different linear
weights for the same underlying feature value by conditioning the feature on the value of z. As we
discuss in Section 5, adapting the weights in this way is crucial to building a predictor h that works
well for any subset of features. We will discuss how to construct such features in more detail in
Section 4.
Suppose we have learned such a model h. At test time, our goal is to make the most accurate
predictions possible for an example under a fixed budget B. Specifically, given h and a loss function
ℓ : Y × Y → ℝ₊, we wish to find the following:

    H(x, B) = argmin_z E_{y|x}[ℓ(y, h(x, z))]        (2)
In practice, there are three primary difficulties in optimizing equation (2). First, the distribution
P(Y|X) is unknown. Second, there are exponentially many z's to explore. Most important, however, is the fact that we do not have free access to the objective function. Instead, given x, we are
optimizing over z using a function oracle, since we cannot compute f(x, y, z) without paying cᵀz,
and the total cost of all the calls to the oracle must not exceed B. Our approach to solving these
problems is outlined in Figure 1; we learn a control model (i.e. a policy) by posing the optimization
problem as an MDP and using reinforcement learning techniques.
Adaptive extraction MDP. We model the budgeted prediction optimization as the following Markov
Decision Process. The state of the MDP s is the tuple (x, z) for an input x and feature extraction
state z (for brevity we will simply write s). The start state is s₀ = (x, 0), with x ∼ P(X), and
z = 0 indicating only a minimal set of features have been extracted. The action space A(s) is
{i | z_i = 0} ∪ {0}, where z_i is the i-th element of z; given a state-action pair (s, a), the next state is
deterministically s′ = (x, z + e_a), where e_a is the indicator vector with a 1 in the a-th component, or
the zero vector if a = 0. Thus, at each state we can choose to extract one additional set of features,
or none at all (at which point the process terminates). Finally, for fixed h, we define the shorthand
ε(s) = E_{y|x} ℓ(y, h(x, z)) to be the expected error of the predictor h given state z and input x.
We now define the expected reward to be the adaptive value of information of extracting the a-th set
of features given the system state and budget B:

    R(s, a, s′) = { ε(s) − ε(s′)   if cᵀz(s′) ≤ B
                  { 0              otherwise            (3)

Intuitively, (3) says that each time we add additional features to the computation, we gain reward
equal to the decrease in error achieved with the new features (or pay a penalty if the error increases).
However, if we ever exceed the budget, then any further decrease does not count; no more reward
can be gained. Furthermore, assuming f(x, y, z) can be cached appropriately, it is clear that we pay
only the additional computational cost c_a for each action a, so the entire cumulative computational
burden of reaching some state s is exactly cᵀz for the corresponding z vector.
Given a trajectory of states s₀, s₁, . . . , s_T computed by some deterministic policy π, it is clear that
the final cumulative reward Rπ(s₀) is the difference between the starting error rate and the error rate of
the last state satisfying the budget:

    Rπ(s₀) = ε(s₀) − ε(s₁) + ε(s₁) − · · · = ε(s₀) − ε(s_t⋆),        (4)

where t⋆ is the index of the final state within the budget constraint. Therefore, the optimal policy
π⋆ that maximizes expected reward will compute z⋆ minimizing (2) while satisfying the budget
constraint.
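The reward (3) and its telescoping sum (4) can be sanity-checked in a few lines. The following toy sketch uses hypothetical per-state error and cumulative-cost tables (plain Python lists indexed by state); it is not part of the method itself:

```python
def reward(err, cum_cost, s, s2, B):
    """Eq. (3): the error decrease err[s] - err[s2], counted only if the
    cumulative feature cost of the next state stays within budget B."""
    return err[s] - err[s2] if cum_cost[s2] <= B else 0.0

def trajectory_reward(err, cum_cost, B):
    """Eq. (4): since cumulative cost is nondecreasing, the per-step rewards
    telescope to err at the start minus err at the last in-budget state."""
    return sum(reward(err, cum_cost, t, t + 1, B) for t in range(len(err) - 1))
```

For example, with errors [0.5, 0.4, 0.3, 0.25], cumulative costs [0, 1, 2, 5], and B = 2, the final transition exceeds the budget and earns nothing, so the total is 0.5 − 0.3.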
Learning an approximate policy with long-range meta-features. In this work, we focus on a
straightforward method for learning an approximate policy: a batch version of least-squares policy
iteration [11] based on Q-learning [21]. We parametrize the policy using a linear function of meta-features φ computed from the current state s = (x, z): π_θ(s) = argmax_a θᵀφ(x, z, a). The meta-features (which we abbreviate as simply φ(s, a) henceforth) need to be rich enough to represent
the value of choosing to expand feature a for a given partially-computed example (x, z). Note that
we already have computed f(x, h(x, z), z), which may be useful in estimating the confidence of
the model on a given example. However, we have much more freedom in choosing φ(s, a) than
we had in choosing f; while f is restricted to ensure that inference is tractable, we have no such
restriction for φ. We therefore compute functions of h(x, z) that take into account large sets of
output variables, and since we need only compute them for the particular output h(x, z), we can
do so very efficiently. We describe the specific φ we use in our experiments in Section 5, typically
measuring the self-consistency of the output as a surrogate for the expected accuracy.
One-step off-policy Q-learning with least-squares. To simplify the notation, we will assume that given
the current state s, taking action a deterministically yields state s′. Given a policy π, the value of the policy
is recursively defined as the immediate expected reward plus the discounted value of the next state:

    Qπ(s, a) = R(s, a, s′) + γ Qπ(s′, π(s′)).        (5)

The goal of Q-learning is to learn the Q for the optimal policy π⋆ with maximal Q_{π⋆}; however, it is
clear that we can increase Q by simply stopping early when Qπ(s, a) < 0 (the future reward in this
case is simply zero). Therefore, we define the off-policy optimized value Q̃π as follows:

    Q̃π(s_t, π(s_t)) = R(s_t, π(s_t), s_{t+1}) + γ [Q̃π(s_{t+1}, π(s_{t+1}))]₊.        (6)
We propose the following one-step algorithm for learning Q from data. Suppose we have a finite
trajectory s₀, . . . , s_T. Because both π and the state transitions are deterministic, we can unroll the
recursion in (6) and compute Q̃π(s_t, π(s_t)) for each sample using simple dynamic programming.
For example, if γ = 1 (there is no discount for future reward), we obtain Q̃π(s_i, π(s_i)) = ε(s_i) −
ε(s_t⋆), where t⋆ is the optimal stopping time that satisfies the given budget.
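The unrolled recursion is a single backward pass over the trajectory. A sketch (hypothetical helper; per-state errors ε(s_t) and cumulative costs cᵀz(s_t) are passed as plain lists):

```python
def q_targets(errors, costs, budget, gamma=1.0):
    """Backward dynamic program for the off-policy targets of Eq. (6).
    errors[t] is the expected error of state s_t; costs[t] is its cumulative
    feature cost c^T z(s_t). Returns one Q-target per transition."""
    T = len(errors) - 1
    q = [0.0] * T
    future = 0.0
    for t in reversed(range(T)):
        # reward of Eq. (3): error decrease, but only while within budget
        r = errors[t] - errors[t + 1] if costs[t + 1] <= budget else 0.0
        # Eq. (6): clip negative future value at zero (early stopping)
        q[t] = r + gamma * max(future, 0.0)
        future = q[t]
    return q
```

With γ = 1 and an ample budget, q[0] recovers ε(s₀) − ε(s_t⋆) for the optimal stopping time t⋆, as in the text.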
We therefore learn parameters θ⋆ for an approximate Q as follows. Given an initial policy π, we
execute π for each example (x_j, y_j) to obtain trajectories s_{j0}, . . . , s_{jT}. We then solve the following
least-squares optimization,

    θ⋆ = argmin_θ λ‖θ‖² + (1/nT) Σ_{j,t} ( θᵀφ(s_{jt}, π(s_{jt})) − Q̃π(s_{jt}, π(s_{jt})) )²,        (7)
using cross-validation to determine the regularization parameter λ.
We perform a simple form of policy iteration as follows. We first initialize π by estimating the
expected reward function (this can be estimated by randomly sampling pairs (s, s′), which are more
efficient to compute than Q-functions on trajectories). We then compute trajectories under π_θ and
use these trajectories to compute θ⋆ approximating Q̃π. We found that additional iterations of
policy iteration did not noticeably change the results.
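The regularized least-squares problem (7) has the usual ridge-regression closed form. A numpy sketch (function and variable names are ours, not the paper's):

```python
import numpy as np

def fit_theta(Phi, Q, lam):
    """Solve Eq. (7) in closed form: theta minimizing
    lam * ||theta||^2 + (1/N) * sum_k (theta^T Phi[k] - Q[k])^2,
    where rows of Phi are meta-feature vectors phi(s_jt, pi(s_jt))
    and Q holds the corresponding Q-targets."""
    n, d = Phi.shape
    A = Phi.T @ Phi / n + lam * np.eye(d)   # normal equations plus ridge term
    b = Phi.T @ Q / n
    return np.linalg.solve(A, b)
```

Since the targets are computed once per trajectory, this is an ordinary batch regression; any standard least-squares solver applies.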
Learning for multiple budgets. One potential drawback of our approach as just described is that we
must learn a different policy for every desired budget. A more attractive alternative is to learn a
single policy that is tuned to a range of possible budgets. One solution is to set γ = 1 and learn
with B = ∞, so that the value Q̃π represents the best improvement possible using some optimal
budget B⋆; however, at test time, it may be that B⋆ is greater than the available budget B and Q̃π is
an over-estimate. By choosing γ < 1, we can trade off between valuing reward for short-term gain
with smaller budgets B < B⋆ and longer-term gain with the unknown optimal budget B⋆.
In fact, we can further encourage our learned policy to be useful for smaller budgets by adjusting
the reward function. Note that two trajectories that start at s₀ and end at s_t⋆ will have the same
reward, yet one trajectory might maintain a much lower error rate than the other during the process
and therefore be more useful for smaller budgets. We therefore add a shaping component to the
expected reward in order to favor the more useful trajectory as follows:

    R̂(s, a, s′) = ε(s) − ε(s′) − α [ε(s′) − ε(s)]₊.        (8)
This modification introduces a term that does not cancel when transitioning from one state to the
next if the next state has higher error than our current state. Thus, we can only achieve the optimal
reward ε(s₀) − ε(s_t⋆) when there is a sequence of feature extractions that never increases the error
rate¹; if such a sequence does not exist, then the parameter α controls the trade-off between the
importance of reaching s_t⋆ and minimizing any errors along the way. Note that we can still use the
procedure described above to learn θ when using R̂ instead of R. We use a development set to
tune α as well as γ to find the most useful policy when sweeping B across a range of budgets.
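A small sketch of the shaped reward (8) and its trajectory total illustrates why the shaping term breaks the telescoping of Eq. (4) whenever error rises (names are ours; errors are passed as a plain list):

```python
def shaped_reward(err_s, err_s2, alpha):
    """Eq. (8): the usual error decrease, minus an extra penalty of
    alpha * [err(s') - err(s)]_+ whenever a transition increases error."""
    return (err_s - err_s2) - alpha * max(err_s2 - err_s, 0.0)

def shaped_total(errors, alpha):
    """Total shaped reward along a trajectory; equals the plain telescoped
    reward errors[0] - errors[-1] only when the error never rises."""
    return sum(shaped_reward(errors[t], errors[t + 1], alpha)
               for t in range(len(errors) - 1))
```

Two trajectories with the same endpoints now earn different totals if one of them passes through a higher-error intermediate state, which is exactly the preference the shaping is meant to encode.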
Batch mode inference. At test time, we are typically given a test set of m examples, rather than
a single example. In this setting the budget applies to the entire inference process, and it may be
useful to spend more of the budget on difficult examples rather than allocate the budget evenly across
all examples. In this case, we extend our framework to concatenate the states of all m examples:
s = (x₁, . . . , x_m, z₁, . . . , z_m). The action consists of choosing an example and then choosing
an action within that example's sub-state; our policy searches over the space of all actions for all
examples simultaneously. Because of this, we impose additional constraints on the action space,
specifically:

    z(a, ·) = 1  ⟹  z(a′, ·) = 1,  ∀a′ < a.        (9)

Equation (9) states that there is an inherent ordering of feature extractions, such that we cannot
compute the a-th feature set without first computing feature sets 1, . . . , a − 1. This greatly simplifies
the search space in the batch setting while at the same time preserving enough flexibility to yield
significant improvements in efficiency.
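Under the ordering constraint (9), the batch action space collapses to at most one candidate action per example: its first unextracted feature set. A small sketch, assuming one binary extraction vector per example:

```python
def batch_actions(z_batch):
    """Enumerate the legal batch actions under Eq. (9). z_batch[j] is the
    binary extraction vector for example j; the only legal next action for
    each example is its first unextracted feature set (if any)."""
    actions = []
    for j, z in enumerate(z_batch):
        a = next((a for a, bit in enumerate(z) if bit == 0), None)
        if a is not None:
            actions.append((j, a))
    return actions
```

The policy then scores these m (or fewer) candidates with θᵀφ and extracts for whichever example looks most valuable, rather than spreading the budget evenly.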
Baselines. We compare to two baselines: a simple entropy-based approach and a more complex
imitation learning scheme (inspired by [7]) in which we learn a classifier to reproduce a target
policy given by an oracle. The entropy-based approach simply computes probabilistic marginals
and extracts features for whichever portion of the output space has highest entropy in the predicted
distribution. For the imitation learning model, we use the same trajectories used to learn Q̃π, but
instead we create a classification dataset of positive and negative examples given a budget B by
assigning all state/action pairs along a trajectory within the budget as positive examples and all
budget violations as negative examples. We tune the budget B using a development set to optimize
the overall trade-off when the policy is evaluated with multiple budgets.
¹ While adding features decreases training error on average, even on the training set additional features may
lead to increased error for any particular example.
Feature Tier (T)   Error (%)   Time: Entropy   Time: Q-Learn   Time: Fixed
4                  44.07       16.20s          8.91s           16.20s
3                  46.17       8.10s           5.51s           12.00s
2                  46.98       6.80s           4.86s           5.50s
1                  51.49       --              --              2.75s
Best               43.45       --              13.45s          --

Table 1: Trade-off between average elbow and wrist error rate and total runtime achieved by our method
on the pose dataset; each row fixes an error rate and determines the amount of time required by each method
to achieve that error. Unlike using entropy-based confidence scores, our Q-learning approach always improves
runtime over a priori selection and even yields a faster model that is also more accurate (final row).
4 Design of the information-adaptive predictor h
Learning. We now address the problem of learning h(x, z) from n labeled data points {(x_j, y_j)}_{j=1}^n.
Since we do not necessarily know the test-time budget during training (nor would we want to repeat
the training process for every possible budget), we formulate the problem of minimizing the expected
training loss according to a uniform distribution over budgets:

    w⋆ = argmin_w λ‖w‖² + (1/n) Σ_{j=1}^n E_z[ℓ(y_j, h(x_j, z))].        (10)

Note that if ℓ is convex, then (10) is a weighted sum of convex functions and is also convex. Our
choice of distribution for z will determine how the predictor h is calibrated. In our experiments, we
sampled z's uniformly at random. To learn w, we use Pegasos-style [17] stochastic sub-gradient descent; we approximate the expectation in (10) by resampling z every time we pick up a new example
(x_j, y_j). We set λ and a stopping-time criterion through cross-validation on a development set.
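A schematic of this training loop may help; the sketch below treats the structured pieces as black-box oracles (all three callables are hypothetical stand-ins: `sample_z` draws an extraction state uniformly, `loss_aug_argmax` performs loss-augmented inference, and `features` returns f(x, y, z)):

```python
def train_w(data, D, sample_z, features, loss_aug_argmax, lam, epochs=20):
    """Pegasos-style stochastic sub-gradient sketch for Eq. (10)."""
    w = [0.0] * D
    t = 0
    for _ in range(epochs):
        for x, y in data:
            t += 1
            eta = 1.0 / (lam * t)            # Pegasos step size
            z = sample_z()                   # resample z per example
            y_hat = loss_aug_argmax(w, x, y, z)
            f_y = features(x, y, z)
            f_hat = features(x, y_hat, z)
            # shrink by the regularizer, step along the margin subgradient
            w = [(1.0 - eta * lam) * wi + eta * (a - b)
                 for wi, a, b in zip(w, f_y, f_hat)]
    return w
```

Resampling z inside the loop is what approximates the expectation over budgets in (10): the learned w must work for whatever subset of features happens to be available.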
Feature design. We now turn to the question of designing f(x, y, z). In the standard pair-wise
graphical model setting (before considering z), we decompose a feature function f(x, y) into unary
and pairwise features. We consider several different schemes for incorporating z, of varying complexity. The simplest scheme is to use several different feature functions f¹, . . . , f^F. Then |z| = F,
and z_a = 1 indicates that f^a is computed. Thus, we have the following expression, where we use
z(a) to indicate the a-th element of z:

    f(x, y, z) = Σ_{a=1}^F z(a) [ Σ_{i∈V} f_u^a(x, y_i) + Σ_{(i,j)∈E} f_e^a(x, y_i, y_j) ]        (11)

Note that in practice we can choose each f^a to be a sparse vector such that f^a · f^{a′} = 0 for all a′ ≠ a;
that is, each feature function f^a "fills out" a complementary section of the feature vector f.
A much more powerful approach is to create a feature vector as the composite of different extracted
features for each vertex and edge in the model. In this setting, we set z = [z_u z_e], where |z| =
(|V| + |E|)F, and we have

    f(x, y, z) = Σ_{i∈V} Σ_{a=1}^F z_u(a, i) f_u^a(x, y_i) + Σ_{(i,j)∈E} Σ_{a=1}^F z_e(a, ij) f_e^a(x, y_i, y_j).        (12)

We refer to this latter feature extraction method as factor-level feature extraction, and the former as
example-level.²
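The composition in (12) can be sketched concretely. The sketch below assumes each block f_u^a(x, y_i) and f_e^a(x, y_i, y_j) has already been embedded as a length-D vector occupying its own complementary section, so that summation composes them (names and data layout are ours):

```python
import numpy as np

def factor_level_features(unary, edge, z_u, z_e):
    """Sketch of Eq. (12): sum per-vertex and per-edge feature blocks,
    each gated by the corresponding entry of z. unary[i][a] and edge[ij][a]
    are length-D vectors; z_u, z_e are nested 0/1 gates."""
    f = np.zeros_like(np.asarray(unary[0][0], dtype=float))
    for i, blocks in enumerate(unary):
        for a, vec in enumerate(blocks):
            if z_u[i][a]:
                f += np.asarray(vec, dtype=float)
    for ij, blocks in enumerate(edge):
        for a, vec in enumerate(blocks):
            if z_e[ij][a]:
                f += np.asarray(vec, dtype=float)
    return f
```

Because the blocks occupy disjoint sections, each vertex and edge can contribute a different subset of the F feature sets without the contributions interfering.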
Reducing inference overhead. Feature computation time is only one component of the computational cost in making predictions; computing the argmax (1) given f can also be expensive. Note

² The restriction (9) also allows us to increase the complexity of the feature function f as follows: when
using the a-th extraction, we allow the model to re-weight the features from extractions 1 through a. In other
words, we condition the value of the feature on the current set of features that have been computed; since
there are only F sets in the restricted setting (and not 2^F), this is a feasible option. We simply define f̃^a =
[0 . . . f¹ . . . f^a . . . 0], where we add duplicates of features f¹ through f^a for each feature block a. Thus,
the model can learn different weights for the same underlying features based on the current level of feature
extraction; we found that this was crucial for optimal performance.
[Figure 2 (plots): change in elbow/wrist localization accuracy vs. total runtime (s) on the pose dataset, comparing Forward Selection, Entropy (Factor), Greedy (Example), Greedy (Factor), Imitation, and Q-Learning.]
Figure 2: Trade-off performance on the pose dataset for wrists (left) and elbows (right). The curve shows
the increase in accuracy over the minimal-feature model as a function of total runtime per frame (including
all overhead). We compare to two baselines that involve no learning: forward selection and extracting factor-wise features based on the entropy of marginals at each position ("Entropy"). The learned policy results are
either greedy ("Greedy" example-level and factor-level) or non-myopic (either our "Q-learning" or the baseline
"Imitation"). Note that the example-wise method is far less effective than the factor-wise extraction strategy.
Furthermore, Q-learning in particular achieves higher-accuracy models at a fraction of the computational cost
of using all features, and is more effective than imitation learning.
that for reasons of simplicity, we only consider low tree-width models in this work, for which (1) can
be efficiently solved via a standard max-sum message-passing algorithm. Nonetheless, since φ(s, a)
requires access to h(x, z), we must run message passing every time we compute a new state s
in order to compute the next action. Therefore, we run message passing once and then perform less
expensive local updates using saved messages from the previous iteration. We define a simple algorithm for such quiescent inference (given in the Supplemental material); we refer to this inference
scheme as q-inference. The intuition is that we stop propagating messages once the magnitude of
the update to the max-marginal decreases below a certain threshold q; we define q in terms of the
margin of the current MAP decoding at the given position, since that margin must be surpassed if
the MAP decoding is to change as a result of inference.
5 Experiments
5.1 Tracking of human pose in video
Setup. For this problem, our goal is to predict the joint locations of human limbs in video clips
extracted from Hollywood movies. Our testbed is the MODEC+S model proposed in [22]; the
MODEC+S model uses the MODEC model of [15] to generate 32 proposed poses per frame of a
video sequence, and then combines the predictions using a linear-chain structured sequential prediction model. There are four types of features used by MODEC+S, the final and most expensive
of which is a coarse-to-fine optical flow [13]; we incrementally compute poses and features to minimize the total runtime. For more details on the dataset/features, see [22]. We present cross validation
results averaged over 40 80/20 train/test splits of the dataset. We measure localization performance
of elbows and wrists in terms of the percentage of times the predicted locations fall within 20 pixels of
the ground truth.
Meta-features. We define the meta-features φ(s, a) in terms of the targeted position in the sequence
i and the current predictions ŷ = h(x, z). Specifically, we concatenate the already-computed unary
and edge features of ŷ_i and its neighbors (conditioned on the value of z at i), the margin of the
current MAP decoding at position i, and a measure of self-consistency computed on ŷ as follows.
For all sets of m frames overlapping with frame i, we extract color histograms for the predicted
arm segments and compute the maximum χ²-distance from the first frame to any other frame; we
then also add an indicator feature for whether each of these maximum distances exceeds 0.5, and repeat for
m = 2, . . . , 5. We also add several bias terms for which sets of features have been extracted around
position i.
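The self-consistency computation above can be sketched as follows. This is our own reconstruction from the description (function names and the histogram representation are assumptions), not the paper's code:

```python
import numpy as np

def chi2(p, q, eps=1e-12):
    """Chi-squared distance between two histograms."""
    return 0.5 * float(np.sum((p - q) ** 2 / (p + q + eps)))

def consistency_feats(hists, i, max_m=5, thresh=0.5):
    """For each window length m = 2..max_m, over windows of m consecutive
    frames containing frame i, take the max chi^2 distance from the window's
    first frame to any other, and append that distance plus an indicator
    that it exceeds `thresh`."""
    feats = []
    n = len(hists)
    for m in range(2, max_m + 1):
        best = 0.0
        for start in range(max(0, i - m + 1), min(i, n - m) + 1):
            d = max(chi2(hists[start], hists[start + k]) for k in range(1, m))
            best = max(best, d)
        feats.extend([best, float(best > thresh)])
    return feats
```

Inconsistent color histograms across nearby frames suggest an unstable pose track, which is exactly the long-range cue a marginal confidence score cannot provide.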
[Figure 3 (plots): improvement (%) vs. additional feature cost (left) and additional total cost including inference overhead (right) on the OCR dataset, comparing Single Tier, Greedy (Example), and Q-learning with q = 0, 0.1, and 0.5.]
Figure 3: Controlling overhead on the OCR dataset. While our approach is extremely efficient in terms of
how many features are extracted (left), the additional overhead of inference is prohibitively expensive for the
OCR task without applying q-inference (right) with a large threshold. Furthermore, although the example-wise
strategy is less efficient in terms of features extracted, it is more efficient in terms of overhead.
Discussion. We present a short summary of our pose results in Table 1, and compare to various
baselines in Figure 2. We found that our Q-learning approach is consistently more effective than all
baselines; Q-learning yields a model that is both more accurate and faster than the baseline model
trained with all features. Furthermore, while the feature extraction decisions of the Q-learning model
are significantly correlated with the error of the starting predictions (ρ = 0.23), the entropy-based
decisions are not (ρ = 0.02), indicating that our learned reward signal is much more informative.
5.2 Handwriting recognition
Setup. For this problem, we use the OCR dataset from [19], which is pre-divided into 10 folds that
we use for cross-validation. We use three sets of features: the original pixels (free), and two sets
of Histogram-of-Gradient (HoG) features computed on the images for different bin sizes. Unlike
the pose setting, the features are very fast to compute compared to inference. Thus, we evaluate the
effectiveness of q-inference with various thresholds to minimize inference time. For meta-features,
we use the same construction as for pose, but instead of the inter-frame χ²-distance we use a binary
indicator as to whether or not the specific m-gram occurred in the training set. The results are
summarized in Figure 3; see the caption for details.
Discussion. Our method is extremely efficient in terms of the features computed for h; however,
unlike the pose setting, the overhead of inference is on par with the feature computation. Thus, we
obtain a more accurate model with q = 0.5 that is 1.5× faster, even though it uses only 1/5 of the
features; if the implementation of inference were improved, we would expect a speedup much closer
to 5×.
6 Conclusion
We have introduced a framework for learning feature extraction policies and predictive models that
adaptively select features for extraction in a factor-wise, on-line fashion. On two tasks our approach
yields models that are both more accurate and far more efficient; our work is a significant step towards
eliminating the feature extraction bottleneck in structured prediction. In the future, we intend to
extend this approach to apply to loopy model structures where inference is intractable, and more
importantly, to allow for features that change the structure of the underlying graph, so that the graph
structure can adapt to both the complexity of the input and the test-time computational budget.
Acknowledgements. The authors were partially supported by ONR MURI N000141010934, NSF
CAREER 1054215, and by STARnet, a Semiconductor Research Corporation program sponsored
by MARCO and DARPA.
References
[1] Y. Altun, I. Tsochantaridis, and T. Hofmann. Hidden Markov support vector machines. In Proc. ICML,
2003.
[2] M. Chen, Z. Xu, K.Q. Weinberger, O. Chapelle, and D. Kedem. Classifier cascade for minimizing feature
evaluation cost. In AISTATS, 2012.
[3] M. Collins. Discriminative training methods for hidden markov models: theory and experiments with
perceptron algorithms. In Proc. EMNLP, 2002.
[4] T. Gao and D. Koller. Active classification based on value of classifier. In NIPS, 2011.
[5] A. Grubb and D. Bagnell. Speedboost: Anytime prediction with uniform near-optimality. In AISTATS,
2012.
[6] H. He, H. Daumé III, and J. Eisner. Imitation learning by coaching. In NIPS, 2012.
[7] H. He, H. Daumé III, and J. Eisner. Dynamic feature selection for dependency parsing. In EMNLP, 2013.
[8] R. A. Howard. Information value theory. IEEE Transactions on Systems Science and Cybernetics,
2(1):22-26, 1966.
[9] Andreas Krause and Carlos Guestrin. Optimal value of information in graphical models. Journal of
Artificial Intelligence Research (JAIR), 35:557-591, 2009.
[10] J.D. Lafferty, A. McCallum, and F.C.N. Pereira. Conditional random fields: Probabilistic models for
segmenting and labeling sequence data. In Proc. ICML, 2001.
[11] M. Lagoudakis and R. Parr. Least-squares policy iteration. JMLR, 2003.
[12] Dennis V. Lindley. On a measure of the information provided by an experiment. The Annals of Mathematical Statistics, pages 986-1005, 1956.
[13] C. Liu. Beyond Pixels: Exploring New Representations and Applications for Motion Analysis. PhD thesis,
MIT, 2009.
[14] V.C. Raykar, B. Krishnapuram, and S. Yu. Designing efficient cascaded classifiers: tradeoff between
accuracy and cost. In SIGKDD, 2010.
[15] B. Sapp and B. Taskar. MODEC: Multimodal decomposable models for human pose estimation. In
CVPR, 2013.
[16] B. Sapp, D. Weiss, and B. Taskar. Parsing human motion with stretchable models. In CVPR, 2011.
[17] S. Shalev-Shwartz, Y. Singer, and N. Srebro. Pegasos: Primal estimated sub-gradient solver for SVM. In
ICML, 2007.
[18] B. Taskar, V. Chatalbashev, D. Koller, and C. Guestrin. Learning structured prediction models: A large
margin approach. In ICML, 2005.
[19] B. Taskar, C. Guestrin, and D. Koller. Max-margin Markov networks. In NIPS, 2003.
[20] K. Trapeznikov and V. Saligrama. Supervised sequential classification under budget constraints. In
AISTATS, 2013.
[21] C. Watkins and P. Dayan. Q-learning. Machine learning, 1992.
[22] D. Weiss, B. Sapp, and B. Taskar. Dynamic structured model selection. In ICCV, 2013.
[23] D. Weiss and B. Taskar. Structured prediction cascades. In AISTATS, 2010.
alternative:1 original:1 ensure:1 graphical:3 opportunity:1 daum:2 eisner:2 objective:2 intend:1 already:2 question:1 strategy:2 primary:2 traditional:2 surrogate:1 bagnell:1 gradient:3 distance:3 parametrized:1 evenly:1 reason:1 assuming:1 length:1 index:1 balance:1 minimizing:4 modec:5 optionally:1 setup:3 unfortunately:1 difficult:1 weinberg:1 hog:1 negative:3 design:2 implementation:1 policy:31 unknown:2 perform:2 observation:1 datasets:1 markov:4 howard:1 finite:1 descent:1 immediate:1 payoff:1 incorporated:1 ever:1 frame:10 y1:1 sweeping:1 david:1 introduced:1 pair:5 required:1 optimized:1 z1:1 learned:5 distinction:1 testbed:1 nip:3 address:4 beyond:2 below:1 xm:1 factorwise:2 sparsity:1 challenge:1 program:1 max:4 including:1 video:6 critical:2 natural:2 rely:1 difficulty:1 indicator:4 abbreviate:1 recursion:1 cascaded:1 arm:1 scheme:5 movie:1 naive:1 extract:5 philadelphia:1 prior:3 literature:1 acknowledgement:1 loss:2 par:1 expect:1 limitation:1 nonhomogeneity:1 srebro:1 validation:4 s0:20 row:2 summary:3 repeat:2 last:1 free:2 supported:1 bias:1 allow:4 perceptron:1 wide:1 fall:1 taking:2 neighbor:1 sparse:2 departing:1 feedback:2 curve:1 evaluating:1 cumulative:2 rich:3 computes:2 transition:1 forward:3 gram:1 adaptive:8 reinforcement:6 author:1 far:2 transaction:1 approximate:7 pruning:2 sequentially:1 active:1 quiescent:1 discriminative:4 factorize:1 imitation:7 shwartz:1 search:2 table:2 learn:19 ca:3 career:1 obtaining:1 posing:1 complex:3 necessarily:1 domain:2 did:1 aistats:3 spread:1 complementary:1 convey:1 x1:1 xu:1 representative:1 fashion:1 sub:4 position:5 pereira:1 explicit:1 wish:1 deterministically:2 jmlr:1 watkins:1 grained:1 transitioning:1 specific:2 zu:2 svm:1 intractable:2 incorporating:2 burden:1 sequential:3 adding:1 gained:3 ci:1 importance:1 phd:1 magnitude:1 budget:35 conditioned:1 margin:6 chen:1 valuing:1 entropy:10 simply:7 explore:1 gao:1 ez:1 tracking:2 partially:2 applies:2 truth:1 determines:2 satisfies:1 extracted:13 
conditional:3 goal:5 targeted:1 towards:1 feasible:1 change:4 specifically:6 uniformly:2 reducing:2 miss:1 total:8 indicating:2 select:1 metafeatures:3 support:1 latter:1 collins:1 brevity:1 incorporate:1 evaluate:1 correlated:1 |
Symbolic Opportunistic Policy Iteration for
Factored-Action MDPs
Aswin Raghavan^a, Roni Khardon^b, Alan Fern^a, Prasad Tadepalli^a
a School of EECS, Oregon State University, Corvallis, OR, USA
{nadamuna,afern,tadepall}@eecs.orst.edu
b Department of Computer Science, Tufts University, Medford, MA, USA
[email protected]
Abstract
This paper addresses the scalability of symbolic planning under uncertainty with
factored states and actions. Our first contribution is a symbolic implementation
of Modified Policy Iteration (MPI) for factored actions that views policy evaluation as policy-constrained value iteration (VI). Unfortunately, a naïve approach
to enforce policy constraints can lead to large memory requirements, sometimes
making symbolic MPI worse than VI. We address this through our second and
main contribution, symbolic Opportunistic Policy Iteration (OPI), which is a novel
convergent algorithm lying between VI and MPI, that applies policy constraints
if it does not increase the size of the value function representation, and otherwise
performs VI backups. We also give a memory bounded version of this algorithm
allowing a space-time tradeoff. Empirical results show significantly improved
scalability over state-of-the-art symbolic planners.
1 Introduction
We study symbolic dynamic programming (SDP) for Markov Decision Processes (MDPs) with exponentially large factored state and action spaces. Most prior SDP work has focused on exact [1]
and approximate [2, 3] solutions to MDPs with factored states, assuming just a handful of atomic
actions. In contrast to this, many applications are most naturally modeled as having factored actions
described in terms of multiple action variables, which yields an exponential number of joint actions.
This occurs, e.g., when controlling multiple actuators in parallel, such as in robotics, traffic control,
and real-time strategy games. In recent work [4] we have extended SDP to factored actions by giving
a symbolic VI algorithm that explicitly reasons about action variables. The key bottleneck of that
approach is the space and time complexity of computing symbolic Bellman backups, which requires
reasoning about all actions at all states simultaneously. This paper is motivated by addressing this
bottleneck via the introduction of alternative and potentially much cheaper backups.
We start by considering Modified Policy Iteration (MPI) [5], which adds a few policy evaluation
steps between consecutive Bellman backups. MPI is attractive for factored-action spaces because
policy evaluation does not require reasoning about all actions at all states, but rather only about the
current policy's action at each state. Existing work on symbolic MPI [6] assumes a small atomic
action space and does not scale to factored actions. Our first contribution (Section 3) is a new
algorithm, Factored Action MPI (FA-MPI), that conducts exact policy evaluation steps by treating
the policy as a constraint on normal Bellman backups.
While FA-MPI is shown to improve scalability compared to VI in some cases, we observed that in
practice the strict enforcement of the policy constraint can cause the representation of value functions
to become too large and dominate run time. Our second and main contribution (Section 4) is to
overcome this issue using a new backup operator that lies between policy evaluation and a Bellman
Figure 1: Example of a DBN MDP with factored actions.
backup and hence is guaranteed to converge. This new algorithm, Opportunistic Policy Iteration
(OPI), constrains a select subset of the actions in a way that guarantees that there is no growth
in the representation of the value function. We also give a memory-bounded version of the above
algorithm (Section 5). Our empirical results (Section 6) show that these algorithms are significantly
more scalable than FA-MPI and other state-of-the-art algorithms.
2 MDPs with Factored State and Action Spaces
In a factored MDP M , the state space S and action space A are specified by finite sets of binary
variables X = (X_1, ..., X_l) and A = (A_1, ..., A_m) respectively, so that |S| = 2^l and |A| = 2^m.
For emphasis we refer to such MDPs as factored-action MDPs (FA-MDPs). The transition function
T and reward function R are specified compactly using a Dynamic Bayesian Network (DBN). The
DBN model consists of a two?time-step graphical model that shows, for each next state variable
X 0 and the immediate reward, the set of current state and action variables, denoted by parents(X 0 ).
Further, following [1], the conditional probability functions are represented by algebraic decision
diagrams (ADDs) [7], which represent real-valued functions of boolean variables as a Directed
Acyclic Graph (DAG) (i.e., an ADD maps assignments to n boolean variables to real values). We let P^{X'_i} denote the ADD representing the conditional probability table for variable X'_i.
For example, Figure 1 shows a DBN for the SysAdmin domain (Section 6.1). The DBN encodes
that the computers c1, c2 and c3 are arranged in a directed ring so that the running status of each is
influenced by its reboot action and the status of its predecessor. The right part of Figure 1 shows the
ADD representing the dynamics for the state variable running c1. The variable running c1?
represents the truth value of running c1 in the next state. The ADD shows that running c1
becomes true if it is rebooted, and otherwise the next state depends on the status of the neighbors.
When not rebooted, c1 fails w.p. 0.3 if its neighboring computer c3 has also failed, and w.p. 0.05
otherwise. When not rebooted, a failed computer becomes operational w.p. 0.05.
ADDs support binary operations over the functions they represent (F op G = H if and only if ∀x, F(x) op G(x) = H(x)) and marginalization operators (e.g., marginalize x via maximization in G(y) = max_x F(x, y) and through sum in G(y) = Σ_x F(x, y)). Operations between diagrams
will be represented using the usual symbols +, ×, max, etc., and the distinction between scalar operations and operations over functions should be clear from context. Importantly, these operations are
carried out symbolically and scale polynomially in the size of the ADD rather than the potentially
exponentially larger tabular representation of the function. ADD operations assume a total ordering O on the variables and impose that ordering in the DAG structure (interior nodes) of any ADD.
SDP uses the compact MDP model to derive compact value functions by iterating symbolic Bellman
backups that avoid enumerating all states. It has the advantage that the value function is exact while
often being much more compact than explicit tables. Early SDP approaches such as SPUDD [1]
only represented the structure in the state variables and enumerate over actions, so that space and
time is at least linearly related to the number of actions, and hence exponential in m.
In recent work, we extended SDP to factored action spaces by computing Bellman backups using an
algorithm called Factored Action Regression (FAR) [4]. This is done by implementing the following
equations using ADD operations over a representation like Figure 1. Let T^Q(V) denote the backup operator that computes the next iterate of the Q-value function starting with value function V,

    T^Q(V) = R + γ Σ_{X'_1} P^{X'_1} · · · Σ_{X'_l} P^{X'_l} × primed(V)        (1)

then T(V) = max_{A_1} · · · max_{A_m} T^Q(V) gives the next iterate of the value function. Repeating this
process we get the VI algorithm. Here primed(V ) swaps the state variables X in the diagram V
with next state variables X' (c.f. DBN representation for next state variables). Equation 1 should be
read right to left as follows: each probability diagram P^{X'_i} assigns a probability to X'_i from assignments to parents(X'_i) ⊆ (X, A), introducing the variables parents(X'_i) into the value function. The Σ_{X'_i} marginalization eliminates the variable X'_i. We arrive at the Q-function that maps variable assignments over (X, A) to real values. Written in this way, where the domain dynamics are explicitly
expressed in terms of action variables and where max_A = max_{A_1,...,A_m} is a symbolic marginalization operation over action variables, we get the Factored Action Regression (FAR) algorithm [4].
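The right-to-left reading of Equation 1 can be mimicked with plain tables standing in for ADDs. The sketch below is illustrative only (one state variable, one action variable, hypothetical names), but it follows the same elimination order: sum out the next-state variable against its CPT, discount, add the reward, then maximize out the action.

```python
from itertools import product

# Tabular stand-in for one FAR backup of Eq. (1): functions are dicts keyed
# by variable assignments. A real implementation operates on ADDs, but the
# elimination order (sum over X', then max over A) is the same.
def far_backup(R, P, V, gamma, states=(0, 1), actions=(0, 1)):
    """One state var x and one action var a. P[(x, a)][x2] = Pr(x'=x2 | x, a)."""
    Q = {}
    for x, a in product(states, actions):
        # Q(x, a) = R(x, a) + gamma * sum_{x'} P^{x'}(x' | x, a) * primed(V)(x')
        Q[(x, a)] = R[(x, a)] + gamma * sum(P[(x, a)][x2] * V[x2] for x2 in states)
    # T(V) = max_A T^Q(V): maximize out the action variable
    return {x: max(Q[(x, a)] for a in actions) for x in states}

R = {(0, 0): 0.0, (0, 1): -0.75, (1, 0): 1.0, (1, 1): 0.25}
P = {(0, 0): {0: 0.95, 1: 0.05}, (0, 1): {0: 0.0, 1: 1.0},
     (1, 0): {0: 0.5, 1: 0.5},   (1, 1): {0: 0.0, 1: 1.0}}
V0 = {0: 0.0, 1: 0.0}
print(far_backup(R, P, V0, gamma=0.9))  # {0: 0.0, 1: 1.0}
```

The symbolic version replaces each dict with an ADD and each loop with an apply/marginalize call, so the cost depends on diagram sizes rather than on |S| × |A|.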
In the following, we use T() to denote a Bellman-like backup, where the superscript in T^Q() denotes that actions are not maximized out so the output is a function of state and actions, and a subscript as in T_π() defined below denotes that the update is restricted to the actions in π. Similarly, T^Q_π() restricts to a (possibly partial) policy π and does not maximize over the unspecified action choice.
In this work we will build on Modified Policy Iteration (MPI), which generalizes value iteration and
policy iteration, by interleaving k policy evaluation steps between successive Bellman backups [5].
Here a policy evaluation step corresponds to iterating exact policy backups, denoted by T_π, where the action is prescribed by the policy π in each state. MPI has the potential to speed up convergence
over VI because, at least for flat action spaces, policy evaluation is considerably cheaper than full
Bellman backups. In addition, when k > 0, one might hope for larger jumps in policy improvement
because the greedy action in T is based on a more accurate estimate of the value of the policy.
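For intuition, a standard tabular MPI sketch (not the symbolic algorithm of this paper; all names illustrative) makes the interleaving concrete: k = 0 reduces to VI, and large k approaches PI.

```python
import numpy as np

# Tabular Modified Policy Iteration: k policy-evaluation sweeps between
# Bellman backups. Illustrative only; the paper's algorithms perform these
# backups symbolically over ADDs rather than over explicit tables.
def mpi(P, R, gamma=0.9, k=5, eps=1e-6):
    """P: |A| x |S| x |S| transition tensor, R: |S| x |A| rewards."""
    nA, nS, _ = P.shape
    V = np.zeros(nS)
    while True:
        Q = R + gamma * np.einsum('ast,t->sa', P, V)  # Bellman backup T^Q(V)
        pi, V_new = Q.argmax(axis=1), Q.max(axis=1)   # greedy improvement
        for _ in range(k):                            # k policy backups T_pi
            V_new = (R[np.arange(nS), pi]
                     + gamma * P[pi, np.arange(nS)] @ V_new)
        if np.max(np.abs(V_new - V)) < eps:
            return V_new, pi
        V = V_new

# tiny demo: action 0 stays put, action 1 jumps to state 1
P = np.zeros((2, 2, 2)); P[0] = np.eye(2); P[1] = [[0, 1], [0, 1]]
R = np.array([[0.0, -0.5], [1.0, 0.5]])
V, pi = mpi(P, R)
print(pi)  # [1 0]
```

Each policy backup touches only the current policy's action per state, which is exactly why MPI is attractive when the joint action space is exponential.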
Interestingly, the first approach to symbolic planning in MDPs was a version of MPI for factored
states called Structured Policy Iteration (SPI) [6], which was later adapted to relational problems
[8]. SPI represents the policy as a decision tree with state-variables labeling interior nodes and a
concrete action as a leaf node. The policy backup uses the graphical form of the policy. In each such
backup, for each leaf node (policy action) a in the policy tree, its Q-function Qa is computed and
attached to the leaf. Although SPI leverages the factored state representation, it represents the policy
in terms of concrete joint actions, which fails to capture the structure among the action variables in
FA-MDPs. In addition, in factored actions spaces this requires an explicit calculation of Q functions
for all joint actions. Finally, the space required for policy backup can be prohibitive because each
Q-function Qa is joined to each leaf of the policy. SPI goes to great lengths in order to enforce a
policy backup which, intuitively, ought to be much easier to compute than a Bellman backup. In
fact, we are not aware of any implementations of this algorithm that scales well for FA-MDPs or
even for factored state spaces. The next section provides an alternative algorithm.
3 Factored Action MPI (FA-MPI)
In this section, we introduce Factored Action MPI (FA-MPI), which uses a novel form of policy
backup. Pseudocode is given in Figure 2. Each iteration of the outer while loop starts with one full
Bellman backup using Equation 1, i.e., policy improvement. The inner loop performs k steps of
policy backups using a new algorithm described below that avoids enumerating all actions.
We represent the policy using a Binary Decision Diagram (BDD) with state and action variables
where a leaf value of 1 denotes any combination of action variables that is the policy action, and a leaf value of −∞ indicates otherwise. Using this representation, we perform policy backups using
T^Q_π(V) given in Equation 2 below, followed by a max over the actions in the resulting diagram. In this equation, the diagram resulting from the product π × primed(V) sets the value of all off-policy state-actions to −∞ before computing any value for them,¹ and this ensures correctness of the update as indicated by the next proposition.
    T^Q_π(V) = [ R + γ Σ_{X'_1} P^{X'_1} · · · Σ_{X'_l} P^{X'_l} × (π × primed(V)) ]        (2)

¹ Notice that T^Q_π is equivalent to π × T^Q, but the former is easier to compute.
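The effect of multiplying by the policy indicator before maximizing can be imitated on a toy tabular Q-function. In the snippet below (an illustration, not the paper's code) an additive {0, −∞} mask plays the role of the multiplicative {1, −∞} policy encoding, which is numerically more convenient but has the same effect: off-policy entries become −∞ and can never win the max.

```python
import numpy as np

# Masking a Q-table with a {0, -inf} policy indicator and then maximizing
# over actions recovers the policy backup, since off-policy entries are -inf.
Q = np.array([[1.0, 4.0],    # Q(s0, a0), Q(s0, a1)
              [2.0, 0.5]])   # Q(s1, a0), Q(s1, a1)
pi_mask = np.array([[0.0, -np.inf],    # policy picks a0 in s0
                    [-np.inf, 0.0]])   # policy picks a1 in s1
print((Q + pi_mask).max(axis=1))       # [1.  0.5] = Q under the policy
```

This is the tabular shadow of Proposition 1: the maximum over the constrained Q-function equals the policy backup.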
Algorithm 3.1: FA-MPI/OPI(k)

    V^0 <- 0, i <- 0
    (V_0^{i+1}, π^{i+1}) <- max_A T^Q(V^i)
    while ||V_0^{i+1} − V^i|| > ε do
        for j <- 1 to k do
            for algorithm FA-MPI:  V_j^{i+1} <- max_A T^Q_{π^{i+1}}(V_{j−1}^{i+1})
            for algorithm OPI:     V_j^{i+1} <- max_A T̂^Q_{π^{i+1}}(V_{j−1}^{i+1})
        V^{i+1} <- V_k^{i+1}
        i <- i + 1
        (V_0^{i+1}, π^{i+1}) <- max_A T^Q(V^i)
    return π^{i+1}

Algorithm 3.2: P(D, π)

    d <- variable at the root node of D
    c <- variable at the root node of π
    if d occurs after c in the ordering:
        return P(D, max(π_T, π_F))
    else if d = c:
        return ADD(d, P(D_T, π_T), P(D_F, π_F))
    else if d occurs before c in the ordering:
        return ADD(d, P(D_T, π), P(D_F, π))
    else if π = −∞: return −∞
    else: return D

Figure 2: Factored Action MPI and OPI.
Figure 3: Pruning procedure for an ADD. Subscripts T and F denote the true and false child respectively.
Proposition 1. FA-MPI computes exact policy backups, i.e., max_A T^Q_π = T_π.
The proof uses the fact that (s, a) pairs that do not agree with the policy get a value −∞ via the
constraints and therefore do not affect the maximum. While FA-MPI can lead to improvements over
VI (i.e. FAR), like SPI, FA-MPI can lead to large space requirements in practice. In this case, the
bottleneck is the ADD product π × primed(V), which can be exponentially larger than primed(V)
in the worst case. The next section shows how to approximate the backup in Equation 2 while
ensuring no growth in the size of the ADD.
4 Opportunistic Policy Iteration (OPI)
Here we describe Opportunistic Policy Iteration (OPI), which addresses the shortcomings of FA-MPI. As seen in Figure 2, OPI is identical to FA-MPI except that it uses an alternative, more conservative policy backup. The sequence of policies generated by FA-MPI (and MPI) may not all have
compactly representable ADDs. Fortunately, finding the optimal value function may not require
representing the values of the intermediate policies exactly. The key idea in OPI is to enforce the
policy constraint opportunistically, i.e. only when they do not increase the size of the value function
representation.
In an exponential action space, we can sometimes expect a Bellman backup to be a coarser partitioning of state variables than the value function of a given policy (e.g. two states that have the same
value under the optimal action have different values under the policy action). In this case, enforcing the policy constraint via T^Q_π(V) is actually harmful in terms of the size of the representation.
OPI is motivated by retaining the coarseness of Bellman backups in some states, and otherwise enforcing the policy constraint. The OPI backup is sensitive to the size of the value ADD so that it is
guaranteed to be smaller than the results of both Bellman backup and policy backup.
First we describe the symbolic implementation of OPI . The trade-off between policy evaluation
and policy improvement is made via a pruning procedure (pseudo-code in Figure 3). This procedure
assigns a value of −∞ to only those paths in a value function ADD that violate the policy constraint π. The interesting case is when the root variable of π is ordered below the root of D (and thus does not appear in D), so that the only way to violate the constraint is to violate both the true and false branches. We therefore recurse on D with the diagram max{π_T, π_F}.
Example 1. The pruning procedure is illustrated in Figure 4. Here the input function D does not
contain the root variable X of the constraint, and the max under X is also shown. The result of
pruning P(D, π) is no more complex than D, whereas the product D × π is more complex.
Clearly, the pruning procedure is not sound for ADDs because there may be paths that violate the
policy, but are not explicitly represented in the input function D. In order to understand the result
of P, let p be a path from a root to a leaf in an ADD. The path p induces a partial assignment to the
Figure 4: An example for pruning. D and π denote the given function and constraint respectively.
The result of pruning is no larger than D, as opposed to multiplication. T (true) and F (false)
branches are denoted by the left and the right child respectively.
variables in the diagram. Let E(p) be the set of all extensions of this partial assignment to complete
assignments to all variables. As established in the following proposition, a path is pruned if none of
its extensions satisfies the constraint.
Proposition 2. Let G = P(D, π) where leaves in D do not have the value −∞. Then for all paths p in G we have:
1. p leads to −∞ in G iff ∀y ∈ E(p), π(y) = −∞.
2. p does not lead to −∞ in G iff ∀y ∈ E(p), G(y) = D(y).
3. The size of the ADD G is smaller or equal to the size of D.
The proof (omitted due to space constraints) uses structural induction on D and π. The novel backup introduced in OPI interleaves the application of pruning with the summation steps so as to prune the diagram as early as possible. Let P_π(D) be shorthand for P(D, π). The backup used by OPI, which is shown in Figure 2, is

    T̂^Q_π(V) = P_π[ P_π(R) + γ P_π( Σ_{X'_1} P^{X'_1} · · · P_π( Σ_{X'_l} P^{X'_l} × primed(V) ) ) ]        (3)
Using the properties of P we can show that T̂^Q_π(V) overestimates the true backup of a policy, but
is still bounded by the true value function.
Theorem 1. The policy backup used by OPI is bounded between the full Bellman backup and the true policy backup, i.e., T_π ≤ max_A T̂^Q_π ≤ T.
Since none of the value functions generated by OPI overestimate the optimal value function, it follows that both OPI and FA-MPI converge to the optimal policy under the same conditions as MPI
[5]. However, the sequence of value functions/policies generated by OPI are in general different
from and potentially more compact than those generated by FA-MPI. The relative compactness of
these policies is empirically investigated in Section 6. The theorem also implies that OPI converges
at least as fast as FA-MPI to the optimal policy, and may converge faster.
In terms of a flat MDP, OPI can be interpreted as sometimes picking a greedy off-policy action while
evaluating a fixed policy, when the value function of the greedy policy is at least as good and more
compact than that of the given policy. Thus, OPI may be viewed as asynchronous policy iteration
([9]). However, unlike traditional asynchronous PI, the policy improvement in OPI is motivated by
the size of the representation, rather than any measure of the magnitude of improvement.
Example 2. Consider the example in Figure 5. Suppose that π is a policy constraint that says that the action variable A1 must be true when the state variable X2 is false. The backup T^Q(R) does not involve X2 and therefore pruning does not change the diagram and P_π(T^Q(R)) = T^Q(R). The max chooses A1 = true in all states, regardless of the value of X2, a greedy improvement. Note that the improved policy (always set A1) is more compact than π, and so is its value. In addition, P_π(T^Q(R)) is coarser than π × T^Q(R).
5 Memory-Bounded OPI
Memory is usually a limiting factor for symbolic planning. In [4] we proposed a symbolic memory
bounded (MB) VI algorithm for FA-MDPs, which we refer to below as Memory Bounded Factored
Figure 5: An illustration where OPI computes an incorrect but more compact value function that is a partial policy improvement. (a) A simple policy for an MDP with two state variables, X1 and X2, and one action variable A1. (b) Optimal policy backup in FA-MPI. (c) OPI backup; note the smaller size of the value function. T (true) and F (false) branches are denoted by the left and the right child respectively.
Action Regression (MBFAR). MBFAR generalizes SPUDD and FAR by flexibly trading off computation time for memory. The key idea is that a backup can be computed over a partially instantiated
action, by fixing the value of an action variable. MBFAR computes what [10] called "Z-value functions" that are optimal value functions for partially specified actions. But in contrast to their work,
where the set of partial actions are hand-coded by the designer, MBFAR is domain-independent and
depends on the complexity of the value function. In terms of time to convergence, computing these
subsets on the fly may lead to some overhead, but in some cases may lead to a speedup. Memory
Bounded FA-MPI (MB-MPI) is a simple extension that uses MBFAR in place of FAR for the backups in Figure 2. MB-MPI is parametrized by k, the number of policy backups, and M , the maximum
size (in nodes) of a Z-value function. MB-MPI generalizes MPI in that MB-MPI(k,0) is the same as
SPI(k) [6] and MB-MPI(k,∞) is FA-MPI(k). Also, MB-MPI(0,0) is SPUDD [1] and MB-MPI(0,∞)
is FAR [4]. We can also combine OPI with memory bounded backup. We will call this algorithm
MB-OPI. Since both MB-MPI and OPI address space issues in FA-MPI the question is whether one
dominates the other and whether their combination is useful. This is addressed in the experiments.
6 Experiments
In this section, we experimentally evaluate the algorithms and the contributions of different components in the algorithms.
6.1 Domain descriptions
The following domains were described using the Relational Dynamic Influence Diagram Language
(RDDL) [11]. We ground the relational description to arrive at the MDP similar to Figure 1. In our
experiments the variables in the ADDs are ordered so that parents(X'_i) occur above X'_i and the X'_i are ordered by |parents(X'_i)|. We heuristically chose to do the expectation over state variables
in the top-down way, and maximization of action variables in the bottom-up way with respect to the
variable ordering.
Inventory Control(IC): This domain consists of n independent shops each being full or empty that
can be filled by a deterministic action. The total number of shops that can be filled in one time step is
restricted. The rate of arrival of a customer is distributed independently and identically for all shops
as Bernoulli(p) with p = 0.05. A customer at an empty shop continues to wait with a reward of -1
until the shop is filled and gives a reward of -0.35.
An instance of IC with n shops and m trucks has a joint state and action space of size 2^{2n} and Σ_{i=0}^{m} C(n, i) respectively.
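The action-space count (choose at most m of the n deterministic fill actions to run in parallel) is easy to check numerically; the snippet below is an illustrative aside, not part of the paper's experiments.

```python
from math import comb

# Joint action count for IC: sum_{i=0}^{m} C(n, i) subsets of shops filled
# in one step, capped at m parallel fill actions.
def ic_action_space(n, m):
    return sum(comb(n, i) for i in range(m + 1))

print(ic_action_space(8, 2))  # 1 + 8 + 28 = 37 joint actions
print(2 ** (2 * 8))           # 65536 joint states for n = 8 shops
```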
SysAdmin: The "SysAdmin" domain was part of the IPC 2011 benchmark and was introduced in earlier work [12]. It consists of a network of n computers connected in a given topology. Each computer is either running (reward of +1) or failed (reward of 0) so that |S| = 2^n, and each computer has an associated deterministic action of rebooting (with a cost of -0.75) so that |A| = 2^n. We
restrict the number of computers that can be rebooted in one time step. Unlike the previous domain,
the exogenous events are not independent of one another. A running computer that is not being
[Figure 6 plots omitted: four panels (Inventory Control, 8 shops; SysAdmin star network, 11 computers; bidirectional ring, 10 computers; unidirectional ring, 10 computers) plotting solution time in minutes against the number of parallel actions for VI, OPI(2), and OPI(5).]
Figure 6: Impact of policy evaluation: Parallel actions vs. Time. In Star and Unidirectional networks VI was stopped at a time limit of six hours and the Bellman error is annotated.
[Figure 7 plots omitted: three panels (Inventory Control (Uniform), 8 shops; bidirectional ring, 2 parallel actions; Elevator Control, 4 floors, 2 elevators) comparing FA-MPI(5) and OPI(5); several FA-MPI runs are marked EML.]
Figure 7: Impact of Pruning. EML denotes Exceeded Memory Limit and the Bellman error is denoted in parenthesis.
rebooted is running in the next state with probability p proportional to the number of its running neighbors, where p = 0.45 + 0.5 · (1 + n_r)/(1 + n_c), n_r is the number of neighboring computers that have not failed and n_c is the number of neighbors. We test this domain on three topologies of increasing
difficulty, viz. a star topology, a unidirectional ring and a bidirectional ring.
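The transition probability above can be written down directly (an illustrative snippet, not the paper's code):

```python
# Probability that a running, non-rebooted computer stays running in
# SysAdmin: p = 0.45 + 0.5 * (1 + n_r) / (1 + n_c), where n_r is the number
# of running neighbors and n_c the total number of neighbors.
def p_running(n_r, n_c):
    return 0.45 + 0.5 * (1 + n_r) / (1 + n_c)

print(p_running(2, 2))  # both neighbors up in a bidirectional ring
print(p_running(0, 2))  # both neighbors failed
```

With all neighbors running the probability is 0.95, and it degrades toward 0.45 + 0.5/(1 + n_c) as neighbors fail, which is what couples the exogenous events across the network.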
Elevator control: We consider the problem of controlling m elevators in a building with n floors.
A state is described as follows: for each floor, whether a person is waiting to go up or down; for
each elevator, whether a person inside the elevator is going up or down, whether the elevator is at
each floor, and its current direction (up or down). A person arrives at a floor f , independently of
other floors, with a probability Bernoulli(pf ), where pf is drawn from U nif orm(0.1, 0.3) for
each floor. Each person gets into an elevator if it is at the same floor and has the same direction (up
or down), and exits at the top or bottom floor based on his direction. Each person gets a reward of
-1 when waiting at a floor and -1.5 if he is in an elevator that is moving in a direction opposite to
his destination. There is no reward if their directions are the same. Each elevator has three actions:
move up or down by one floor, or flip its direction.
6.2 Experimental validation
In order to evaluate scaling with respect to the action space we fix the size of the state-space and
measure time to convergence (Bellman error less than 0.1 with discount factor of 0.9). Experiments
were run on a single core of an Intel Core 2 Quad 2.83GHz with 4GB limit. The charts denote
OPI with k steps of evaluation as OPI(k), and MB-OPI with memory bound M as MB-OPI(k, M)
(similarly FA-MPI(k) and MB-MPI(k, M )). In addition, we compare to symbolic value iteration:
[Figure 8 plot omitted: Bellman error vs. CPU time (mins) for the Elevator Control domain with 4 floors and 2 elevators.]
Figure 8: Impact of policy evaluation in Elevators.
[Figure 9 plots omitted: panels for Inventory Control (8 shops), bidirectional ring (2 parallel actions), and Elevator Control comparing OPI(5), FA-MPI(5), MB-OPI(5,20k), and MB-MPI(5,20k); some runs are marked EML.]
Figure 9: Impact of memory bounding. EML denotes Exceeded Memory Limit.
Table 1: Ratio of size of ADD function to a table.

Compression in V (columns: # parallel actions)
Domain        2      3      4      5      6      7
IC(8)        0.06   0.03   0.03   0.02   0.02   0.02
Star(11)     0.67   0.58   0.50   0.40   0.37   0.35
Biring(10)   0.96   0.96   0.95   0.94   0.88   0.80
Uniring(10)  0.99   0.99   0.99   0.99   0.99   0.99

Compression in π (columns: # parallel actions)
Domain        2       3       4       5       6       7
IC(8)        0.28    0.36    0.35    0.20    0.09    0.03
Star(11)     1.8e-4  2.3e-4  2.1e-4  1.9e-4  1.4e-4  9.6e-5
Biring(10)   1.1e-3  1.3e-3  1.2e-3  1.1e-3  9.8e-4  7.4e-4
Uniring(10)  9.3e-4  1e-3    9.4e-4  8.2e-4  5.2e-4  2.9e-4
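The compression in Table 1 comes from sharing identical sub-diagrams and collapsing redundant decision nodes. The following sketch is our own illustration of that idea (not SPUDD/APRICODD code): it counts the distinct sub-diagrams of a flat value table under the two standard reductions, omitting variable labels for brevity, which slightly overstates sharing.

```python
def dd_nodes(values):
    """Distinct sub-diagrams of a flat table of 2^n values.

    Two reductions: (1) identical sub-tables are represented once, and
    (2) a decision node whose two branches are equal collapses to that
    branch.  Returns the set of distinct nodes; its size over the table
    size is the compression ratio analogous to Table 1.
    """
    nodes = set()

    def build(vals):
        if len(vals) == 1:
            node = ('leaf', vals[0])
        else:
            half = len(vals) // 2
            low, high = build(vals[:half]), build(vals[half:])
            node = low if low == high else ('node', low, high)
        nodes.add(node)
        return node

    build(values)
    return nodes
```

A constant table compresses to a single leaf; a table with no repeated structure gets essentially no compression, mirroring the near-1 ratios for the Uniring value functions.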
the well-established baseline for factored states, SPUDD [1], and, for factored states and actions, FA-MPI(0). Since both are variants of VI we will denote the better of the two as VI in the charts.
Impact of policy evaluation: We compare symbolic VI and OPI in Figure 6. For Inventory Control,
as the number of parallel actions increases, SPUDD takes increasingly more time but FA-MPI(0)
takes increasingly less time, giving VI a bell-shaped profile. An increase in the steps of evaluation
in OPI(2) and OPI(5) leads to a significant speedup. For the SysAdmin domain, we tested three
different topologies. For all the topologies, as the size of the action space increases, VI takes an
increasing amount of time. OPI scales significantly better and does better with more steps of policy
evaluation, suggesting that more lookahead is useful in this domain. In the Elevator Control domain
(Figure 8) OPI(2) is significantly better than VI and OPI(5) is marginally better than OPI(2). Overall,
we see that more evaluation helps, and that OPI is consistently better than VI.
Impact of pruning: We compare OPI vs. FA-MPI to assess the impact of pruning. Figure 7 shows that with increasing state and action spaces FA-MPI exceeds the memory limit (EML) whereas OPI does not, and that when both converge OPI converges much faster. In Inventory Control, FA-MPI exceeds the memory limit on five out of the seven instances, whereas OPI converges in all cases. In SysAdmin, the plot shows the % more time FA-MPI takes than OPI. On the largest problem, FA-MPI exceeds the memory limit, and is at least 150% slower than OPI. In Elevator Control, FA-MPI exceeds the memory limit while OPI does not, and FA-MPI is at least 250% slower.
Impact of memory-bounding: Even though memory bounding can mitigate the memory problem
in FA-MPI, it can cause a large overhead in time, and can still exceed the limit due to intermediate
steps in the exact policy backups. Figure 9 shows the effect of memory bounding. MB-OPI scales better than either MB-MPI or OPI. In the IC domain, MB-MPI is much worse than MB-OPI in
time, and MB-MPI exceeds the memory limit in two instances. In the SysAdmin domain, the figure
shows that combined pruning and memory-bounding is better than either one separately. A similar
time profile is seen in the elevators domain (results omitted).
Representation compactness: The main bottleneck toward scalability beyond our current results
is the growth of the value and policy diagrams with problem complexity, which is a function of
the suitability of our ADD representation to the problem at hand. To illustrate this, Table 1 shows
the compression provided by representing the optimal value functions and policies as ADDs versus
tables. We observe orders of magnitude compression for representing policies, which shows that the
ADDs are able to capture the rich structure in policies. The compression ratio for value functions
is less impressive and surprisingly close to 1 for the Uniring domain. This shows that for these
domains ADDs are less effective at capturing the structure of the value function. Possible future
directions include better alternative symbolic representations as well as approximations.
7 Discussion
This paper presented symbolic variants of MPI that scale to large action spaces and generalize and improve over state-of-the-art algorithms. The insight that the policy can be treated as a loose constraint within value iteration steps gives a new interpretation of MPI. Our algorithm OPI computes some policy improvements during policy evaluation and is related to Asynchronous Policy Iteration [9]. Further scalability can be achieved by incorporating approximate value backups (e.g., similar to APRICODD [2]) as well as potentially more compact representations (e.g., Affine ADDs [3]). Another avenue for scalability is to use initial state information to focus computation. Previous work [13] has studied theoretical properties of such approximations of MPI, but no efficient symbolic version exists. Developing such algorithms is an interesting direction for future work.
Acknowledgements
This work is supported by NSF under grant numbers IIS-0964705 and IIS-0964457.
References
[1] Jesse Hoey, Robert St-Aubin, Alan Hu, and Craig Boutilier. SPUDD: Stochastic Planning Using Decision
Diagrams. In Proceedings of the Fifteenth conference on Uncertainty in Artificial Intelligence(UAI), 1999.
[2] Robert St-Aubin, Jesse Hoey, and Craig Boutilier. APRICODD: Approximate Policy Construction Using
Decision Diagrams. Advances in Neural Information Processing Systems(NIPS), 2001.
[3] Scott Sanner, William Uther, and Karina Valdivia Delgado. Approximate Dynamic Programming with
Affine ADDs. In Proceedings of the 9th International Conference on Autonomous Agents and Multiagent
Systems, 2010.
[4] Aswin Raghavan, Saket Joshi, Alan Fern, Prasad Tadepalli, and Roni Khardon. Planning in Factored
Action Spaces with Symbolic Dynamic Programming. In Twenty-Sixth AAAI Conference on Artificial
Intelligence(AAAI), 2012.
[5] Martin L Puterman and Moon Chirl Shin. Modified Policy Iteration Algorithms for Discounted Markov
Decision Problems. Management Science, 1978.
[6] Craig Boutilier, Richard Dearden, and Moises Goldszmidt. Exploiting Structure in Policy Construction.
In International Joint Conference on Artificial Intelligence(IJCAI), 1995.
[7] R Iris Bahar, Erica A Frohm, Charles M Gaona, Gary D Hachtel, Enrico Macii, Abelardo Pardo, and
Fabio Somenzi. Algebraic Decision Diagrams and their Applications. In Computer-Aided Design, 1993.
[8] Chenggang Wang and Roni Khardon. Policy Iteration for Relational MDPs. arXiv preprint arXiv:1206.5287, 2012.
[9] Dimitri P. Bertsekas and John N. Tsitsiklis. Neuro-Dynamic Programming. 1996.
[10] Jason Pazis and Ronald Parr. Generalized Value Functions for Large Action Sets. In Proc. of ICML, 2011.
[11] Scott Sanner. Relational Dynamic Influence Diagram Language (RDDL): Language Description. Unpublished ms. Australian National University, 2010.
[12] Carlos Guestrin, Daphne Koller, and Ronald Parr. Multiagent Planning with Factored MDPs. Advances
in Neural Information Processing Systems(NIPS), 2001.
[13] Bruno Scherrer, Victor Gabillon, Mohammad Ghavamzadeh, and Matthieu Geist. Approximate Modified
Policy Iteration. In ICML, 2012.
Compression for Dec-POMDPs
Charles L. Isbell
College of Computing
Georgia Institute of Technology
Atlanta, GA 30332
[email protected]
Liam MacDermed
College of Computing
Georgia Institute of Technology
Atlanta, GA 30332
[email protected]
Abstract
We present four major results towards solving decentralized partially observable
Markov decision problems (DecPOMDPs), culminating in an algorithm that outperforms all existing algorithms on all but one of the standard infinite-horizon benchmark problems. (1) We give an integer program that solves collaborative Bayesian
games (CBGs). The program is notable because its linear relaxation is very often
integral. (2) We show that a DecPOMDP with bounded belief can be converted
to a POMDP (albeit with actions exponential in the number of beliefs). These actions correspond to strategies of a CBG. (3) We present a method to transform any
DecPOMDP into a DecPOMDP with bounded beliefs (the number of beliefs is a
free parameter) using optimal (not lossless) belief compression. (4) We show that
the combination of these results opens the door for new classes of DecPOMDP algorithms based on previous POMDP algorithms. We choose one such algorithm,
point-based value iteration, and modify it to produce the first tractable value iteration method for DecPOMDPs that outperforms existing algorithms.
1 Introduction
Decentralized partially observable Markov decision processes (DecPOMDPs) are a popular model
for cooperative multi-agent decision problems; however, they are NEXP-complete to solve [15].
Unlike single agent POMDPs, DecPOMDPs suffer from a doubly-exponential curse of history [16].
Not only do agents have to reason about the observations they see, but also about the possible
observations of other agents. This causes agents to view their world as non-Markovian because
even if an agent returns to the same underlying state of the world, the dynamics of the world may
appear to change due to other agents holding different beliefs and taking different actions. Also,
for POMDPs, a sufficient belief space is the set of probability distributions over possible states. In
the case of DecPOMDPs an agent must reason about the beliefs of other agents (who are recursively
reasoning about beliefs as well), leading to nested beliefs that can make it impossible to losslessly
reduce an agent's knowledge to less than its full observation history.
This lack of a compact belief-space has prevented value-based dynamic programming methods from
being used to solve DecPOMDPs. While value methods have been quite successful at solving
POMDPs, all current DecPOMDP approaches are policy-based methods, where policies are sequentially improved and evaluated at each iteration. Even using policy methods, the curse of history
is still a big problem, and current methods deal with it in a number of different ways. [5] simply
removed beliefs with low probability. Some use heuristics to prune (or never explore) particular
belief regions [19, 17, 12, 11]. Other approaches merge beliefs together (i.e., belief compression)
[5, 6]. This can sometimes be done losslessly [13], but such methods have limited applicability
and still usually result in an exponential explosion of beliefs. There have also been approaches that
attempt to operate directly on the infinitely nested belief structure [4], but these are approximations
of unknown accuracy (if we stop at the n-th nested belief, the (n+1)-th could dramatically change the
outcome). All of these approaches have gotten reasonable empirical results in a few limited domains
but ultimately scale and generalize poorly.
Our solution to the curse of history is simple: to assume that it doesn't exist, or more precisely, that
the number of possible beliefs at any point in time is bounded. While simple, the consequences of
this assumption turn out to be quite powerful: a bounded-belief DecPOMDP can be converted into
an equivalent POMDP. This conversion is accomplished by viewing the problem as a sequence of
cooperative Bayesian Games (CBGs). While this view is well established, our use of it is novel. We
give an efficient method for solving these CBGs and show that any DecPOMDP can be accurately
approximated by a DecPOMDP with bounded beliefs. These results enable us to utilize existing
POMDP algorithms, which we explore by modifying the PERSEUS algorithm [18]. Our resulting
algorithm is the first true value-iteration algorithm for DecPOMDPs (where no policy information
need be retained from iteration to iteration) and outperforms existing algorithms.
2 DecPOMDPs as a sequence of cooperative Bayesian Games
Many current approaches for solving DecPOMDPs view the decision problem faced by agents as
a sequence of CBGs [5]. This view arises from first noting that a complete policy must prescribe
an action for every belief-state of an agent. This can take the form of a strategy (a mapping from
belief to action) for each time-step; therefore at each time-step agents must choose strategies such
that the expectation over their joint-actions and beliefs maximizes the sum of their immediate reward
combined with the utility of their continuation policy. This decision problem is equivalent to a CBG.
Formally, we define a DecPOMDP as the tuple $\langle N, A, S, O, P, R, s^{(0)} \rangle$ where: $N$ is the set of $n$ players. $A = \prod_{i=1}^{n} A_i$ is the set of joint-actions. $S$ is the set of states. $O = \prod_{i=1}^{n} O_i$ is the set of joint-observations. $P : S \times A \to \Delta(S \times O)$ is the probability transition function, with $P(s', \vec{o} \mid s, \vec{a})$ being the probability of ending up in state $s'$ with observations $\vec{o}$ after taking joint-action $\vec{a}$ in state $s$. $R : S \times A \to \mathbb{R}$ is the shared reward function. And $s^{(0)} \in \Delta(S)$ is the initial state distribution.
The CBG consists of a common-knowledge distribution over all possible joint-beliefs along with
a reward for each joint-action/belief. Naively, each observation history corresponds to a belief. If
beliefs are more compact (i.e., through belief compression), then multiple histories can correspond
to the same belief. The joint-belief distribution is commonly known because it depends only on
the initial state and players' policies, which are both commonly known (due to common planning or
rationality). These beliefs are the types of the Bayesian game. We can compute the current common-knowledge distribution (without belief compression) recursively:
The probability that joint-type $\theta^{t+1} = \langle \vec{o}, \theta^t \rangle$ occurs at time $t+1$ is given by $\tau^{t+1}(\langle \vec{o}, \theta^t \rangle) = \sum_{s^{t+1}} \Pr[s^{t+1}, \langle \vec{o}, \theta^t \rangle]$, where:
$$\Pr[s^{t+1}, \langle \vec{o}, \theta^t \rangle] = \sum_{s^t} \sum_{\theta^t} P(s^{t+1}, \vec{o} \mid s^t, \vec{a}_{\theta^t}) \Pr[s^t, \theta^t] \qquad (1)$$
The actions of the Bayesian game are the same as in the DecPOMDP. The rewards to the Bayesian game are ideally the immediate reward $R = \sum_{s^t} \sum_{\theta^t} R(s^t, \vec{a}_{\theta^t}) \Pr[s^t, \theta^t]$ along with the utility of
the best continuation policy. However, knowing the best utility is tantamount to solving the problem.
Instead, an estimation can be used. Current approaches estimate the value of the DecPOMDP as if it
were an MDP [19], a POMDP [17], or with delayed communication [12]. In each case, the solution
to the Bayesian game is used as a heuristic to guide policy search.
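As a concrete illustration of the recursive update in equation (1), the sketch below advances a joint common-knowledge distribution one step. The toy dynamics `P`, the `strategy` table, and all names are our own placeholders, not the authors' implementation.

```python
from collections import defaultdict

def advance_types(prior, strategy, P, states, observations):
    """One step of the common-knowledge update (equation (1)).

    prior: dict mapping (state, joint_type) -> probability.
    strategy: dict mapping joint_type -> joint_action taken by those types.
    P(s_next, obs, s, a): transition/observation probability.
    Returns a dict mapping (s_next, (obs, joint_type)) -> probability;
    marginalizing out s_next gives the new type distribution tau^{t+1}.
    """
    post = defaultdict(float)
    for (s, theta), pr in prior.items():
        a = strategy[theta]
        for s2 in states:
            for o in observations:
                p = P(s2, o, s, a)
                if p > 0.0:
                    post[(s2, (o, theta))] += p * pr
    return dict(post)
```

Note that each new joint-type is the pair (observation, previous type), exactly the naive history-as-belief expansion discussed above.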
3 An Integer Program for Solving Collaborative Bayesian Games
Many DecPOMDP algorithms use the idea that a DecPOMDP can be viewed as a sequence of
CBGs to divide policy optimization into smaller sub-problems; however, solving CBGs themselves
is NP-complete [15]. Previous approaches have solved Bayesian games by enumerating all strategies, with iterated best response (which is only locally optimal) or branch and bound search [10].
Here we present a novel integer linear program that solves for an optimal pure strategy Bayes-Nash
equilibrium (which always exists for games with common payoffs). While integer programming is
still NP-complete, our formulation has a huge advantage: the linear relaxation is itself a correlated
communication equilibrium [7], empirically very often integral (above 98% for our experiments in
section 7). This allows us to optimally solve our Bayesian games very efficiently.
Our integer linear program for Bayesian game $\langle N, A, \Theta, \tau, R \rangle$ optimizes over Boolean variables $x_{\vec{a},\theta}$, one for each joint-action for each joint-type. Each variable represents the probability of joint-action $\vec{a}$ being taken if the agents' types are $\theta$. Constraints must be imposed on these variables to ensure that they form a proper probability distribution and that, from each agent's perspective, its action is conditionally independent of other agents' types.
These restrictions can be expressed by the following linear constraints (equation (2)). For each agent $i$, joint-type $\theta$, and partial joint-actions of other agents $\vec{a}_{-i}$:
$$\sum_{a_i \in A_i} x_{\vec{a},\theta} = x_{\langle \vec{a}_{-i}, \theta_{-i} \rangle} \qquad (2)$$
For each $\theta \in \Theta$: $\sum_{\vec{a} \in A} x_{\vec{a},\theta} = 1$, and for each $\theta \in \Theta$, $\vec{a} \in A$: $x_{\vec{a},\theta} \ge 0$.
In order to make the description of the conditional independence constraints more concise, we use the additional variables $x_{\langle \vec{a}_{-i}, \theta_{-i} \rangle}$. These can be immediately substituted out. They represent the posterior probability that agent $i$, after becoming type $\theta_i$, thinks other agents will take actions $\vec{a}_{-i}$ when having types $\theta_{-i}$. These constraints enforce that an agent's posterior probabilities are unaffected by other agents' observations. Any feasible assignment of variables $x_{\vec{a},\theta}$ represents a valid agent-normal-form correlated equilibrium (ANFCE) strategy for the agents, and any integral solution is a valid pure strategy BNE. In order to find the optimal solution for a game with distribution over types $\tau \in \Delta(\Theta)$ and rewards $R : \Theta \times A \to \mathbb{R}$, we can solve the integer program: Maximize $\sum_{\theta, \vec{a}} \tau_\theta \, R(\theta, \vec{a}) \, x_{\vec{a},\theta}$ over variables $x_{\vec{a},\theta} \in \{0, 1\}$ subject to constraints (2).
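For intuition about what this objective computes, here is a brute-force baseline (our own illustration, not the paper's solver): it enumerates all pure joint strategies of a tiny two-agent common-payoff Bayesian game and maximizes expected reward. The integer program reaches the same optimum without this exponential enumeration.

```python
from itertools import product

def solve_cbg_bruteforce(types1, types2, actions, tau, R):
    """Optimal pure-strategy profile of a 2-agent common-payoff Bayesian
    game by enumeration.  tau[(t1, t2)] is the joint-type distribution
    and R[(t1, t2)][(a1, a2)] the common reward.  Exponential in the
    number of types; the integer program in the text avoids this blow-up.
    """
    best_val, best_profile = float('-inf'), None
    # a strategy maps each type of an agent to one of its actions
    for strat1 in product(actions, repeat=len(types1)):
        for strat2 in product(actions, repeat=len(types2)):
            s1 = dict(zip(types1, strat1))
            s2 = dict(zip(types2, strat2))
            val = sum(pr * R[(t1, t2)][(s1[t1], s2[t2])]
                      for (t1, t2), pr in tau.items())
            if val > best_val:
                best_val, best_profile = val, (s1, s2)
    return best_val, best_profile
```

In a coordination game where only agent 1's type reveals the target action, no strategy profile can coordinate on both targets, so the optimum falls short of 1.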
An ANFCE generalizes Bayes-Nash equilibria: a pure strategy ANFCE is a BNE. We can view
ANFCEs as having a mediator that each agent tells its type to and receives an action recommendation
from. An ANFCE is then a probability distribution across joint type/actions such that agents do not
want to lie to the mediator nor deviate from the mediator's recommendation. More importantly, they cannot deduce any information about other agents' types from the mediator's recommendation. We
cannot use an ANFCE directly, because it requires communication; however, a deterministic (i.e.,
integral) ANFCE requires no communication and is a BNE.
4 Bounded Belief DecPOMDPs
Here we show that we can convert a bounded belief DecPOMDP (BB-DecPOMDP) into an equivalent POMDP (that we call the belief-POMDP). A BB-DecPOMDP is a DecPOMDP where each
agent $i$ has a fixed upper bound $|\Theta_i|$ for the number of beliefs at each time-step. The belief-POMDP's states are factored, containing each agent's belief along with the DecPOMDP's state. The POMDP's actions are joint-strategies. Recently, Dibangoye et al. [3] showed that a finite horizon DecPOMDP can be converted into a finite horizon POMDP where a probability distribution over histories is a sufficient statistic that can be used as the POMDP's state. We extend this result to
infinite horizon problems when beliefs are bounded (note that a finite horizon problem always has
bounded belief). The main insight here is that we do not have to remember histories, only a distribution over belief-labels (without any a priori connection to the belief itself) as a sufficient statistic.
As such the same POMDP states can be used for all time-steps, enabling infinite horizon problems
to be solved.
In order to create the belief-POMDP we first transform observations so that they correspond one-toone with beliefs for each agent. This can be achieved naively by folding the previous belief into the
new observation so that each agent receives a [previous-belief, observation] pair; however, because
an agent has at most $|\Theta_i|$ beliefs we can partition these histories into at most $|\Theta_i|$ information-equivalent groups. Each group corresponds to a distinct belief, and instead of the [previous-belief, observation] pair we only need to provide the new belief's label.
Second, we factor our state to include each agent?s observation (now a belief-label) along with
the original underlying state. This transformation increases the state space polynomially. Third,
recall that a belief is the sum of information that an agent uses to make decisions. If agents know
each other's policies (e.g., by constructing a distributed policy together) then our modified state
(which includes beliefs for each agent) fully determines the dynamics of the system. States now
appear Markovian again. Therefore, a probability distribution across states is once again a sufficient
plan-time statistic (as proven in [3] and [9]). This distribution exactly corresponds to the Bayesian
prior (after receiving the belief-observation) of the common knowledge distribution of the current
Bayesian game being played as given by equation (1).
Finally, it's important to note that beliefs do not directly affect rewards or transitions. They therefore
have no meaning beyond the prior distribution they induce. We can therefore freely relabel and
reorder beliefs without changing the decision problem. This allows belief-observations in one time-step to use the same observation labels in the next time-step, even if the beliefs are different (in which
case the distribution will be different). We can use this fact to fold our mapping from histories to
belief-labels into the belief-POMDP's transition function.
We now formally define the belief-POMDP $\langle A', S', O', P', R', s'^{(0)} \rangle$ converted from the BB-DecPOMDP $\langle N, A, S, O, P, R, s^{(0)} \rangle$ (with belief labels $\Theta_i$ for each agent). The belief-POMDP has factored states $\langle \sigma, \theta_1, \cdots, \theta_n \rangle \in S'$, where $\sigma \in S$ is the underlying state and $\theta_i \in \Theta_i$ is agent $i$'s belief. $O' = \{\}$ (no observations). $A' = \prod_{i=1}^{n} \prod_{j=1}^{|\Theta_i|} A_i$ is the set of actions (one action for each agent for each belief). $P'(s' \mid s, a) = \sum_{[\theta, o] = \theta'} P(\sigma', o \mid \sigma, \langle a_{\theta_1}, \cdots, a_{\theta_n} \rangle)$ (a sum over equivalent joint-beliefs), where $a_{\theta_i}$ is the action agent $i$ would take if holding belief $\theta_i$. $R'(s, a) = R(\sigma, \langle a_{\theta_1}, \cdots, a_{\theta_n} \rangle)$, and $s'^{(0)} = s^{(0)}$ is the initial state distribution with each agent having the same belief.
Actions in this belief-POMDP are pure strategies for each agent specifying what each agent should do for every belief they might have. In other words, it is a mapping from observation to action ($A' = \{\Theta \to A\}$). The action space thus has size $\prod_i |A_i|^{|\Theta_i|}$, which is exponentially more actions than the number of joint-actions in the BB-DecPOMDP. Both the transition and reward functions use the modified joint-action $\langle a_{\theta_1}, \cdots, a_{\theta_n} \rangle$, which is the action that would be taken once agents see their beliefs and follow action $a \in A$. This makes the single agent in the belief-POMDP act like
a centralized mediator playing the sequence of Bayesian games induced by the BB-DecPOMDP.
At every time-step this centralized mediator must give a strategy to each agent (a solution to the
current Bayesian game). The mediator only knows what is commonly known and thus receives no
observations.
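For toy sizes the belief-POMDP's strategy actions can be enumerated explicitly. The sketch below, with assumed small action sets and belief counts of our own choosing, makes the size $\prod_i |A_i|^{|\Theta_i|}$ concrete.

```python
from itertools import product

def strategy_actions(action_sets, belief_counts):
    """Enumerate the belief-POMDP's actions: one pure strategy per agent,
    i.e. an action choice for each of that agent's beliefs.  The total
    count is prod_i |A_i| ** |Theta_i|, exponential in the beliefs.
    """
    per_agent = [list(product(A, repeat=k))
                 for A, k in zip(action_sets, belief_counts)]
    return [joint for joint in product(*per_agent)]
```

Even with two agents, two beliefs each, and a handful of actions, this set quickly outgrows the joint-action set, which is why the paper replaces naive action maximization with the integer program.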
This belief-POMDP is decision equivalent to the BB-DecPOMDP. The two models induce the same sequence of CBGs; therefore, there is a natural one-to-one mapping between policies of the two models that yield identical utilities. We show this constructively by providing the mapping:
Lemma 4.1. Given BB-DecPOMDP $\langle N, A, O, \Theta, S, P, R, s^{(0)} \rangle$ with policy $\pi : \Delta(S) \times O \to A$ and belief-POMDP $\langle A', S', O', P', R', s'^{(0)} \rangle$ as defined above, with policy $\pi' : \Delta(S') \to \{O' \to A'\}$: if $\pi(s^{(t)})_{i,o} = \pi'_i(s^{(t)}, o)$, then $V_\pi(s^{(0)}) = V'_{\pi'}(s'^{(0)})$ (the expected utilities of the two policies are equal).
We have shown that BB-DecPOMDPs can be turned into POMDPs but this does not mean that we
can easily solve these POMDPs using existing methods. The action-space of the belief-POMDP
is exponential with respect to the number of observations of the BB-DecPOMDP. Most existing
POMDP algorithms assume that actions can be enumerated efficiently, which isn't possible beyond
the simplest belief-POMDP. One notable exception is an extension to PERSEUS that randomly
samples actions [18]. This approach works for some domains; however, often for decentralized
problems only one particular strategy proves effective, making a randomized approach less useful.
Luckily, the optimal strategy is the solution to a CBG, which we have already shown how to solve.
We can then use existing POMDP algorithms and replace action maximization with an integer linear
program. We show how to do this for PERSEUS below. First, we make this result more useful by
giving a method to convert a DecPOMDP into a BB-DecPOMDP.
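Because the belief-POMDP has no observations, each action induces a deterministic belief update, and a point-based (PERSEUS-style) backup only needs the best current alpha-vector at the single successor belief. The sketch below is our own minimal illustration under that assumption, with plain enumeration over actions standing in for the paper's integer program.

```python
def backup(b, alphas, actions, T, R, gamma):
    """Point-based Bellman backup at belief b for a POMDP with no
    observations.  T[a][s][s2] and R[a][s] define the model; alphas is
    the current set of alpha-vectors.  Returns the new alpha-vector at b.
    """
    n = len(b)
    best_alpha, best_val = None, float('-inf')
    for a in actions:
        # deterministic belief update: b'(s2) = sum_s T[a][s][s2] * b(s)
        b2 = [sum(T[a][s][s2] * b[s] for s in range(n)) for s2 in range(n)]
        # best current alpha-vector at the successor belief
        succ = max(alphas, key=lambda al: sum(p * v for p, v in zip(b2, al)))
        alpha_a = [R[a][s] + gamma * sum(T[a][s][s2] * succ[s2]
                                         for s2 in range(n))
                   for s in range(n)]
        val = sum(p * v for p, v in zip(b, alpha_a))
        if val > best_val:
            best_val, best_alpha = val, alpha_a
    return best_alpha
```

In the full algorithm, the loop over `actions` is exactly the step replaced by solving the CBG integer program, since enumerating all strategy actions is infeasible.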
5 Optimal belief compression
We present here a novel and optimal belief compression method that transforms any DecPOMDP
into a BB-DecPOMDP. The idea is to let agents themselves decide how they want to merge their
beliefs and to add this decision directly into the problem's structure. This pushes the onus of belief
compression onto the BB-DecPOMDP solver instead of an explicit approximation method. We give
agents the ability to optimally compress their own beliefs by interleaving each normal time-step
(where we fully expand each belief) with a compression time-step (where the agents must explicitly
decide how to best merge beliefs). We call these phases belief expansion and belief compression,
respectively.
The first phase acts like the original DecPOMDP without any belief compression: the observation
given to each agent is its previous belief along with the DecPOMDP's observation. No information
is lost during this phase; each observation for each agent-type (agents holding the same belief are
the same type) results in a distinct belief. This belief expansion occurs with the same transitions and
rewards as the original DecPOMDP.
The dynamics of the second phase are unrelated to the DecPOMDP. Instead, an agent's actions are
decisions about how to compress its belief. In this phase, each agent-type must choose its next
belief but they only have a fixed number of beliefs to choose from (the number of beliefs ti is a free
parameter). All agent-types that choose the same belief will be unable to distinguish themselves in
the next time-step; the belief label in the next time-step will equal the action index they take in the
belief compression phase. All rewards are zero. This second phase can be seen as a purely mental
phase and does not affect the environment beyond changes to beliefs, although, as a technical matter,
we convert our discount factor to its square root to account for these new interleaved states.
Given a DecPOMDP ⟨N, A, S, O, P, R, s(0)⟩ (with states σ ∈ S and observations ω ∈ O) we
formally define the BB-DecPOMDP approximation model ⟨N′, A′, O′, S′, P′, R′, s′(0)⟩ with belief
set size parameters t1, …, tn as:

• N′ = N and A′_i = {a1, …, a_max(|Ai|, ti)}
• O′_i = (Oi ∪ {∅}) × {1, 2, …, ti} with factored observation oi = ⟨ωi, βi⟩ ∈ O′_i
• S′ = S × O′ with factored state s = ⟨σ, ω1, β1, …, ωn, βn⟩ ∈ S′
• P′(s′|s, a) =
      P(σ′, ⟨ω′1, …, ω′n⟩ | σ, a)   if ∀i : ωi = ∅, ω′i ≠ ∅ and β′i = βi
      1                              if ∀i : ωi ≠ ∅, ω′i = ∅ and β′i = ai, σ′ = σ
      0                              otherwise
• R′(s, a) = R(σ, a) if ∀i : ωi = ∅, and 0 otherwise
• s′(0) = ⟨s(0), ∅, 1, …, ∅, 1⟩ is the initial state distribution
0
We have constructed the BB-DecPOMDP such that at each time-step agents receive two observations: an observation factor ω, and their belief factor β (i.e., type). The observation factor is ∅ at
the expansion phase and the most recent observation, as given by the DecPOMDP, when starting the
compression phase. The observation factor therefore distinguishes which phase the model is currently in. Agents should either all have ωi = ∅ or none of them. The probability of transitioning to a
state where some agents have the empty-set observation while others don't is always zero. Note that
transitions during the contraction phase are deterministic (probability one) and the underlying state
σ does not change. The action set sizes may be different in the two phases, however we can easily
get around this problem by mapping any action outside of the designated actions to an equivalent
one inside the designated actions. The new BB-DecPOMDP's state size is |S′| = |S|(|O| + 1)^n t^n.
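To make the construction concrete, the factored state count can be checked mechanically. The sketch below enumerates the factored states of a toy two-agent model; all names and sizes are illustrative assumptions, not taken from the paper:

```python
from itertools import product

# Toy sizes for a hypothetical 2-agent DecPOMDP (illustrative only).
S = ["s0", "s1"]          # underlying states of the original model
O = ["o0", "o1"]          # per-agent observations of the original model
n = 2                     # number of agents
t = 3                     # belief-set size t_i (same for both agents here)

# Factored BB-DecPOMDP state: (sigma, omega_1, beta_1, ..., omega_n, beta_n),
# where omega_i ranges over O plus the empty-set marker used in the
# expansion phase, and beta_i is a belief label in {1, ..., t}.
EMPTY = None
obs_factor = O + [EMPTY]
belief_factor = list(range(1, t + 1))

states = list(product(S, *([obs_factor, belief_factor] * n)))

# Matches |S'| = |S| * (|O| + 1)^n * t^n from the construction.
assert len(states) == len(S) * (len(O) + 1) ** n * t ** n
print(len(states))  # 2 * 3^2 * 3^2 = 162
```

Even with these tiny sizes the blow-up relative to |S| = 2 is visible, which foreshadows the state-space bottleneck discussed in the experiments.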
6
Point-based value iteration for BB-DecPOMDPs
We have now shown that a DecPOMDP can be approximated by a BB-DecPOMDP (using optimal
belief-compression) and that this BB-DecPOMDP can be converted into a belief-POMDP where
selecting an optimal action is equivalent to solving a collaborative Bayesian game (CBG). We have
given a (relatively) efficient integer linear program for solving these CBG. The combination of
these three results opens the door for new classes of DecPOMDP algorithms based on previous
POMDP algorithms. The only difference between existing POMDP algorithms and one tailored for
BB-DecPOMDPs is that instead of maximizing over actions (which are exponential in the belief-POMDP), we must solve a stage-game CBG equivalent to the stage decision problem.
Here, we develop an algorithm for DecPOMDPs based on the PERSEUS algorithm [18] for
POMDPs, a specific version of point-based value iteration (PBVI) [16]. Our value function representation is a standard convex and piecewise-linear value-vector representation over the belief
simplex. This is the same representation that PERSEUS and most other value-based POMDP algorithms use. It consists of a set of hyperplanes Γ = {α1, α2, …, αm} where αi ∈ R^|S|. These
hyperplanes each represent the value of a particular policy across beliefs. The value function is then
the maximum over all hyperplanes. For a belief b ∈ R^|S| its value as given by the value function Γ
is V_Γ(b) = max_{α∈Γ} α · b. Such a representation acts as both a value function and an implicit policy.
While each α vector corresponds to the value achieved by following an unspecified policy, we can
reconstruct that policy by computing the best one-step strategy, computing the successor state, and
repeating the process.
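The value-vector representation is straightforward to evaluate. Here is a minimal sketch (the alpha-vectors are toy numbers, illustrative only) of computing V_Γ(b) = max_{α∈Γ} α · b and the maximizing vector that the implicit policy would follow:

```python
import numpy as np

# A value function as a set of alpha-vectors Gamma over 2 states.
gamma_set = np.array([
    [1.0, 0.0],   # value of one policy across the two states
    [0.0, 1.0],   # value of another policy
    [0.6, 0.6],   # a policy that hedges between the states
])

def value(belief, gamma_set):
    """V_Gamma(b) = max over alpha-vectors of alpha . b."""
    return float(np.max(gamma_set @ belief))

def best_vector(belief, gamma_set):
    """Index of the maximizing alpha-vector (the implicit policy choice)."""
    return int(np.argmax(gamma_set @ belief))

b = np.array([0.5, 0.5])
print(value(b, gamma_set))        # 0.6: the hedging policy wins at uniform belief
print(best_vector(b, gamma_set))  # 2
```

Note how the maximizing vector changes with the belief: at b = (1, 0) the first vector wins instead, which is exactly the piecewise-linear structure described above.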
The high-level outline of our point-based algorithm is the same as PERSEUS. First, we sample
common-knowledge beliefs and collect them into a belief set B. This is done by taking random
actions from a given starting belief and recording the resulting belief states. We then start with a
poor approximation of the value function and improve it over successive iterations by performing
a one-step backup for each belief b ∈ B. Each backup produces a policy which yields a value-vector
to improve the value function. PERSEUS improves standard PBVI during each iteration by
skipping beliefs already improved by another backup. This reduces the number of backups needed.
In order to operate on belief-POMDPs, we replace PERSEUS's backup operation with one that uses
our integer program.
In order to back up a particular belief point we must maximize the utility of a strategy x. The
utility is computed using the immediate reward combined with our value function's current estimate
of a chosen continuation policy that has value vector α. Thus, a resulting belief b′ will achieve
estimated value Σ_s b′(s)α(s). The resulting belief b′ after taking joint action ā from belief b is
b′(s′) = Σ_s b(s)P(s′|s, ā). Putting these together, along with the probabilities x_{ā,s} of taking joint
action ā in state s, we get the value of a strategy x from belief b followed by continuation utility α:

    V_{x,α}(b) = Σ_{s∈S} b(s) Σ_{ā∈A} Σ_{s′∈S} P(s′|s, ā) α(s′) x_{ā,s}        (3)
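Equation (3) is a plain triple sum. The sketch below evaluates it for a randomly generated toy model; the sizes, the transition model, and the strategy x are illustrative assumptions, not from any benchmark:

```python
import numpy as np

# Illustrative sizes: 2 underlying states, 3 joint actions (toy numbers).
nS, nA = 2, 3
rng = np.random.default_rng(0)

P = rng.random((nA, nS, nS))          # P[a, s, s'] transition probabilities
P /= P.sum(axis=2, keepdims=True)     # normalize each row to a distribution
alpha = rng.random(nS)                # continuation value vector
b = np.array([0.3, 0.7])              # current common-knowledge belief

# A strategy x[a, s]: probability of joint action a when nature's state is s.
x = rng.random((nA, nS))
x /= x.sum(axis=0, keepdims=True)

# Equation (3): V_{x,alpha}(b) = sum_s b(s) sum_a sum_s' P(s'|s,a) alpha(s') x[a,s]
V = sum(b[s] * sum(float(P[a, s] @ alpha) * x[a, s] for a in range(nA))
        for s in range(nS))
print(V)
```

Because x sums to one over joint actions for each state, V is a convex combination of the entries of alpha, so it always lies between min(alpha) and max(alpha).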
This is the quantity that we wish to maximize, and can combine with constraints (2) to form an
integer linear program that returns the best action for each agent for each observation (strategy)
given a continuation policy α. To find the best strategy/continuation-policy pair, we can perform this
search over all continuation vectors in Γ:

    Maximize:    equation (3)
    Over:        x ∈ {0, 1}^{|S||A|}, α ∈ Γ
    Subject to:  inequalities (2)                                              (4)
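For very small games the search in (4) can be mimicked by brute force: enumerate deterministic per-observation strategies for each agent and the continuation vectors in Γ, and keep the best pair. The sketch below is such a stand-in for the integer program; all sizes and numbers are illustrative assumptions:

```python
import numpy as np
from itertools import product

# Brute-force stand-in for integer program (4) on a tiny 2-agent game.
rng = np.random.default_rng(1)
nObs, nAct, nS = 2, 2, 2              # per-agent observations/actions, states

P = rng.random((nAct, nAct, nS, nS))  # P[a1, a2, s, s']
P /= P.sum(axis=3, keepdims=True)
Gamma = [rng.random(nS) for _ in range(3)]   # continuation value vectors
b = np.array([0.4, 0.6])                     # common-knowledge belief
# obs_of[s] = (observation of agent 1, observation of agent 2) in state s
obs_of = [(0, 1), (1, 0)]

best = (-np.inf, None, None)
# A deterministic strategy maps each agent's observation to an action.
for pol1 in product(range(nAct), repeat=nObs):
    for pol2 in product(range(nAct), repeat=nObs):
        for alpha in Gamma:
            V = 0.0
            for s in range(nS):
                o1, o2 = obs_of[s]
                a1, a2 = pol1[o1], pol2[o2]
                V += b[s] * float(P[a1, a2, s] @ alpha)
            if V > best[0]:
                best = (V, (pol1, pol2), alpha)
print(best[0])
```

This enumeration is exponential in the number of observations, which is exactly why the paper replaces it with an integer linear program for anything beyond toy sizes.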
Each integer program has |S||A| variables, but for the underlying state factors (nature's type) there
are only (number of unobserved states) × |A| linearly independent constraints, one for each unobserved factor and joint action.
Therefore the number of unobserved states does not increase the number of free variables. Taking
this into account, the number of free variables is O(|O||A|). The optimization problem given above
requires searching over all α ∈ Γ to find the best continuation value-vector. We could solve this
problem as one large linear program with a different set of variables for each α, however each set of
variables would be independent and thus can be solved faster as separate individual problems.
We initialize our starting value function Γ0 to have a single low conservative value vector (such as
⟨Rmin/γ, …, Rmin/γ⟩). Every iteration then attempts to improve the value function at each belief in our belief
set B. A random common-knowledge belief b ∈ B is selected and we compute an improved policy
for that belief by performing a one-step backup. This backup involves finding the best immediate
strategy profile (an action for each observation of each agent) at belief b along with the best continuation policy from Γt. We then compute the value of the resulting strategy + continuation policy
(which is itself a policy) and insert this new α-vector into Γt+1. Any belief that is improved by α
(including b) is removed from B. We then select a new common-knowledge belief and iterate until
every belief in B has been improved. We give this as Algorithm 1.
This algorithm will iteratively improve the value function at all beliefs. The algorithm stops when
the value function improves less than the stopping criterion ? . Therefore, at every iteration at least
one of the beliefs must improve by at least ? . Because the value function at every belief is bounded
Algorithm 1 The modified point-based value iteration for DecPOMDPs
Inputs: DecPOMDP M, discount γ, belief bounds |βi|, stopping criterion ε
Output: value function Γ
 1: ⟨N, A, O, S, P, R, s(0)⟩ ← BB-DecPOMDP approximation of M as described in section 5
 2: B ← sampling of states using a random walk from s(0)
 3: Γ′ ← {⟨Rmin/γ, …, Rmin/γ⟩}
 4: repeat
 5:     B̃ ← B; Γ ← Γ′; Γ′ ← ∅
 6:     while B̃ ≠ ∅ do
 7:         b ← Rand(b ∈ B̃)
 8:         α ← Γ(b)
 9:         α′ ← optimal point of integer program (4)
10:         if α′(b) > α(b) then
11:             α ← α′
12:         Γ′ ← Γ′ ∪ {α}
13:         for all b̃ ∈ B̃ do
14:             if α(b̃) > Γ(b̃) then
15:                 B̃ ← B̃ \ {b̃}
16: until Γ′ − Γ < ε
17: return Γ
above by Rmax/γ we can guarantee that the algorithm will take fewer than (|B| Rmax)/(γ · ε)
iterations.
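Abstracting away the integer-program backup (stubbed as a callback), the sweep structure of Algorithm 1 can be sketched as follows; the names, the toy backup, and the tie-handling details are illustrative assumptions rather than the paper's exact code:

```python
import numpy as np

def perseus_sweeps(beliefs, backup, n_states, r_min, gamma, eps=5e-4, seed=0):
    """PERSEUS-style PBVI sweep (a sketch, not the paper's implementation).
    `backup(b, Gamma)` stands in for the integer-program backup and must
    return an alpha-vector; beliefs already served by an earlier backup in
    the same sweep are skipped, as in Algorithm 1."""
    rng = np.random.default_rng(seed)
    value = lambda b, G: max(float(a @ b) for a in G)
    Gamma_new = [np.full(n_states, r_min / gamma)]  # conservative start
    while True:
        Gamma, Gamma_new = Gamma_new, []
        todo = list(beliefs)
        while todo:
            b = todo[rng.integers(len(todo))]
            alpha = backup(b, Gamma)
            if float(alpha @ b) <= value(b, Gamma):
                # backup did not improve b: keep its best old vector instead
                alpha = max(Gamma, key=lambda a: float(a @ b))
            Gamma_new.append(alpha)
            # drop every belief already served by the vectors added so far
            todo = [bb for bb in todo if value(bb, Gamma_new) < value(bb, Gamma)]
        if max(value(b, Gamma_new) - value(b, Gamma) for b in beliefs) < eps:
            return Gamma_new

# Tiny demonstration: a constant backup converges after one improving sweep.
beliefs = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
G = perseus_sweeps(beliefs, lambda b, G: np.array([1.0, 2.0]), 2, 0.0, 0.9)
print(len(G))
```

The sweep terminates because each inner iteration serves at least the chosen belief, and the outer loop stops once no belief improves by more than eps, mirroring the bound discussed above.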
Algorithm 1 returns a value function. Ultimately we want a policy. Using the value function a policy
can be constructed in a greedy manner for each player. This is accomplished using a very similar
procedure to how we construct a policy greedily in the fully observable case. Every time-step the
actors in the world can dynamically compute their next action without needing to plan their entire
policy.
7
Experiments
We tested our algorithm on six well-known benchmark problems [1, 14]: Dec-Tiger, Broadcast, Grid-small, Cooperative Box Pushing, Recycling Robots, and Wireless Network. On all of these problems
we met or exceeded the current best solution. This is particularly impressive considering that some
of the algorithms were designed to take advantage of specific problem structure, while our algorithm
is general. We also attempted to solve the Mars Rovers problem, except its belief-POMDP transition
model was too large for our 8GB memory limit.
We implemented our PBVI for BB-DecPOMDP algorithm in Java using the GNU Linear Programming Kit to solve our integer programs. We ran the algorithm on all six benchmark problems using
the dynamic belief compression approximation scheme to convert each of the DecPOMDP problems
into BB-DecPOMDPs. For each problem we converted them into a BB-DecPOMDP with one, two,
three, four, and five dynamic beliefs (the value of ti ).
We used the following fixed parameters while running the PBVI algorithm: ε = 0.0005. We
sampled 3,000 belief points to a maximum depth of 36. All of the problems, except Wireless,
were solved using a discount factor γ of 0.9 for the original problem and √0.9 for our dynamic
approximation (recall that an agent visits two states for every one of the original problem). Wireless
has a discount factor of 0.99. To compensate for this low discount factor in this domain, we sampled
30,000 beliefs to a depth of 360. Our empirical evaluations were run on a six-core 3.20GHz Phenom
processor with 8GB of memory. We terminated the algorithm and used the current value if it ran
longer than a day (only Box Pushing and Wireless took longer than five hours). The final value
reported is the value of the computed decentralized policy on the original DecPOMDP run to a
horizon which pushed the utility error below the reported precision.
Our algorithm performed very well on all benchmark problems (Table 1). Surprisingly, most of
the benchmark problems only require two approximating beliefs in order to beat the previously best
Problem      |S|  |Ai| |Oi|  Previous Best    1-Belief         2-Beliefs         3-Beliefs        4-Beliefs       5-Beliefs
                             Utility          Utility     |Γ|  Utility      |Γ|  Utility     |Γ|  Utility    |Γ|  Utility    |Γ|
Dec-Tiger      2    2    2   13.4486 [14]     -20.000       2  4.6161       187  13.4486     231  13.4486    801  13.4486    809
Broadcast      4    2    2   9.1 [1]          9.2710       36  9.2710        44  9.2710       75  9.2710      33  9.2710     123
Recycling      4    3    2   31.92865 [1]     26.3158       8  31.9291       13  31.9291      37  31.9291    498  31.9291    850
Grid small    16    5    2   6.89 [14]        5.2716      168  6.8423       206  6.9826      276  6.9896     358  6.9958     693
Box Pushing  100    4    5   149.854 [2]      127.1572    258  223.8674     357  224.1387    305  -               -
Wireless      64    2    6   -175.40 [8]      -208.0437    99  -167.1025    374  -                -               -

Table 1: Utility achieved by our PBVI-BB-DecPOMDP algorithm compared to the previously best
known policies on a series of standard benchmarks. Higher is better. Our algorithm beats all previous
results except on Dec-Tiger, where we believe an optimal policy has already been found.
known solution. Only Dec-Tiger needs three beliefs. None of the problems benefited substantially
from using four or five beliefs. Only grid-small continued to improve slightly when given more
beliefs. This lack of improvement with extra beliefs is strong evidence that our BB-DecPOMDP
approximation is quite powerful and that the policies found are near optimal. It also suggests that
these problems do not have terribly complicated optimal policies and new benchmark problems
should be proposed that require a richer belief set.
The belief-POMDP state-space size is the primary bottleneck of our algorithm. Recall that this state-space is factored, causing its size to be O(|S| |O|^n t^n). This number can easily become intractably
large for problems with a moderate number of states and observations, such as the Mars Rovers
problem. Taking advantage of sparsity can mitigate this problem (our implementation uses sparse
vectors), however value-vectors tend to be dense and thus sparsity is only a partial solution. A
large state-space also requires a greater number of belief samples to adequately cover and represent
the value function; with more states it becomes increasingly likely that a random walk will fail to
traverse a desirable region of the state-space. This problem is not nearly as bad as it would be for
a normal POMDP because much of the belief-space is unreachable and a belief-POMDP's value
function has a great deal of symmetry due to the label invariance of beliefs (a relabeling of beliefs
will still have the same utility).
8
Conclusion
This paper presented three relatively independent contributions towards solving DecPOMDPs. First,
we introduce an efficient integer program for solving collaborative Bayesian games. Other approaches require solving CBGs as a sub-problem, and this could directly improve those algorithms.
Second, we showed how a DecPOMDP with bounded belief can be converted into a POMDP. Almost all methods bound the beliefs in some way (through belief compression or finite horizons),
and viewing these problems as POMDPs with large action spaces could precipitate new approaches.
Third, we showed how to achieve optimal belief compression by allowing agents themselves to decide how best to merge beliefs. This allows any DecPOMDP to be converted into a BB-DecPOMDP.
Finally, these independent contributions can be combined to permit an existing POMDP algorithm (here we chose PERSEUS) to be used to solve DecPOMDPs. We showed that this approach
is a significant improvement over existing infinite-horizon algorithms. We believe this opens the
door towards a large and fruitful line of research into modifying and adapting existing value-based
POMDP algorithms towards the specific difficulties of belief-POMDPs.
References
[1] C. Amato, D. S. Bernstein, and S. Zilberstein. Optimizing memory-bounded controllers for
decentralized POMDPs. In Proceedings of the Twenty-Third Conference on Uncertainty in
Artificial Intelligence, pages 1–8, Vancouver, British Columbia, 2007.
[2] C. Amato and S. Zilberstein. Achieving goals in decentralized POMDPs. In Proceedings of The
8th International Conference on Autonomous Agents and Multiagent Systems - Volume 1, pages
593–600, 2009.
[3] J. S. Dibangoye, C. Amato, O. Buffet, F. Charpillet, S. Nicol, T. Iwamura, O. Buffet, I. Chades,
M. Tagorti, B. Scherrer, et al. Optimally solving Dec-POMDPs as continuous-state MDPs. In Proceedings of the Twenty-Third International Joint Conference on Artificial Intelligence, 2013.
[4] P. Doshi and P. Gmytrasiewicz. Monte Carlo sampling methods for approximating interactive
POMDPs. Journal of Artificial Intelligence Research, 34(1):297–337, 2009.
[5] R. Emery-Montemerlo, G. Gordon, J. Schneider, and S. Thrun. Approximate solutions for
partially observable stochastic games with common payoffs. In Proceedings of the Third International Joint Conference on Autonomous Agents and Multiagent Systems - Volume 1, pages
136–143. IEEE Computer Society, 2004.
[6] R. Emery-Montemerlo, G. Gordon, J. Schneider, and S. Thrun. Game theoretic control for
robot teams. In Robotics and Automation, 2005. ICRA 2005. Proceedings of the 2005 IEEE
International Conference on, pages 1163–1169. IEEE, 2005.
[7] F. Forges. Correlated equilibrium in games with incomplete information revisited. Theory and
Decision, 61(4):329–344, 2006.
[8] A. Kumar and S. Zilberstein. Anytime planning for decentralized POMDPs using expectation
maximization. In Proceedings of the Twenty-Sixth Conference on Uncertainty in Artificial
Intelligence, 2012.
[9] F. A. Oliehoek. Sufficient plan-time statistics for decentralized POMDPs. In Proceedings of the
Twenty-Third International Joint Conference on Artificial Intelligence, 2013.
[10] F. A. Oliehoek, M. T. Spaan, J. S. Dibangoye, and C. Amato. Heuristic search for identical
payoff Bayesian games. In Proceedings of the 9th International Conference on Autonomous
Agents and Multiagent Systems, 2010.
[11] F. A. Oliehoek, M. T. Spaan, and N. Vlassis. Optimal and approximate Q-value functions for
decentralized POMDPs. Journal of Artificial Intelligence Research, 32(1):289–353, 2008.
[12] F. A. Oliehoek and N. Vlassis. Q-value functions for decentralized POMDPs. In Proceedings
of the 6th International Joint Conference on Autonomous Agents and Multiagent Systems, page
220. ACM, 2007.
[13] F. A. Oliehoek, S. Whiteson, and M. T. Spaan. Lossless clustering of histories in decentralized
POMDPs. In Proceedings of The 8th International Conference on Autonomous Agents and
Multiagent Systems, pages 577–584, 2009.
[14] J. Pajarinen and J. Peltonen. Periodic finite state controllers for efficient POMDP and Dec-POMDP
planning. In Proc. of the 25th Annual Conf. on Neural Information Processing Systems, 2011.
[15] C. H. Papadimitriou and J. Tsitsiklis. On the complexity of designing distributed protocols.
Information and Control, 53(3):211–218, 1982.
[16] J. Pineau, G. Gordon, and S. Thrun. Point-based value iteration: an anytime algorithm for
POMDPs. In Proceedings of the 18th International Joint Conference on Artificial Intelligence,
IJCAI'03, pages 1025–1030, 2003.
[17] M. Roth, R. Simmons, and M. Veloso. Reasoning about joint beliefs for execution-time
communication decisions. In Proceedings of the Fourth International Joint Conference on Autonomous Agents and Multiagent Systems, pages 786–793. ACM, 2005.
[18] M. T. Spaan and N. Vlassis. Perseus: Randomized point-based value iteration for POMDPs.
Journal of Artificial Intelligence Research, 24(1):195–220, 2005.
[19] D. Szer, F. Charpillet, S. Zilberstein, et al. MAA*: A heuristic search algorithm for solving
decentralized POMDPs. In 21st Conference on Uncertainty in Artificial Intelligence (UAI 2005),
2005.
Convergence of Monte Carlo Tree Search in
Simultaneous Move Games
Viliam Lisý1    Vojtěch Kovařík1    Marc Lanctot2    Branislav Bošanský1

1 Agent Technology Center, Dept. of Computer Science and Engineering,
  FEE, Czech Technical University in Prague
  <name>.<surname>@agents.fel.cvut.cz
2 Department of Knowledge Engineering, Maastricht University, The Netherlands
  marc.lanctot@maastrichtuniversity.nl
Abstract
We study Monte Carlo tree search (MCTS) in zero-sum extensive-form games
with perfect information and simultaneous moves. We present a general template of MCTS algorithms for these games, which can be instantiated by various
selection methods. We formally prove that if a selection method is ε-Hannan consistent in a matrix game and satisfies additional requirements on exploration, then
the MCTS algorithm eventually converges to an approximate Nash equilibrium
(NE) of the extensive-form game. We empirically evaluate this claim using regret
matching and Exp3 as the selection methods on randomly generated games and
empirically selected worst-case games. We confirm the formal result and show
that additional MCTS variants also converge to approximate NE on the evaluated
games.
1
Introduction
Non-cooperative game theory is a formal mathematical framework for describing behavior of interacting self-interested agents. Recent interest has brought significant advancements from the algorithmic perspective and new algorithms have led to many successful applications of game-theoretic
models in security domains [1] and to near-optimal play of very large games [2]. We focus on an
important class of two-player, zero-sum extensive-form games (EFGs) with perfect information and
simultaneous moves. Games in this class capture sequential interactions that can be visualized as a
game tree. The nodes correspond to the states of the game, in which both players act simultaneously.
We can represent these situations using the normal form (i.e., as matrix games), where the values
are computed from the successor sub-games. Many well-known games are instances of this class,
including card games such as Goofspiel [3, 4], variants of pursuit-evasion games [5], and several
games from general game-playing competition [6].
Simultaneous-move games can be solved exactly in polynomial time using the backward induction
algorithm [7, 4], recently improved with alpha-beta pruning [8, 9]. However, the depth-limited
search algorithms based on the backward induction require domain knowledge (an evaluation function) and computing the cutoff conditions requires linear programming [8] or using a double-oracle
method [9], both of which are computationally expensive. For practical applications and in situations
with limited domain knowledge, variants of simulation-based algorithms such as Monte Carlo Tree
Search (MCTS) are typically used in practice [10, 11, 12, 13]. In spite of the success of MCTS and
namely its variant UCT [14] in practice, there is a lack of theory analyzing MCTS outside two-player
perfect-information sequential games. To the best of our knowledge, no convergence guarantees are
known for MCTS in games with simultaneous moves or general EFGs.
Figure 1: A game tree of a game with perfect information and simultaneous moves. Only the leaves
contain the actual rewards; the remaining numbers are the expected reward for the optimal strategy.
In this paper, we present a general template of MCTS algorithms for zero-sum perfect-information
simultaneous move games. It can be instantiated using any regret minimizing procedure for matrix
games as a function for selecting the next actions to be sampled. We formally prove that if the algorithm uses an ε-Hannan consistent selection function, which assures attempting each action infinitely
many times, the MCTS algorithm eventually converges to a subgame-perfect ε-Nash equilibrium of
the extensive-form game. We empirically evaluate this claim using two different ε-Hannan consistent procedures: regret matching [15] and Exp3 [16]. In the experiments on randomly generated and
worst-case games, we show that the empirical speed of convergence of the algorithms based on our
template is comparable to recently proposed MCTS algorithms for these games. We conjecture that
many of these algorithms also converge to ε-Nash equilibrium and that our formal analysis could be
extended to include them.
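As one example of an ε-Hannan consistent selection procedure of the kind this template plugs in, the following is a hedged sketch of regret matching with uniform exploration; the class name, the exploration parameter, and the toy loop are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

class RegretMatching:
    """Regret-matching selection for one player in a matrix game.
    Mixing in uniform exploration (gamma) keeps every action sampled
    infinitely often, as the convergence argument requires."""
    def __init__(self, n_actions, gamma=0.05, seed=0):
        self.regret = np.zeros(n_actions)
        self.gamma = gamma
        self.rng = np.random.default_rng(seed)

    def strategy(self):
        pos = np.maximum(self.regret, 0.0)
        n = len(self.regret)
        base = pos / pos.sum() if pos.sum() > 0 else np.full(n, 1.0 / n)
        return (1 - self.gamma) * base + self.gamma / n

    def select(self):
        return int(self.rng.choice(len(self.regret), p=self.strategy()))

    def update(self, action, payoffs):
        # payoffs[a]: counterfactual payoff of each action this round
        self.regret += payoffs - payoffs[action]

# Row player of matching pennies against a uniformly random opponent.
# The mixed strategy always keeps at least gamma/n on each action.
rm = RegretMatching(2)
for _ in range(5000):
    a = rm.select()
    opp = int(rm.rng.integers(2))
    payoffs = np.array([1.0 if opp == 0 else 0.0, 1.0 if opp == 1 else 0.0])
    rm.update(a, payoffs)
print(rm.strategy())
```

In the MCTS template such a selector would sit at every inner node, one instance per player, with the sampled subgame value fed back through `update`.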
2
Definitions and background
A finite zero-sum game with perfect information and simultaneous moves can be described by a
tuple (N, H, Z, A, T, u1, h0), where N = {1, 2} contains player labels, H is a set of inner states
and Z denotes the terminal states. A = A1 × A2 is the set of joint actions of individual players and
we denote A1(h) = {1 … mh} and A2(h) = {1 … nh} the actions available to individual players
in state h ∈ H. The transition function T : H × A1 × A2 → H ∪ Z defines the successor state given
a current state and actions for both players. For brevity, we sometimes denote T(h, i, j) ≡ h_ij.
The utility function u1 : Z → [vmin, vmax] ⊆ R gives the utility of player 1, with vmin and vmax
denoting the minimum and maximum possible utility respectively. Without loss of generality we
assume vmin = 0, vmax = 1, and ∀z ∈ Z, u2(z) = 1 − u1(z). The game starts in an initial state h0.
A matrix game is a single-stage simultaneous move game with action sets A1 and A2. Each entry in the matrix M = (a_ij), where (i, j) ∈ A1 × A2 and a_ij ∈ [0, 1], corresponds to a payoff (to player 1) if row i is chosen by player 1 and column j by player 2. A strategy σ_q ∈ Δ(A_q) is a distribution over the actions in A_q. If σ1 is represented as a row vector and σ2 as a column vector, then the expected value to player 1 when both players play with these strategies is u1(σ1, σ2) = σ1 M σ2. Given a profile σ = (σ1, σ2), define the utilities against best response strategies to be u1(br, σ2) = max_{σ1′ ∈ Δ(A1)} σ1′ M σ2 and u1(σ1, br) = min_{σ2′ ∈ Δ(A2)} σ1 M σ2′. A strategy profile (σ1, σ2) is an ε-Nash equilibrium of the matrix game M if and only if

u1(br, σ2) − u1(σ1, σ2) ≤ ε   and   u1(σ1, σ2) − u1(σ1, br) ≤ ε.   (1)
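Condition (1) is straightforward to check numerically for a given profile, since the best-response maxima and minima are attained at pure strategies. The following small Python sketch is our own illustration, not part of the paper:

```python
def u(M, s1, s2):
    """Expected utility u1(s1, s2) = s1 M s2 for mixed strategies s1, s2."""
    return sum(s1[i] * M[i][j] * s2[j]
               for i in range(len(s1)) for j in range(len(s2)))

def best_response_gaps(M, s1, s2):
    """The two deviation gaps from condition (1); best responses can be pure."""
    val = u(M, s1, s2)
    u_br_vs_2 = max(sum(M[i][j] * s2[j] for j in range(len(s2)))
                    for i in range(len(M)))
    u_1_vs_br = min(sum(s1[i] * M[i][j] for i in range(len(s1)))
                    for j in range(len(M[0])))
    return u_br_vs_2 - val, val - u_1_vs_br

def is_eps_nash(M, s1, s2, eps):
    g1, g2 = best_response_gaps(M, s1, s2)
    return g1 <= eps and g2 <= eps

# Matching pennies: the uniform profile is an exact Nash equilibrium.
M = [[1.0, 0.0], [0.0, 1.0]]
print(is_eps_nash(M, [0.5, 0.5], [0.5, 0.5], eps=1e-9))  # True
```

A pure profile such as ([1, 0], [1, 0]) fails the check for small ε, since player 2 can deviate profitably.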
Two-player perfect information games with simultaneous moves are sometimes appropriately called stacked matrix games because at every state h each joint action from the set A1(h) × A2(h) either leads to a terminal state or to a subgame which is itself another stacked matrix game (see Figure 1).
A behavioral strategy for player q is a mapping from states h ∈ H to a probability distribution over the actions A_q(h), denoted σ_q(h). Given a profile σ = (σ1, σ2), define the probability of reaching a terminal state z under σ as π^σ(z) = π1(z)π2(z), where each π_q(z) is a product of the probabilities of the actions taken by player q along the path to z. Define Σ_q to be the set of behavioral strategies for player q. Then for any strategy profile σ = (σ1, σ2) ∈ Σ1 × Σ2 we define the expected utility of the strategy profile (for player 1) as

u(σ) = u(σ1, σ2) = Σ_{z∈Z} π^σ(z) u1(z).   (2)
An ε-Nash equilibrium profile (σ1, σ2) in this case is defined analogously to (1). In other words, none of the players can improve their utility by more than ε by deviating unilaterally. If the strategies are an ε-NE in each subgame starting in an arbitrary game state, the equilibrium strategy is termed subgame perfect. If σ = (σ1, σ2) is an exact Nash equilibrium (i.e., an ε-NE with ε = 0), then we denote the unique value of the game v^h0 = u(σ1, σ2). For any h ∈ H, we denote by v^h the value of the subgame rooted in state h.
3 Simultaneous move Monte-Carlo Tree Search
Monte Carlo Tree Search (MCTS) is a simulation-based state space search algorithm often used
in game trees. The nodes in the tree represent game states. The main idea is to iteratively run
simulations to a terminal state, incrementally growing a tree rooted at the initial state of the game. In
its simplest form, the tree is initially empty and a single leaf is added each iteration. Each simulation
starts by visiting nodes in the tree, selecting which actions to take based on a selection function and
information maintained in the node. Consequently, it transitions to the successor states. When a
node is visited whose immediate children are not all in the tree, the node is expanded by adding a
new leaf to the tree. Then, a rollout policy (e.g., random action selection) is applied from the new
leaf to a terminal state. The outcome of the simulation is then returned as a reward to the new leaf
and the information stored in the tree is updated.
In Simultaneous Move MCTS (SM-MCTS), the main difference is that a joint action of both players
is selected. The algorithm has been previously applied, for example in the game of Tron [12], Urban
Rivals [11], and in general game-playing [10]. However, guarantees of convergence to NE remain
unknown. The convergence to a NE depends critically on the selection and update policies applied,
which are even more non-trivial than in purely sequential games. The most popular selection policy
in this context (UCB) performs very well in some games [12], but Shafiei et al. [17] show that it
does not converge to Nash equilibrium, even in a simple one-stage simultaneous move game. In this
paper, we focus on variants of MCTS, which provably converge to (approximate) NE; hence we do
not discuss UCB any further. Instead, we describe variants of two other selection algorithms after
explaining the abstract SM-MCTS algorithm.
Algorithm 1 describes a single simulation of SM-MCTS. T represents the MCTS tree in which each state is represented by one node. Every node h maintains a cumulative reward sum over all simulations through it, X_h, and a visit count n_h, both initially set to 0. As depicted in Figure 1, a matrix of references to the children is maintained at each inner node. The critical parts of the algorithm are the updates on lines 8 and 14 and the selection on line 10. Each variant below will describe a different way to select an action and update a node. The standard way of defining the value to send back is RetVal(u1, X_h, n_h) = u1, but we also discuss RetVal(u1, X_h, n_h) = X_h/n_h, which is required for the formal analysis in Section 4. We denote this variant of the algorithms
 1: SM-MCTS(node h)
 2:   if h ∈ Z then return u1(h)
 3:   else if h ∈ T and ∃(i, j) ∈ A1(h) × A2(h) not previously selected then
 4:     Choose one of the previously unselected (i, j) and h′ ← T(h, i, j)
 5:     Add h′ to T
 6:     u1 ← Rollout(h′)
 7:     X_h′ ← X_h′ + u1; n_h′ ← n_h′ + 1
 8:     Update(h, i, j, u1)
 9:     return RetVal(u1, X_h′, n_h′)
10:   (i, j) ← Select(h)
11:   h′ ← T(h, i, j)
12:   u1 ← SM-MCTS(h′)
13:   X_h ← X_h + u1; n_h ← n_h + 1
14:   Update(h, i, j, u1)
15:   return RetVal(u1, X_h, n_h)

Algorithm 1: Simultaneous Move Monte Carlo Tree Search
with an additional “M” for mean. Algorithm 1 and the variants below are expressed from player 1’s perspective. Player 2 does the same except using negated utilities.
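The control flow of Algorithm 1 can be sketched in a few lines of Python. This is an illustrative toy, not the authors’ implementation: it assumes the tree is already fully built (as the formal analysis in Section 4 does), it therefore skips the expansion and rollout phase, and it uses a uniform random Select placeholder where a regret-minimizing rule such as RM or Exp3 would plug in. The encoding of the game as nested matrices is our own assumption.

```python
import random

random.seed(1)

# Toy stacked matrix game of depth 2: an entry is either a terminal utility
# for player 1 (a float) or a nested matrix (a subgame), mirroring Figure 1.
GAME = [
    [[[1.0, 0.0], [0.0, 1.0]], 0.5],
    [0.25, [[0.5, 1.0], [0.0, 0.5]]],
]

stats = {}  # id(node) -> {"X": cumulative reward sum, "n": visit count}

def is_terminal(node):
    return isinstance(node, float)

def select_uniform(node):
    # Placeholder Select: an epsilon-Hannan consistent rule plugs in here.
    return random.randrange(len(node)), random.randrange(len(node[0]))

def sm_mcts(node):
    """One simulation of SM-MCTS with RetVal(u1, X, n) = u1."""
    if is_terminal(node):                 # line 2: terminal state
        return node
    i, j = select_uniform(node)           # line 10: joint action selection
    u1 = sm_mcts(node[i][j])              # line 12: recurse into the subgame
    s = stats.setdefault(id(node), {"X": 0.0, "n": 0})
    s["X"] += u1                          # line 13: update sum and count
    s["n"] += 1
    return u1                             # the "M" variant would return X/n

for _ in range(5000):
    sm_mcts(GAME)

root = stats[id(GAME)]
print(round(root["X"] / root["n"], 2))  # average payoff under uniform sampling
```

Replacing `select_uniform` with one of the two selection functions described next turns this skeleton into the RM or Exp3 instantiation of the template.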
3.1 Regret matching
This variant applies regret matching [15] to the current estimated matrix game at each stage. Suppose iterations are numbered s ∈ {1, 2, 3, . . .} and that at each iteration and each inner node h there is a mixed strategy σ^s(h) used by each player, initially set to uniform random: σ^0(h, i) = 1/|A(h)|. Each player maintains a cumulative regret r_h[i] for having played σ^s(h) instead of i ∈ A1(h). The values are initially set to 0.
On iteration s, the Select function (line 10 in Algorithm 1) first builds the player’s current strategies from the cumulative regret. Define x^+ = max(x, 0) and

σ^s(h, a) = r_h^+[a] / R_sum^+  if R_sum^+ > 0, and σ^s(h, a) = 1/|A1(h)| otherwise,  where R_sum^+ = Σ_{i∈A1(h)} r_h^+[i].   (3)

The strategy is computed by assigning weight to actions proportionally to the regret of not having taken them over the long term. To ensure exploration, a γ-on-policy sampling procedure is used, choosing action i with probability γ/|A(h)| + (1 − γ)σ^s(h, i), for some γ > 0.
The Updates on lines 8 and 14 add the regret accumulated at the iteration to the regret tables r_h. Suppose joint action (i1, j2) is sampled from the selection policy and utility u1 is returned from the recursive call on line 12. Define x(h, i, j) = X_{h_ij} if (i, j) ≠ (i1, j2), and u1 otherwise. The updates to the regret are:

∀i′ ∈ A1(h): r_h[i′] ← r_h[i′] + (x(h, i′, j2) − u1).
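A minimal sketch of this selection and update rule in Python. This is our own illustration, not the authors’ code; in particular, the full-information payoff vector passed to `rm_update` is an assumption made for brevity, since in SM-MCTS the non-sampled entries x(h, i, j) come from the stored child estimates:

```python
import random

def rm_strategy(regrets):
    """Regret-matching strategy of Equation (3): positive parts of the
    cumulative regrets, normalized; uniform if no regret is positive."""
    pos = [max(r, 0.0) for r in regrets]
    total = sum(pos)
    if total > 0:
        return [p / total for p in pos]
    return [1.0 / len(regrets)] * len(regrets)

def rm_sample(regrets, gamma, rng=random):
    """gamma-on-policy sampling: explore uniformly with probability gamma."""
    strat = rm_strategy(regrets)
    n = len(strat)
    mixed = [gamma / n + (1.0 - gamma) * p for p in strat]
    return rng.choices(range(n), weights=mixed)[0]

def rm_update(regrets, payoffs, received):
    """Accumulate r[i'] += x(h, i', j) - u1 for every action i' (Section 3.1).

    payoffs[i] plays the role of x(h, i, j): the payoff action i would have
    obtained against the opponent's sampled action; received is u1.
    """
    for i, x in enumerate(payoffs):
        regrets[i] += x - received

regrets = [0.0, 0.0, 0.0]
rm_update(regrets, payoffs=[0.2, 0.9, 0.4], received=0.2)  # action 0 played
print(rm_strategy(regrets))  # weight shifts to the actions we regret missing
```

After one update the strategy already puts most of its weight on the action with the largest positive regret, which is exactly the proportional assignment described above.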
3.2 Exp3
In Exp3 [16], a player maintains an estimate of the sum of rewards, denoted x_{h,i}, and visit counts n_{h,i} for each of their actions i ∈ A1. The joint action selected on line 10 is composed of an action independently selected for each player. The probability of sampling action a in Select is

σ^s(h, a) = (1 − γ) exp(η w_{h,a}) / Σ_{i∈A1(h)} exp(η w_{h,i}) + γ/|A1(h)|,  where η = γ/|A1(h)| and w_{h,i} = x_{h,i}.¹   (4)

The Update after selecting actions (i, j) and obtaining a result (u1, u2) increments the visit count (n_{h,i} ← n_{h,i} + 1) and adds to the corresponding reward sum estimate the reward divided by the probability that the action was played by the player (x_{h,i} ← x_{h,i} + u1/σ^s(h, i)). Dividing the value by the probability of selecting the corresponding action makes x_{h,i} an estimate of the sum of rewards over all iterations, not only the ones where action i was selected.
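The sampling distribution (4) and the importance-weighted update can be sketched as follows. This is an illustrative sketch with hypothetical helper names, not the authors’ code; the max-shift of the weights is the numerical-stability reformulation discussed in the footnote of this section and leaves the distribution unchanged:

```python
import math

def exp3_probs(x, gamma):
    """Sampling distribution of Equation (4) with eta = gamma / |A1(h)|."""
    n = len(x)
    eta = gamma / n
    shift = max(x)  # subtracting the max avoids overflow in exp()
    expw = [math.exp(eta * (xi - shift)) for xi in x]
    total = sum(expw)
    return [(1.0 - gamma) * e / total + gamma / n for e in expw]

def exp3_update(x, n_visits, action, reward, probs):
    """Importance-weighted update: x[i] estimates the reward sum over *all*
    iterations, not only those where action i was selected."""
    n_visits[action] += 1
    x[action] += reward / probs[action]

x, n_visits, gamma = [0.0, 0.0], [0, 0], 0.1
p = exp3_probs(x, gamma)                       # uniform at the start
exp3_update(x, n_visits, action=0, reward=1.0, probs=p)
print(p, x)
```

With zero initial weights the distribution is uniform, so a reward of 1.0 for action 0 is stored as 1.0/0.5 = 2.0, keeping `x[0]` an unbiased estimate of that action’s reward sum.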
4 Formal analysis
We focus on the eventual convergence to approximate NE, which allows us to make an important
simplification: We disregard the incremental building of the tree and assume we have built the
complete tree. We show that this will eventually happen with probability 1 and that the statistics
collected during the tree building phase cannot prevent the eventual convergence.
The main idea of the proof is to show that the algorithm will eventually converge close to the optimal
strategy in the leaf nodes and inductively prove that it will converge also in higher levels of the tree.
In order to do that, after introducing the necessary notation, we start by analyzing the situation in
simple matrix games, which corresponds mainly to the leaf nodes of the tree. In the inner nodes of
the tree, the observed payoffs are imprecise because of the stochastic nature of the selection functions
and bias caused by exploration, but the error can be bounded. Hence, we continue with analysis of
repeated matrix games with bounded error. Finally, we compose the matrices with bounded errors in
¹ In practice, we set w_{h,i} = x_{h,i} − max_{i′∈A1(h)} x_{h,i′}, since exp(x_{h,i}) can easily cause numerical overflow. This reformulation computes the same values as the original algorithm but is more numerically stable.
a multi-stage setting to prove convergence guarantees of SM-MCTS. Any proofs that are omitted in
the paper are included in the appendix available in the supplementary material and on http://arxiv.org
(arXiv:1310.8613).
4.1 Notation and definitions
Consider a repeatedly played matrix game where at time s players 1 and 2 choose actions i_s and j_s respectively. We will use the convention (|A1|, |A2|) = (m, n). Define

G(t) = Σ_{s=1}^{t} a_{i_s j_s},   g(t) = G(t)/t,   and   G_max(t) = max_{i∈A1} Σ_{s=1}^{t} a_{i j_s},

where G(t) is the cumulative payoff, g(t) is the average payoff, and G_max(t) is the maximum cumulative payoff over all actions, each to player 1 and at time t. We also denote g_max(t) = G_max(t)/t, and by R(t) = G_max(t) − G(t) and r(t) = g_max(t) − g(t) the cumulative and average regrets. For actions i of player 1 and j of player 2, we denote by t_i, t_j the number of times these actions were chosen up to time t, and by t_ij the number of times both of these actions were chosen at once. By empirical frequencies we mean the strategy profile (σ̂1(t), σ̂2(t)) ∈ [0, 1]^m × [0, 1]^n given by the formulas σ̂1(t, i) = t_i/t and σ̂2(t, j) = t_j/t. By average strategies, we mean the strategy profile (σ̄1(t), σ̄2(t)) given by the formulas σ̄1(t, i) = Σ_{s=1}^{t} σ1^s(i)/t and σ̄2(t, j) = Σ_{s=1}^{t} σ2^s(j)/t, where σ1^s, σ2^s are the strategies used at time s.
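The two objects just defined can be computed directly from a play history. A small sketch (our own illustration) makes the distinction concrete; they generally differ at finite t and, as shown below in this section, coincide only in the limit:

```python
def empirical_frequency(actions, n_actions):
    """sigma_hat(t, i) = t_i / t from the realized action sequence."""
    t = len(actions)
    return [actions.count(i) / t for i in range(n_actions)]

def average_strategy(strategies):
    """sigma_bar(t, i) = (1/t) * sum_s sigma^s(i) from the mixed strategies used."""
    t = len(strategies)
    n = len(strategies[0])
    return [sum(s[i] for s in strategies) / t for i in range(n)]

played = [0, 1, 0, 0]          # realized actions over t = 4 rounds
used = [[0.5, 0.5]] * 4        # mixed strategies actually used each round
print(empirical_frequency(played, 2), average_strategy(used))
```

Here the empirical frequencies are [0.75, 0.25] while the average strategy stays [0.5, 0.5]: the former is a random realization of the latter.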
Definition 4.1. We say that a player is ε-Hannan consistent if, for any payoff sequence (e.g., against any opponent strategy), lim sup_{t→∞} r(t) ≤ ε holds almost surely. An algorithm A is ε-Hannan consistent if a player who chooses his actions based on A is ε-Hannan consistent.
Hannan consistency (HC) is a commonly studied property in the context of online learning in repeated (single-stage) decisions. In particular, RM and variants of Exp3 have been shown to be Hannan consistent in matrix games [15, 16]. In order to ensure that the MCTS algorithm will eventually visit each node infinitely many times, we need the selection function to satisfy the following property.
Definition 4.2. We say that A is an algorithm with guaranteed exploration if, for players 1 and 2 both using A for action selection, lim_{t→∞} t_ij = ∞ holds almost surely ∀(i, j) ∈ A1 × A2.
Note that most of the HC algorithms, namely RM and Exp3, guarantee exploration without any modification. If there is an algorithm without this property, it can be adjusted in the following way.
Definition 4.3. Let A be an algorithm used for choosing an action in a matrix game M. For a fixed exploration parameter γ ∈ (0, 1), we define a modified algorithm A^γ as follows: at each time, with probability (1 − γ) run one iteration of A, and with probability γ choose the action uniformly at random over the available actions, without updating any of the variables belonging to A.
4.2 Repeated matrix games
First we show that ε-Hannan consistency is not lost due to the additional exploration.
Lemma 4.4. Let A be an ε-Hannan consistent algorithm. Then A^γ is an (ε + γ)-Hannan consistent algorithm with guaranteed exploration.
In previous works on MCTS in our class of games, RM variants generally suggested using the average strategy and Exp3 variants the empirical frequencies to obtain the strategy to be played. The following lemma says that there is eventually no difference between the two.
Lemma 4.5. As t approaches infinity, the empirical frequencies and average strategies will almost surely be equal. That is, lim sup_{t→∞} max_{i∈A1} |σ̂1(t, i) − σ̄1(t, i)| = 0 holds with probability 1.
The proof is a consequence of the martingale version of the Strong Law of Large Numbers.
It is well known that two Hannan consistent players will eventually converge to a NE (see [18, p. 11] and [19]). We prove a similar result for the approximate versions of the notions.
Lemma 4.6. Let ε > 0 be a real number. If both players in a matrix game with value v are ε-Hannan consistent, then the following inequalities hold for the empirical frequencies almost surely:

lim sup_{t→∞} u(br, σ̂2(t)) ≤ v + 2ε   and   lim inf_{t→∞} u(σ̂1(t), br) ≥ v − 2ε.   (5)
The proof shows that if the value caused by the empirical frequencies were outside of the interval infinitely many times with positive probability, it would be in contradiction with the definition of ε-HC. The following corollary is then a direct consequence of this lemma.
Corollary 4.7. If both players in a matrix game are ε-Hannan consistent, then there almost surely exists t0 ∈ ℕ, such that for every t ≥ t0 the empirical frequencies and average strategies form a (4ε + δ)-equilibrium for arbitrarily small δ > 0.
The constant 4ε is caused by going from a pair of strategies with best responses within 2ε of the game value guaranteed by Lemma 4.6 to the approximate NE, which multiplies the distance by two.
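As an illustration of this corollary (our own sketch, not an experiment from the paper), two regret-matching players in self-play on matching pennies produce empirical frequencies approaching the uniform equilibrium. For simplicity the sketch uses the full-information regret update on the known payoff matrix, which is a stronger feedback model than the sampled one used inside SM-MCTS:

```python
import random

random.seed(0)

# Matching pennies; a_ij is the payoff to player 1, player 2 receives 1 - a_ij.
M = [[1.0, 0.0], [0.0, 1.0]]
GAMMA, T = 0.05, 20000

def rm_play(regrets):
    """Regret matching with gamma-uniform exploration, as in Section 3.1."""
    pos = [max(r, 0.0) for r in regrets]
    tot = sum(pos)
    strat = [p / tot for p in pos] if tot > 0 else [0.5, 0.5]
    mixed = [GAMMA / 2 + (1 - GAMMA) * p for p in strat]
    return random.choices([0, 1], weights=mixed)[0]

r1, r2 = [0.0, 0.0], [0.0, 0.0]
counts1 = [0, 0]
for _ in range(T):
    i = rm_play(r1)
    j = rm_play(r2)
    counts1[i] += 1
    for a in (0, 1):  # full-information regret update on the known matrix
        r1[a] += M[a][j] - M[i][j]          # player 1's regret for action a
        r2[a] += M[i][j] - M[i][a]          # player 2's (utility 1 - a_ij)

freq1 = [c / T for c in counts1]
print(freq1)  # close to the equilibrium [0.5, 0.5]
```

The unique equilibrium of this game is uniform, and the empirical frequencies of player 1 end up near [0.5, 0.5], consistent with the (4ε + δ) guarantee.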
4.3 Repeated matrix games with bounded error
After defining repeated games with error, we present a variant of Lemma 4.6 for these games.
Definition 4.8. We define M(t) = (a_ij(t)) to be a game in which, if players choose actions i and j, they receive randomized payoffs a_ij(t, (i1, . . . , i_{t−1}), (j1, . . . , j_{t−1})). We will denote these simply as a_ij(t), but in fact they are random variables with values in [0, 1] and their distribution at time t depends on the previous choices of actions. We say that M(t) = (a_ij(t)) is a repeated game with error η, if there is a matrix game M = (a_ij) and almost surely exists t0 ∈ ℕ, such that |a_ij(t) − a_ij| < η holds for all t ≥ t0.
In this context, we will denote G(t) = Σ_{s∈{1,...,t}} a_{i_s j_s}(s) etc., and use a tilde for the corresponding variables without errors (G̃(t) = Σ_s a_{i_s j_s} etc.). Symbols v and u(·, ·) will still be used with respect to M without errors. The following lemma states that even with the errors, ε-HC algorithms still converge to an approximate NE of the game.
Lemma 4.9. Let ε > 0 and c ≥ 0. If M(t) is a repeated game with error cε and both players are ε-Hannan consistent, then the following inequalities hold almost surely:

lim sup_{t→∞} u(br, σ̂2) ≤ v + 2(c + 1)ε,   lim inf_{t→∞} u(σ̂1, br) ≥ v − 2(c + 1)ε   (6)

and   v − (c + 1)ε ≤ lim inf_{t→∞} g(t) ≤ lim sup_{t→∞} g(t) ≤ v + (c + 1)ε.   (7)

The proof is similar to the proof of Lemma 4.6. It needs an additional claim that if the algorithm is ε-HC with respect to the observed values with errors, it still has a bounded regret with respect to the exact values. In the same way as in the previous subsection, a direct consequence of the lemma is the convergence to an approximate Nash equilibrium.
Theorem 4.10. Let ε, c > 0 be real numbers. If M(t) is a repeated game with error cε and both players are ε-Hannan consistent, then for any δ > 0 there almost surely exists t0 ∈ ℕ, such that for all t ≥ t0 the empirical frequencies form a (4(c + 1)ε + δ)-equilibrium of the game M.
4.4 Perfect-information extensive-form games with simultaneous moves
Now we have all the necessary components to prove the main theorem.
Theorem 4.11. Let (M^h)_{h∈H} be a game with perfect information and simultaneous moves with maximal depth D. Then for every ε-Hannan consistent algorithm A with guaranteed exploration and arbitrarily small δ > 0, there almost surely exists t0, so that the average strategies (σ̄1(t), σ̄2(t)) form a subgame perfect (2D²ε + δ)-Nash equilibrium for all t ≥ t0.
Once we have established the convergence of the ε-HC algorithms in games with errors, we can proceed by induction. The games in the leaf nodes are simple matrix games, so they will eventually converge and they will return the mean reward values within a bounded distance from the actual value of the game (Lemma 4.9 with c = 0). As a result, in the level just above the leaf nodes, the ε-HC algorithms are playing a matrix game with a bounded error and, by Lemma 4.9, they will also eventually return the mean values within a bounded interval. On level d from the leaf nodes, the errors of the returned values will be in the order of dε and players can gain 2dε by deviating. Summing the possible gain of deviations on each level leads to the bound in the theorem. The subgame perfection of the equilibrium results from the fact that for proving the bound on the approximation in the whole game (i.e., in the root of the game tree), a smaller bound on the approximation of the equilibrium is proven for all subgames in the induction. The formal proof is presented in the appendix.
[Figure 2: Exploitability of strategies given by the empirical frequencies of Regret matching with propagating values (RM) and means (RMM) for various depths and branching factors. Top row: depth 2 with BF = 2, 3, 5; bottom row: BF = 2 with depth 2, 3, 4. Both axes (iterations t vs. exploitability) are logarithmic.]
5 Empirical analysis
In this section, we first evaluate the influence of propagating the mean values instead of the current sample value in MCTS on the speed of convergence to a Nash equilibrium. Afterwards, we try to assess the convergence rate of the algorithms in the worst case. In most of the experiments, we use Regret matching as the selection strategy at the base of the SM-MCTS algorithm, because a superior convergence rate bound is known for this algorithm and it has been reported to be very successful also empirically in [20]. We always use the empirical frequencies to create the evaluated strategy and measure the exploitability of the first player’s strategy (i.e., v^h0 − u(σ̂1, br)).
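This exploitability measure is straightforward to compute for a single matrix game, since player 2’s best response can always be taken pure. A small sketch (our own illustration, with the game value supplied by hand):

```python
def exploitability(M, s1, value):
    """v - u1(s1, br): the best response of player 2 minimizes player 1's
    payoff and can be taken pure, so it is a minimum over the columns of M."""
    cols = range(len(M[0]))
    u_vs_br = min(sum(s1[i] * M[i][j] for i in range(len(s1))) for j in cols)
    return value - u_vs_br

# Matching pennies has value 0.5: the uniform strategy is unexploitable,
# while a pure strategy gives the whole advantage away.
M = [[1.0, 0.0], [0.0, 1.0]]
print(exploitability(M, [0.5, 0.5], 0.5), exploitability(M, [1.0, 0.0], 0.5))
```

For deeper stacked matrix games, the same quantity requires a best-response traversal of the whole tree, which is how the curves in Figures 2 and 3 would be obtained.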
5.1 Influence of propagation of the mean
The formal analysis presented in the previous section requires the algorithms to return the mean of all the previous samples instead of the value of the current sample. The latter is generally the case in previous works on SM-MCTS [20, 11]. We run both variants with the Regret matching algorithm on a set of randomly generated games parameterized by depth and branching factor. The branching factor was always the same for both players. For the following experiments, the utility values are randomly selected uniformly from the interval [0, 1]. Each experiment uses 100 random games and 100 runs of the algorithm.
Figure 2 presents how the exploitability of the strategies produced by Regret matching with propagation of the mean (RMM) and current sample value (RM) develops with an increasing number of iterations. Note that both axes are in logarithmic scale. The top graph is for depth 2, different branching factors (BF) and γ ∈ {0.05, 0.1, 0.2}. The bottom one presents different depths for BF = 2. The results show that both methods converge to the approximate Nash equilibrium of the game. RMM converges slightly slower in all cases. The difference is very small in small games, but becomes more apparent in games with larger depth.
5.2 Empirical convergence rate
Although the formal analysis guarantees convergence to an ε-NE of the game, the rate of the convergence is not given. Therefore, we give an empirical analysis of the convergence and specifically focus on the cases that exhibited the slowest convergence in a set of evaluated games.
[Figure 3: The games with maximal exploitability after 1000 iterations with RM (left, WC_RM) and RMM (right, WC_RMM), and the corresponding exploitability of all evaluated methods (Exp3, Exp3M, RM, RMM) over iterations t. Both axes are logarithmic.]
We performed a brute-force search through all games of depth 2 with branching factor 2 and utilities from the set {0, 0.5, 1}. We made 100 runs of RM and RMM with exploration set to γ = 0.05 for 1000 iterations and computed the mean exploitability of the strategy. The games with the highest exploitability for each method are presented in Figure 3. These games are not guaranteed to be the exact worst case, because of possible error caused by only 100 runs of the algorithm, but they are representatives of particularly difficult cases for the algorithms. In general, the games that are most difficult for one method are difficult also for the other. Note that we also systematically searched for games in which RMM performs better than RM, but this was never the case with a sufficient number of runs of the algorithms in the selected games.
Figure 3 shows the convergence of RM and Exp3 with propagation of the current sample values and of the mean values (RMM and Exp3M) on the empirically worst games for the RM variants. The RM variants converge to the minimal achievable values (0.0119 and 0.0367) after a million iterations. These values correspond exactly to the exploitability of the optimal strategy combined with the uniform exploration with probability 0.05. The Exp3 variants most likely converge to the same values; however, they did not fully make it within the first million iterations in WC_RM. The convergence rate of all the variants is similar, and the variants with propagating means always converge a little slower.
6 Conclusion
We present the first formal analysis of convergence of MCTS algorithms in zero-sum extensive-form games with perfect information and simultaneous moves. We show that any ε-Hannan consistent algorithm can be used to create an MCTS algorithm that provably converges to an approximate Nash equilibrium of the game. This justifies the usage of MCTS as an approximation algorithm for this class of games from the perspective of algorithmic game theory. We complement the formal analysis with an experimental evaluation which shows that other MCTS variants for this class of games, which are not covered by the proof, also converge to the approximate NE of the game. Hence, we believe that the presented proofs can be generalized to include these cases as well. Besides this, we will focus our future research on providing finite-time convergence bounds for these algorithms and on generalizing the results to more general classes of extensive-form games with imperfect information.
Acknowledgments
This work is partially funded by the Czech Science Foundation (grant no. P202/12/2054), the Grant Agency of the Czech Technical University in Prague (grant no. OHK3-060/12), and the Netherlands Organisation for Scientific Research (NWO) in the framework of the project Go4Nature, grant number 612.000.938. The access to computing and storage facilities owned by parties and projects contributing to the National Grid Infrastructure MetaCentrum, provided under the programme “Projects of Large Infrastructure for Research, Development, and Innovations” (LM2010005), is appreciated.
References
[1] Manish Jain, Dmytro Korzhyk, Ondrej Vanek, Vincent Conitzer, Michal Pechoucek, and Milind Tambe. A double oracle algorithm for zero-sum security games. In Tenth International Conference on Autonomous Agents and Multiagent Systems (AAMAS 2011), pages 327–334, 2011.
[2] Michael Johanson, Nolan Bard, Neil Burch, and Michael Bowling. Finding optimal abstract strategies in extensive-form games. In Proceedings of the Twenty-Sixth Conference on Artificial Intelligence (AAAI-12), pages 1371–1379, 2012.
[3] S. M. Ross. Goofspiel – the game of pure strategy. Journal of Applied Probability, 8(3):621–625, 1971.
[4] Glenn C. Rhoads and Laurent Bartholdi. Computer solution to the game of pure strategy. Games, 3(4):150–156, 2012.
[5] Michael L. Littman. Markov games as a framework for multi-agent reinforcement learning. In Proceedings of the Eleventh International Conference on Machine Learning (ICML-1994), pages 157–163. Morgan Kaufmann, 1994.
[6] M. Genesereth and N. Love. General game-playing: Overview of the AAAI competition. AI Magazine, 26:62–72, 2005.
[7] Michael Buro. Solving the Oshi-Zumo game. In Proceedings of Advances in Computer Games 10, pages 361–366, 2003.
[8] Abdallah Saffidine, Hilmar Finnsson, and Michael Buro. Alpha-beta pruning for games with simultaneous moves. In Proceedings of the Twenty-Sixth Conference on Artificial Intelligence (AAAI-12), pages 556–562, 2012.
[9] Branislav Bosansky, Viliam Lisy, Jiri Cermak, Roman Vitek, and Michal Pechoucek. Using double-oracle method and serialized alpha-beta search for pruning in simultaneous moves games. In Proceedings of the Twenty-Third International Joint Conference on Artificial Intelligence (IJCAI), pages 48–54, 2013.
[10] H. Finnsson and Y. Björnsson. Simulation-based approach to general game-playing. In The Twenty-Third AAAI Conference on Artificial Intelligence, pages 259–264. AAAI Press, 2008.
[11] Olivier Teytaud and Sébastien Flory. Upper confidence trees with short term partial information. In Applications of Evolutionary Computation (EvoApplications 2011), Part I, volume 6624 of LNCS, pages 153–162, Berlin, Heidelberg, 2011. Springer-Verlag.
[12] Pierre Perick, David L. St-Pierre, Francis Maes, and Damien Ernst. Comparison of different selection strategies in Monte-Carlo tree search for the game of Tron. In Proceedings of the IEEE Conference on Computational Intelligence and Games (CIG), pages 242–249, 2012.
[13] Hilmar Finnsson. Simulation-Based General Game Playing. PhD thesis, Reykjavik University, 2012.
[14] L. Kocsis and C. Szepesvári. Bandit-based Monte Carlo planning. In 15th European Conference on Machine Learning, volume 4212 of LNCS, pages 282–293, 2006.
[15] S. Hart and A. Mas-Colell. A simple adaptive procedure leading to correlated equilibrium. Econometrica, 68(5):1127–1150, 2000.
[16] Peter Auer, Nicolò Cesa-Bianchi, Yoav Freund, and Robert E. Schapire. The nonstochastic multiarmed bandit problem. SIAM Journal on Computing, 32(1):48–77, 2002.
[17] M. Shafiei, N. R. Sturtevant, and J. Schaeffer. Comparing UCT versus CFR in simultaneous games. In Proceedings of the IJCAI Workshop on General Game-Playing (GIGA), pages 75–82, 2009.
[18] Kevin Waugh. Abstraction in large extensive games. Master's thesis, University of Alberta, 2009.
[19] A. Blum and Y. Mansour. Learning, regret minimization, and equilibria. In Noam Nisan, Tim Roughgarden, Eva Tardos, and Vijay V. Vazirani, editors, Algorithmic Game Theory, chapter 4. Cambridge University Press, 2007.
[20] Marc Lanctot, Viliam Lisý, and Mark H.M. Winands. Monte Carlo tree search in simultaneous move games with applications to Goofspiel. In Workshop on Computer Games at IJCAI, 2013.
Estimation Bias in Multi-Armed Bandit Algorithms
for Search Advertising
Tao Qin
Microsoft Research Asia
[email protected]
Min Xu
Machine Learning Department
Carnegie Mellon University
[email protected]
Tie-Yan Liu
Microsoft Research Asia
[email protected]
Abstract
In search advertising, the search engine needs to select the most profitable advertisements to display, which can be formulated as an instance of online learning
with partial feedback, also known as the stochastic multi-armed bandit (MAB)
problem. In this paper, we show that the naive application of MAB algorithms
to search advertising for advertisement selection will produce sample selection
bias that harms the search engine by decreasing expected revenue and "estimation of the largest mean" (ELM) bias that harms the advertisers by increasing
game-theoretic player-regret. We then propose simple bias-correction methods
with benefits to both the search engine and the advertisers.
1 Introduction
Search advertising, also known as sponsored search, has been formulated as a multi-armed bandit
(MAB) problem [11], in which the search engine needs to choose one ad from a pool of candidates to
maximize some objective (e.g., its revenue). To select the best ad from the pool, one needs to know
the quality of each ad, which is usually measured by the probability that a random user will click on
the ad. Stochastic MAB algorithms provide an attractive way to select the high quality ads, and the
regret guarantee on MAB algorithms ensures that we do not display the low quality ads too many
times.
When applied to search advertising, a MAB algorithm needs to not only identify the best ad (suppose
there is only one ad slot for simplicity) but also accurately learn the click probabilities of the top two
ads, which will be used by the search engine to charge a fair fee to the winner advertiser according
to the generalized second price auction mechanism [6]. If the probabilities are estimated poorly, the
search engine may charge too low a payment to the advertisers and lose revenue, or it may charge
too high a payment which would encourage the advertisers to engage in strategic behavior. However,
most existing MAB algorithms only focus on the identification of the best arm; if naively applied to
search advertising, there is no guarantee to get an accurate estimation for the click probabilities of
the top two ads.
Thus, search advertising, with its special model and goals, merits specialized algorithmic design and
analysis while using MAB algorithms. Our work is a step in this direction. We show in particular that
naive ways of combining click probability estimation and MAB algorithms lead to sample selection
bias that harms the search engine?s revenue. We present a simple modification to MAB algorithms
that eliminates such a bias and provably achieves almost the revenue as if an oracle gives us the
actual click probabilities. We also analyze the game theoretic notion of incentive compatibility (IC)
and show that low-regret MAB algorithms may have worse IC properties than high-regret uniform exploration algorithms, so that a trade-off may be required.
2 Setting
Each time a user visits a webpage, which we call an impression, the search engine runs a generalized second price (SP) auction [6] to determine which ads to show to the user and how much to
charge advertisers if their ads are clicked. We will in this paper suppose that we have only one ad
slot in which we can display one ad. The multiple slot setting is more realistic but also much more
complicated to analyze; we leave the extension as future work. In the single slot case, generalized
SP auction becomes simply the well known second price auction, which we describe below.
Assume there are $n$ ads. Let $b_k$ denote the bid of advertiser $k$ (or ad $k$), which is the maximum amount of money advertiser $k$ is willing to pay for a click, and let $\theta_k$ denote the click-through rate (CTR) of ad $k$, which is the probability that a random user will click on it. The SP auction ranks ads by the products of their CTRs and bids. Assume the advertisers are numbered in decreasing order of $b_i\theta_i$: $b_1\theta_1 > b_2\theta_2 > \cdots > b_n\theta_n$. Then advertiser 1 wins the ad slot and pays $b_2\theta_2/\theta_1$ for each click on his or her ad. This payment formula is chosen to satisfy the game-theoretic notion of incentive compatibility (see Chapter 9 of [10] for a good introduction). Therefore, the per-impression expected revenue of the SP auction is $b_2\theta_2$.
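To make the mechanism concrete, the following sketch (our own illustration, not from the paper; function and variable names are invented) runs a single-slot SP auction with known CTRs:

```python
def second_price_auction(bids, ctrs):
    """Single-slot SP auction: rank ads by bid * CTR, charge the winner
    the smallest per-click price that would keep it in first place."""
    scores = [b * t for b, t in zip(bids, ctrs)]
    winner = max(range(len(bids)), key=lambda k: scores[k])
    runner_up = max((k for k in range(len(bids)) if k != winner),
                    key=lambda k: scores[k])
    payment_per_click = scores[runner_up] / ctrs[winner]
    # Per-impression expected revenue equals CTR_winner * price = b_(2) * theta_(2).
    expected_revenue = ctrs[winner] * payment_per_click
    return winner, payment_per_click, expected_revenue

# Ads pre-sorted so that b1*theta1 > b2*theta2 > b3*theta3.
winner, price, rev = second_price_auction([2.0, 1.5, 1.0], [0.3, 0.2, 0.1])
# winner = 0, and rev equals b2*theta2 = 0.3
```

Note how the winner's payment depends on the runner-up's score, which is why accurate CTR estimates for the top two ads matter.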
2.1 A Two-Stage Framework
Since the CTRs are unknown to both advertisers and the search engine, the search engine needs to
estimate them through some learning process. We adopt the same two-stage framework as in [12, 2],
which is composed of a CTR learning stage lasting for the first $T$ impressions, followed by an SP auction stage lasting for the remaining $T_{end} - T$ impressions.
1. Advertisers $1, \ldots, n$ submit bids $b_1, \ldots, b_n$.
2. CTR learning stage: for each impression $t = 1, \ldots, T$, display ad $k_t \in \{1, \ldots, n\}$ chosen by MAB algorithm M. Estimate $\hat\theta_i$ based on the click records from this stage.
3. SP auction stage: for $t = T+1, \ldots, T_{end}$, run the SP auction using the estimators $\hat\theta_i$: display the ad that maximizes $b_k\hat\theta_k$ and charge $b_{(2)}\hat\theta_{(2)}/\hat\theta_{(1)}$ per click. Here $(s)$ indicates the ad with the $s$-th largest score $b_i\hat\theta_i$.
One can see that in this framework, the estimators $\hat\theta_i$ are computed at the end of the first stage and kept unchanged in the second stage. Recent work [2] suggests one could also run the MAB algorithm and keep updating the estimators until $T_{end}$. However, it is hard to compute a fair payment when ads are displayed by a MAB algorithm rather than the SP auction, and a randomized payment is proposed in [2]. Their scheme, though theoretically interesting, is impractical because it is difficult for advertisers to accept a randomized payment rule. We thus adhere to the above framework and do not update the $\hat\theta_i$ in the second stage.
It is important to note that in search advertising, we measure the quality of CTR estimators not by mean-squared error but by criteria important to advertising. One criterion is the per-impression expected revenue (defined below) in rounds $T+1, \ldots, T_{end}$. Two types of estimation errors can harm the expected revenue: (1) the ranking may be incorrect, i.e. $\arg\max_k b_k\hat\theta_k \neq \arg\max_k b_k\theta_k$, and (2) the estimators may be biased. Another criterion is incentive compatibility, a more complicated concept whose definition and discussion we defer to Section 4. We do not analyze the revenue and incentive compatibility properties of the first CTR learning stage because of its complexity and brief duration; we assume that $T_{end} \gg T$.
Definition 2.1. Let $(1) := \arg\max_k b_k\hat\theta_k$ and $(2) := \arg\max_{k \neq (1)} b_k\hat\theta_k$. We define the per-impression empirical revenue as $\widehat{rev} := \theta_{(1)}\, b_{(2)}\hat\theta_{(2)}/\hat\theta_{(1)}$ and the per-impression expected revenue as $E[\widehat{rev}]$, where the expectation is taken over the CTR estimators. We then define the per-impression expected revenue loss as $b_2\theta_2 - E[\widehat{rev}]$, where $b_2\theta_2$ is the oracle revenue we would obtain if we knew the true click probabilities.
Choice of Estimator. We analyze the most straightforward estimator $\hat\theta_k = C_k/T_k$, where $T_k$ is the number of impressions allocated to ad $k$ in the CTR learning stage and $C_k$ is the number of clicks received by ad $k$ in that stage. This estimator is in fact biased, and we will later propose simple improvements.
2.2 Characterizing MAB Algorithms
We analyze two general classes of MAB algorithms: uniform and adaptive. Because there are many
specific algorithms for each class, we give our formal definitions by characterizing Tk , the number
of impressions assigned to each advertiser k at the end of the CTR learning stage.
Definition 2.2. We say that the learning algorithm M is uniform if, for some constant $0 < c < 1$, for all $k$ and all bid vectors $b$, with probability at least $1 - O(n/T)$:
$$T_k \;\geq\; \frac{c}{n}\, T.$$
We next describe adaptive algorithms, which have low regret because they stop allocating impressions to ad $k$ once it is certain that $b_k\theta_k < \max_{k'} b_{k'}\theta_{k'}$.
Definition 2.3. Let $b$ be a bid vector. We say that a MAB algorithm is adaptive with respect to $b$ if, with probability at least $1 - O(n/T)$, we have
$$T_1 \geq c\,T_{max} \qquad\text{and}\qquad \frac{c'\, b_k^2}{\Delta_k^2}\ln T \;\leq\; T_k \;\leq\; \min\!\Big(c\,T_{max},\; \frac{4\, b_k^2}{\Delta_k^2}\ln T\Big) \quad\text{for all } k \neq 1,$$
where $\Delta_k = b_1\theta_1 - b_k\theta_k$, $c < 1$ and $c'$ are positive constants, and $T_{max} = \max_k T_k$. For simplicity, we assume that $c$ here is the same as the $c$ in Definition 2.2; we can take the minimum of the two if they differ.
Both uniform and adaptive algorithms have been used in search advertising auctions [5, 7, 12, 2, 8]. UCB (Upper Confidence Bound) is a simple example of an adaptive algorithm.
Example 2.1 (UCB Algorithm). At round $t$, the UCB algorithm allocates the impression to the ad with the largest score, defined as
$$s_{k,t} := b_k\hat\theta_{k,t} + \alpha\, b_k \sqrt{\tfrac{1}{T_k(t)}\log T},$$
where $T_k(t)$ is the number of impressions ad $k$ has received before round $t$ and $\hat\theta_{k,t}$ is the number of clicks divided by $T_k(t)$ in the history log before round $t$. $\alpha$ is a tuning parameter that trades off exploration and exploitation; the larger $\alpha$ is, the more UCB resembles a uniform algorithm. Some versions of the UCB algorithm use $\log t$ instead of $\log T$ in the score; this difference is unimportant, and we use the latter form to simplify the proofs.
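One allocation step of this rule can be sketched as follows (our own code; trying unplayed ads first is an assumption, not specified in the example):

```python
import math

def ucb_pick(bids, clicks, shows, t_horizon, alpha=4.0):
    """Pick the ad with the largest score
    s_k = b_k * theta_hat_k + alpha * b_k * sqrt(log(T) / T_k);
    ads with no impressions yet are tried first."""
    scores = []
    for b, c, s in zip(bids, clicks, shows):
        if s == 0:
            scores.append(float('inf'))  # force at least one pull per ad
        else:
            scores.append(b * (c / s) + alpha * b * math.sqrt(math.log(t_horizon) / s))
    return max(range(len(bids)), key=lambda k: scores[k])
```

With equal bids and equal impression counts, the ad with more clicks wins the score comparison, e.g. `ucb_pick([1.0, 1.0], [5, 50], [100, 100], 1000)` picks the second ad.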
Under the UCB algorithm, it is well known that the $T_k$ satisfy the upper bounds in Definition 2.3. That the $T_k$ also satisfy the lower bounds is not obvious and has not been previously proved. Previous analyses of UCB, whose goal is to show low regret, do not need any lower bounds on the $T_k$; our analysis does require a lower bound because we need to control the accuracy of the estimator $\hat\theta_k$. The following theorem is, to the best of our knowledge, a novel result.
Theorem 2.1. Suppose we run the UCB algorithm with $\alpha \geq 4$; then the $T_k$ satisfy the bounds described in Definition 2.3.
In practice, the UCB algorithm satisfies the lower bounds even with a smaller $\alpha$. We refer the reader to Theorems 5.1 and 5.2 in Section 5.1 of the appendix for the proof.
As described in Section 2.1, we form the estimator $\hat\theta_k$ by dividing the number of clicks by the number of impressions $T_k$. The estimator $\hat\theta_k$ is not an average of $T_k$ i.i.d. Bernoulli random variables, because the sample size $T_k$ is correlated with $\hat\theta_k$. This is known as sample selection bias.
Definition 2.4. We define the sample selection bias as $E[\hat\theta_k] - \theta_k$.
We can still make the following concentration-of-measure statement about $\hat\theta_k$, for which we give a standard proof in Section 5.1 of the appendix.
Lemma 2.1. For any MAB learning algorithm, with probability at least $1 - O(n/T)$, for all $t = 1, \ldots, T$ and all $k = 1, \ldots, n$, the confidence bound
$$\theta_k - \sqrt{(1/T_k(t))\log T} \;\leq\; \hat\theta_{k,t} \;\leq\; \theta_k + \sqrt{(1/T_k(t))\log T}$$
holds.
2.3 Related Work
As mentioned before, how to design incentive-compatible payment rules when using MAB algorithms to select the best ads has been studied in [2] and [5]. However, their randomized payment scheme is very different from the current industry standard and is somewhat impractical. The idea of using MAB algorithms to simultaneously select ads and estimate click probabilities was proposed in [11], [8] and [13]. However, they either do not analyze estimation quality or do not analyze it beyond a concentration-of-measure deviation bound. Our work, in contrast, shows that it is in fact the estimation bias that matters in the game-theoretic setting. [9] studies the effect of CTR learning on incentive compatibility from the perspective of an advertiser with imperfect information.
This work is only a first step towards understanding the effect of estimation bias in MAB algorithms for search advertising auctions; we focus on a relatively simple setting with only a single ad slot and without budget constraints, which is already difficult to analyze. We leave the extensions to multiple ad slots and budget constraints as future work.
3 Revenue and Sample Selection Bias
In this section, we analyze the impact of a MAB algorithm on the search engine's revenue. We show that directly plugging in the estimators from a MAB algorithm (either uniform or adaptive) produces sample selection bias and damages the search engine's revenue; we then propose a simple de-biasing method that restores the revenue guarantee. Throughout the section, we fix a bid vector $b$ and define $(1) := \arg\max_k \hat\theta_k b_k$ and $(2) := \arg\max_{k \neq (1)} \hat\theta_k b_k$.
Before we present our main result, we pause to give some intuition about sample selection bias. Assume $b_1\theta_1 \geq b_2\theta_2 \geq \cdots \geq b_n\theta_n$ and suppose we use the UCB algorithm in the learning stage. If $\hat\theta_k > \theta_k$, the UCB algorithm will select ad $k$ more often and thus acquire more click data, gradually correcting the overestimation. If $\hat\theta_k < \theta_k$, however, the UCB algorithm will select ad $k$ less often and the underestimation persists. Therefore, $E[\hat\theta_k] < \theta_k$.
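This intuition is easy to reproduce numerically. The following Monte Carlo sketch (ours, not the paper's; the greedy stopping rule is a deliberate simplification of UCB's adaptivity) shows the average final estimate of an arm falling below its true rate:

```python
import random

def mean_estimate_under_greedy(theta, rounds, trials, rng):
    """Average final CTR estimate of an arm that is pulled greedily:
    it keeps receiving impressions only while its empirical rate
    stays at or above a fixed rival score equal to its true rate."""
    rival_score = theta
    total = 0.0
    for _ in range(trials):
        shows = 1
        clicks = 1 if rng.random() < theta else 0  # one forced initial pull
        for _ in range(rounds):
            if clicks / shows >= rival_score:  # greedy: pull only while winning
                shows += 1
                clicks += 1 if rng.random() < theta else 0
        total += clicks / shows
    return total / trials

rng = random.Random(1)
avg = mean_estimate_under_greedy(theta=0.3, rounds=50, trials=2000, rng=rng)
# avg falls below the true rate 0.3: underestimated runs stop collecting data
```

Trajectories whose early estimates dip low are frozen at a low value, while overestimates get corrected, so the average ends up below $\theta_k$.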
3.1 Revenue Analysis
The following theorem is the main result of this section; it shows that the bias of the CTR estimators can critically affect the search engine's revenue.
Theorem 3.1. Let
$$T_0 := 4n\log T, \qquad T^{adpt}_{min} := \sum_{k \neq 1}\frac{5\max(b_1^2,\, b_k^2)}{c\,\Delta_k^2}\log T, \qquad T^{unif}_{min} := \frac{4n\, b_{max}^2}{c\,\min_{k \neq 1}\Delta_k^2}\log T.$$
Let $c$ be the constant introduced in Definitions 2.3 and 2.2.

If $T \geq T_0$, then, for either adaptive or uniform algorithms,
$$b_2\theta_2 - E[\widehat{rev}] \;\leq\; \Big(b_2\theta_2 - b_2 E[\hat\theta_2]\frac{\theta_1}{E[\hat\theta_1]}\Big) + O\!\Big(\sqrt{\tfrac{n}{T}\log T}\Big) + O\!\Big(\tfrac{n}{T}\Big).$$

If we use adaptive algorithms and $T \geq T^{adpt}_{min}$, or if we use uniform algorithms and $T \geq T^{unif}_{min}$, then
$$b_2\theta_2 - E[\widehat{rev}] \;\leq\; \Big(b_2\theta_2 - b_2 E[\hat\theta_2]\frac{\theta_1}{E[\hat\theta_1]}\Big) + O\!\Big(\tfrac{n}{T}\Big).$$
We leave the full proof to Section 5.2 of the appendix and provide a quick sketch here. In the first case, where $T$ is smaller than the thresholds $T^{adpt}_{min}$ or $T^{unif}_{min}$, the probability of incorrect ranking, that is, of incorrectly identifying the best ad, is high, and we can only use concentration-of-measure bounds to control the revenue loss. In the second case, we show that we almost always identify the best ad, and therefore the $\sqrt{(n/T)\log T}$ error term disappears.
The term $b_2\theta_2 - b_2 E[\hat\theta_2]\frac{\theta_1}{E[\hat\theta_1]}$ in the theorem is in general positive because of sample selection bias. With bias, the best bound we can get on the expectation $E[\hat\theta_2]$ is $|E[\hat\theta_2] - \theta_2| \leq O\big(\sqrt{\tfrac{1}{T_2}\log T}\big)$, which follows from the concentration inequality (Lemma 2.1).
Remark 3.1. With adaptive learning, $T_1$ is at least of order $T/n$, and by the lower bound of Definition 2.3, $\frac{1}{T_2}\log T \leq \frac{\Delta_2^2}{c'\, b_2^2}$. Therefore, $\frac{\theta_1}{E[\hat\theta_1]}$ is at most of order $1 + \sqrt{\frac{n}{T}\log T}$, and $b_2\theta_2 - b_2 E[\hat\theta_2]$ is of order $O(\Delta_2)$. Combining these derivations, we get $b_2\theta_2 - E[\widehat{rev}] \leq O(\Delta_2) + O\big(\frac{n}{T}\big)$. This bound suggests that the revenue loss does not converge to 0 as $T$ increases. Simulations in Section 5 show that our bound is in fact tight: the expected revenue loss for adaptive learning, in the presence of sample selection bias, can be large and persistent.
For many common uniform learning algorithms (uniformly random selection, for instance), sample selection bias does not exist, so the expected revenue loss is smaller. This seems to suggest that, because of sample selection bias, adaptive algorithms are inferior from a revenue-optimization perspective. The picture is reversed, however, if we use a debiasing technique such as the one we propose in Section 3.2. When the sample selection bias is 0, adaptive algorithms yield better revenue because they correctly identify the best advertisement in fewer rounds. We make this discussion concrete with the following results, in which we assume a post-learning unbiasedness condition.
Definition 3.1. We say that the post-learning unbiasedness condition holds if for all $k$, $E[\hat\theta_k] = \theta_k$.
This condition does not hold in general, but the simple debiasing procedure in Section 3.2 ensures that it always does. The following corollary follows immediately from Theorem 3.1 with an application of Jensen's inequality.
an application of Jensen?s inequality.
adpt
unif
Corollary 3.1. Suppose the post-learning unbiasedness condition holds. Let T0 ? Tmin
? Tmin
be defined as in Theorem 3.1.
pn
c ?O
If we use either adaptive or uniform algorithms and T ? T0 , then b2 ?2 ? E[rev]
T log T .
adpt
unif
If we use adaptive algorithm and T ? Tmin
or if we use uniform algorithm and T ? Tmin
, then
n
c ?O
b2 ?2 ? E[rev]
T
The revenue-loss guarantee is much stronger with unbiasedness, which we confirm in our simulations in Section 5.
Corollary 3.1 also shows that the revenue loss drops sharply from $\sqrt{(n/T)\log T}$ to $n/T$ once $T$ exceeds some threshold. Intuitively, this happens because the probability of incorrect ranking becomes negligibly small when $T$ is larger than the threshold. Because the adaptive learning threshold $T^{adpt}_{min}$ is always smaller, and often much smaller, than the uniform learning threshold $T^{unif}_{min}$, Corollary 3.1 shows that adaptive learning can guarantee much lower revenue loss when $T$ is between $T^{adpt}_{min}$ and $T^{unif}_{min}$. It is in fact the same adaptiveness that leads to low regret that also yields the strong revenue-loss guarantees for adaptive learning algorithms.
3.2 Sample Selection Debiasing
Given a MAB algorithm, one simple meta-algorithm that produces an unbiased estimator while keeping the $T_k$ within Definitions 2.2 and 2.3 is to maintain "held-out" click history logs. Instead of keeping one history log per advertisement, we keep two; whenever the original algorithm allocates one impression to advertisement $k$, we actually allocate two impressions at a time and record the click result of one impression in the first history log and the click result of the other in the held-out history log.
Whenever the MAB algorithm requires the estimators $\hat\theta_k$ or click data to make an allocation, we allow it access only to the first history log. The estimator learned from the first history log is biased by the selection procedure, but the held-out history log, since it does not influence ad selection, can be used to output an unbiased estimate of each advertisement's click probability at the end of the exploration stage. Although this scheme doubles the learning length, sample selection debiasing significantly improves the guarantee on expected revenue, as shown in both theory and simulations.
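A minimal sketch of this meta-algorithm (our own rendering; the greedy picker below is just a placeholder for any MAB algorithm):

```python
import random

def heldout_learning(true_ctrs, T, pick, rng):
    """Run MAB rule `pick` on a visible log while mirroring every
    allocation into a held-out log; the final estimates come from
    the held-out log only, so selection cannot bias them."""
    n = len(true_ctrs)
    vis_clicks, vis_shows = [0] * n, [0] * n
    held_clicks, held_shows = [0] * n, [0] * n
    for _ in range(T):
        k = pick(vis_clicks, vis_shows)  # the MAB sees only the visible log
        for clicks, shows in ((vis_clicks, vis_shows), (held_clicks, held_shows)):
            shows[k] += 1
            clicks[k] += rng.random() < true_ctrs[k]
    # Held-out counts are i.i.d. given the realized T_k's.
    return [held_clicks[k] / max(held_shows[k], 1) for k in range(n)]

def greedy(clicks, shows):
    # Simple adaptive rule used for illustration only.
    if 0 in shows:
        return shows.index(0)
    return max(range(len(shows)), key=lambda k: clicks[k] / shows[k])

rng = random.Random(2)
est = heldout_learning([0.2, 0.1], T=2000, pick=greedy, rng=rng)
```

Each call to `pick` costs two impressions in this scheme, which is the doubling of the learning length mentioned above.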
4 Advertisers' Utilities and ELM Bias
In this section, we analyze the impact of a MAB algorithm on advertisers' utilities. The key result of this section is that adaptive algorithms can exacerbate the "estimation of the largest mean" (ELM) bias, which arises because the expectation of a maximum is larger than the maximum of the expectations. This ELM bias damages advertisers' utilities through overcharging.
We assume the reader is familiar with the concept of incentive compatibility and give only a brief review. We suppose there exists a true value $v_i$ that measures exactly how much a click is worth to advertiser $i$. The per-impression utility of advertiser $i$ in the auction is then $\theta_i(v_i - p_i)$ if ad $i$ is displayed, where $p_i$ is the per-click payment charged by the search engine. An auction mechanism is called incentive compatible if each advertiser maximizes his or her utility by bidding truthfully: $b_i = v_i$. For auctions that are close to but not fully incentive compatible, we also define the player-regret as the utility advertiser $i$ loses by truthfully bidding $v_i$ rather than a utility-maximizing bid.
4.1 Player-Regret Analysis
We define $v = (v_1, \ldots, v_n)$ to be the true per-click values of the advertisers. For simplicity, we assume that the post-learning unbiasedness condition (Definition 3.1) holds for all results in this section. We introduce some formal definitions before we begin our analysis. For a fixed vector of competing bids $b_{-k}$, we define the player utility as
$$u_k(b_k) := \mathbb{I}\big\{b_k\hat\theta_k(b_k) \geq b_{k'}\hat\theta_{k'}(b_k)\ \forall k'\big\}\;\theta_k\Big(v_k - \frac{\max_{k' \neq k} b_{k'}\hat\theta_{k'}(b_k)}{\hat\theta_k(b_k)}\Big),$$
where the indicator denotes whether the impression is allocated to ad $k$. We define the player-regret, with respect to a bid vector $b$, as the player's optimal utility gain from false bidding, $\sup_{b_k} E[u_k(b_k)] - E[u_k(v_k)]$. It is important to note that we suppress the dependence of $u_k(b_k)$ and $\hat\theta_k(b_k)$ on the competing bids $b_{-k}$ in our notation. Without loss of generality, we consider the utility of player 1. We fix $b_{-1}$ and define $k^* := \arg\max_{k \neq 1} b_k\theta_k$. We divide our analysis into cases, which cover the different possible settings of $v_1$ and the competing bids $b_{-1}$.
Theorem 4.1. The following holds for both uniform and adaptive algorithms.
Suppose $b_{k^*}\theta_{k^*} \geq v_1\theta_1 - \Omega\big(\sqrt{\tfrac{n}{T}\log T}\big)$; then $\sup_{b_1} E[u_1(b_1)] - E[u_1(v_1)] \leq O\big(\tfrac{n}{T}\big)$. Suppose $|v_1\theta_1 - b_{k^*}\theta_{k^*}| \leq O\big(\sqrt{\tfrac{n}{T}\log T}\big)$; then $\sup_{b_1} E[u_1(b_1)] - E[u_1(v_1)] \leq O\big(\sqrt{\tfrac{n}{T}\log T}\big)$.
Theorem 4.1 shows that when $v_1\theta_1$ is not much larger than $b_{k^*}\theta_{k^*}$, the player-regret is not too large. The next theorem shows that when $v_1\theta_1$ is much larger than $b_{k^*}\theta_{k^*}$, however, the player-regret can be large.
Theorem 4.2. Suppose $v_1\theta_1 - b_{k^*}\theta_{k^*} \geq \Omega\big(\sqrt{\tfrac{n}{T}\log T}\big)$. Then, for both uniform and adaptive algorithms,
$$\forall b_1:\quad E[u_1(b_1, b_{-1})] - E[u_1(v_1, b_{-1})] \;\leq\; \max\Big\{0,\; E\big[b_{(2)}(v_1)\hat\theta_{(2)}(v_1)\big] - E\big[b_{(2)}(b_1)\hat\theta_{(2)}(b_1)\big]\Big\} + O\Big(\tfrac{n}{T}\Big).$$
We give the proofs of both Theorem 4.1 and Theorem 4.2 in Section 5.3 of the appendix.
Both expectations $E[b_{(2)}(v_1)\hat\theta_{(2)}(v_1)]$ and $E[b_{(2)}(b_1)\hat\theta_{(2)}(b_1)]$ can be larger than $b_2\theta_2$ because $E[\max_{k \neq 1} b_k\hat\theta_k(v_1)] \geq \max_{k \neq 1} b_k E[\hat\theta_k(v_1)]$.
Remark 4.1. In the special case of only two advertisers, it must be that $(2) = 2$, and therefore $E[b_{(2)}(v_1)\hat\theta_{(2)}(v_1)] = b_2\theta_2$ and $E[b_{(2)}(b_1)\hat\theta_{(2)}(b_1)] = b_2\theta_2$. The player-regret is then very small: $\sup_{b_1} E[u_1(b_1, b_2)] - E[u_1(v_1, b_2)] \leq O\big(\tfrac{n}{T}\big)$.
The incentive to deviate can be much larger when there are more than two advertisers. Intuitively, this is because the bias $E[b_{(2)}(b_1)\hat\theta_{(2)}(b_1)] - b_2\theta_2$ increases when $T_2(b_1), \ldots, T_n(b_1)$ are low, that is, when the variances of the $\hat\theta_k(b_1)$ are high. An omniscient advertiser 1, believing that $v_1\theta_1 \gg b_2\theta_2$, can thus increase his or her utility by underbidding, manipulating the learning algorithm into allocating more rounds to advertisers $2, \ldots, n$ and reducing the variances of the $\hat\theta_k(b_1)$. Such a strategy gives advertiser 1 negative utility in the CTR learning stage, but positive utility in the longer SP auction stage, and thus an overall increase in utility. In the case of uniform learning, the advertiser's manipulation is limited because the learning algorithm is not significantly affected by the bid.
Corollary 4.1. Let the competing bid vector $b_{-1}$ be fixed, and suppose that $v_1\theta_1 - b_{k^*}\theta_{k^*} \geq \Omega\big(\sqrt{\tfrac{n}{T}\log T}\big)$. If uniform learning is used in the first stage, we have
$$\sup_{b_1} E[u_1(b_1, b_{-1})] - E[u_1(v_1, b_{-1})] \;\leq\; O\Big(\sqrt{\tfrac{n}{T}\log T}\Big).$$
Nevertheless, by contrasting this $\sqrt{(n/T)\log T}$ bound with the $n/T$ bound we would get in the two-advertiser case, we see the negative impact of ELM bias on incentive compatibility. The negative effect is even more pronounced in the case of adaptive learning: advertiser 1 can increase its own utility by bidding some $b_1$ smaller than $v_1$ but still large enough that $b_1\hat\theta_1(b_1)$ remains ranked highest at the end of the learning stage. We explain this intuition with more detail in the following example, which we also simulate in Section 5.
Example 4.1. Suppose we have $n$ advertisers with $b_2\theta_2 = b_3\theta_3 = \cdots = b_n\theta_n$, and suppose $v_1\theta_1 \gg b_2\theta_2$. We show that advertiser 1 has an incentive to underbid.
Let $\Delta_k(b_1) := b_1\theta_1 - b_k\theta_k$; the $\Delta_k(b_1)$ are then identical for all $k$, and $\Delta_k(v_1) \gg 0$ by the previous supposition. Suppose advertiser 1 bids $b_1 < v_1$ but with $\Delta_k(b_1) \gg 0$ still. We assume that $T_k(b_1) = \frac{\log T}{\Delta_k(b_1)^2}$ for all $k = 2, \ldots, n$, which must hold for large $T$ by the definition of adaptive learning.
From Lemma 5.4 in the appendix, we know that
$$E\big[b_{(2)}(b_1)\hat\theta_{(2)}(b_1)\big] - b_2\theta_2 \;\leq\; \sqrt{\frac{\log(n-1)}{T_k}} \;=\; \sqrt{\frac{\log(n-1)}{\log T}}\,(b_1\theta_1 - b_k\theta_k). \qquad (4.1)$$
Eqn. (4.1) is an upper bound, but numerical experiments easily show that $E[b_{(2)}(b_1)\hat\theta_{(2)}(b_1)]$ is in fact of the same order as the right-hand side of Eqn. (4.1).
From Eqn. (4.1), we derive that, for any $b_1$ such that $b_1\theta_1 - b_2\theta_2 \geq \Omega\big(\sqrt{\tfrac{n}{T}\log T}\big)$,
$$E[u_1(b_1, b_{-1})] - E[u_1(v_1, b_{-1})] \;\leq\; O\Big(\sqrt{\tfrac{\log(n-1)}{\log T}}\,(v_1\theta_1 - b_1\theta_1)\Big).$$
Thus, we cannot guarantee that the mechanism is approximately truthful. The bound decreases with $T$ at a very slow logarithmic rate, because with an adaptive algorithm a longer learning period $T$ need not reduce the variances of many of the estimators $\hat\theta_k$.
We would like at this point to briefly compare our results with those of [9], which shows, under an imperfect-information definition of utility, that advertisers have an incentive to overbid so that their CTRs can be better learned by the search engine. Our results are not contradictory, since we show only that the leading advertiser has an incentive to underbid.
4.2 Bias Reduction in Estimation of the Largest Mean
The previous analysis shows that the incentive-incompatibility issue in the case of adaptive learning is caused by the fact that the estimator $b_{(2)}\hat\theta_{(2)} = \max_{k \neq 1} b_k\hat\theta_k$ is upward biased: $E[b_{(2)}\hat\theta_{(2)}]$ is much larger than $b_2\theta_2$ in general, even if the individual estimators $\hat\theta_k$ are unbiased. We can abstract away the game-theoretic setting and distill a problem known in the statistics literature as "estimation of the largest mean" (ELM): given $N$ probabilities $\{\theta_k\}_{k=1,\ldots,N}$, find an estimator $\hat\theta_{max}$ such that $E[\hat\theta_{max}] = \max_k \theta_k$. Unfortunately, as proved by [4] and [3], an unbiased estimator of the largest mean does not exist for many common distributions, including the Gaussian, Binomial, and Beta; we thus survey some methods for reducing the bias.
[3] studies techniques that explicitly estimate and then subtract the bias. Their method, though interesting, is specific to the case of selecting the larger mean among only two distributions. [1]
proposes a different approach based on data-splitting. We randomly partition the data in the click-through history into two sets $S$ and $E$ and obtain two estimators $\hat\theta_k^S$ and $\hat\theta_k^E$. We then use $\hat\theta_k^S$ for selection and output the weighted average $\lambda\hat\theta_k^S + (1-\lambda)\hat\theta_k^E$. We cannot use only $\hat\theta_k^E$ for estimating the value because, without conditioning on a specific selection, it is downward biased. We unfortunately know of no principled way to choose $\lambda$. We implement this scheme with $\lambda = 0.5$ and show in the simulation studies of Section 5 that it is effective.
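The data-splitting scheme can be sketched as follows (our own code; the even split and the list-of-logs interface are assumptions of this sketch):

```python
import random

def split_elm_estimate(click_logs, lam, rng):
    """Estimate max_k theta_k via data-splitting: shuffle each arm's
    0/1 click history, split it into halves S and E, select the arm
    on S alone, then value it with a lam-weighted mix of its S and E
    estimates."""
    s_est, e_est = [], []
    for log in click_logs:
        log = list(log)
        rng.shuffle(log)
        half = len(log) // 2
        s, e = log[:half], log[half:]
        s_est.append(sum(s) / max(len(s), 1))
        e_est.append(sum(e) / max(len(e), 1))
    k = max(range(len(click_logs)), key=lambda i: s_est[i])  # select on S only
    return lam * s_est[k] + (1 - lam) * e_est[k]

rng = random.Random(3)
logs = [[1] * 3 + [0] * 27, [1] * 4 + [0] * 26]  # two arms, 30 impressions each
val = split_elm_estimate(logs, lam=0.5, rng=rng)
```

Because the E half never influences which arm is selected, mixing it in pulls the estimate back toward the selected arm's true mean.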
5 Simulations
We simulate our two-stage framework for various values of $T$. Figures 1a and 1b show the effect of sample selection debiasing (see Sections 3 and 3.2) on the expected revenue when adaptive learning (the UCB algorithm of Example 2.1, with tuning parameter $\alpha = 1$) is used. One can see that selection bias harms the revenue, but the debiasing method of Section 3.2, even though it holds out half of the click data, significantly lowers the expected revenue loss, as shown theoretically in Corollary 3.1. Figure 1c shows that when there are a large number of poor-quality ads, low-regret adaptive algorithms indeed achieve better revenue with many fewer rounds of learning. Figure 1d shows the effect of estimation-of-the-largest-mean (ELM) bias on the utility gain of the advertiser. We simulate the setting of Example 4.1 and see that without ELM debiasing, the advertiser can noticeably increase utility by underbidding. We implement the ELM debiasing technique of Section 4.2; it does not completely solve the problem, since it does not completely remove the bias (such a task has been proved impossible), but it does ameliorate it: the increase in utility from underbidding is decreased.
[Figure 1 consists of four panels. Panels (a)-(c) plot expected revenue loss against rounds of exploration, comparing runs with and without selection debiasing (a, b) and uniform learning against adaptive learning with debiasing (c); panel (d) plots player utility gain against bid price, with and without ELM debiasing.]
(a) $n = 2$, $\theta_1 = 0.09$, $\theta_2 = 0.1$, $b_1 = 2$, $b_2 = 1$.
(b) $n = 2$, $\theta_1 = 0.3$, $\theta_2 = 0.1$, $b_1 = 0.7$, $b_2 = 1$.
(c) $n = 42$, $\theta_1 = 0.2$, $\theta_2 = 0.15$, $b_1 = 0.8$, $b_2 = 1$; all other $b_k = 1$, $\theta_k = 0.01$.
(d) $n = 5$, $\vec\theta = \{0.15, 0.11, 0.1, 0.05, 0.01\}$, $\vec b_{-1} = \{0.9, 1, 2, 1\}$.
Figure 1: Simulation studies demonstrating the effect of sample selection debiasing and ELM debiasing. The revenue loss in panels (a)-(c) is relative, measured as $1 - \widehat{rev}/(b_2\theta_2)$; negative loss indicates a revenue improvement over the oracle SP auction. Panel (d) shows advertiser 1's utility gain as a function of possible bids; the vertical dotted black line marks the advertiser's true value at $v = 1$. Utility gain is relative, defined as $\mathrm{utility}(b)/\mathrm{utility}(v) - 1$; higher utility gain means that advertiser 1 can benefit more from strategic bidding. Expected values are computed over 500 simulated trials.
References
[1] K. Alam. A two-sample estimate of the largest mean. Annals of the Institute of Statistical Mathematics, 19(1):271-283, 1967.
[2] M. Babaioff, R. D. Kleinberg, and A. Slivkins. Truthful mechanisms with implicit payment computation. arXiv preprint arXiv:1004.3630, 2010.
[3] S. Blumenthal and A. Cohen. Estimation of the larger of two normal means. Journal of the American Statistical Association, pages 861-876, 1968.
[4] Bhaeiyal Ishwaei D, D. Shabma, and K. Krishnamoorthy. Non-existence of unbiased estimators of ordered parameters. Statistics: A Journal of Theoretical and Applied Statistics, 16(1):89-95, 1985.
[5] N. R. Devanur and S. M. Kakade. The price of truthfulness for pay-per-click auctions. In Proceedings of the Tenth ACM Conference on Electronic Commerce, pages 99-106, 2009.
[6] Benjamin Edelman, Michael Ostrovsky, and Michael Schwarz. Internet advertising and the generalized second price auction: Selling billions of dollars worth of keywords. Technical report, National Bureau of Economic Research, 2005.
[7] N. Gatti, A. Lazaric, and F. Trovò. A truthful learning mechanism for contextual multi-slot sponsored search auctions with externalities. In Proceedings of the 13th ACM Conference on Electronic Commerce, pages 605-622. ACM, 2012.
[8] R. Gonen and E. Pavlov. An incentive-compatible multi-armed bandit mechanism. In Proceedings of the Twenty-Sixth Annual ACM Symposium on Principles of Distributed Computing, pages 362-363. ACM, 2007.
[9] S. M. Li, M. Mahdian, and R. McAfee. Value of learning in sponsored search auctions. Internet and Network Economics, pages 294-305, 2010.
[10] Noam Nisan, Tim Roughgarden, Eva Tardos, and Vijay V. Vazirani. Algorithmic Game Theory. Cambridge University Press, 2007.
[11] Sandeep Pandey and Christopher Olston. Handling advertisements of unknown quality in search advertising. Advances in Neural Information Processing Systems, 19:1065, 2007.
[12] A. D. Sarma, S. Gujar, and Y. Narahari. Multi-armed bandit mechanisms for multi-slot sponsored search auctions. arXiv preprint arXiv:1001.1414, 2010.
[13] J. Wortman, Y. Vorobeychik, L. Li, and J. Langford. Maintaining equilibria during exploration in sponsored search auctions. Internet and Network Economics, pages 119-130, 2007.
9
Optimization, Learning, and Games with Predictable
Sequences
Alexander Rakhlin
University of Pennsylvania
Karthik Sridharan
University of Pennsylvania
Abstract
We provide several applications of Optimistic Mirror Descent, an online learning
algorithm based on the idea of predictable sequences. First, we recover the Mirror Prox algorithm for offline optimization, prove an extension to Hölder-smooth
functions, and apply the results to saddle-point type problems. Next, we prove
that a version of Optimistic Mirror Descent (which has a close relation to the Exponential Weights algorithm) can be used by two strongly-uncoupled players in
a finite zero-sum matrix game to converge to the minimax equilibrium at the rate
of O((log T)/T). This addresses a question of Daskalakis et al [6]. Further, we
consider a partial information version of the problem. We then apply the results
to convex programming and exhibit a simple algorithm for the approximate Max
Flow problem.
1 Introduction
Recently, no-regret algorithms have received increasing attention in a variety of communities, including theoretical computer science, optimization, and game theory [3, 1]. The wide applicability
of these algorithms is arguably due to the black-box regret guarantees that hold for arbitrary sequences. However, such regret guarantees can be loose if the sequence being encountered is not
"worst-case". The reduction in "arbitrariness" of the sequence can arise from the particular structure of the problem at hand, and should be exploited. For instance, in some applications of online
methods, the sequence comes from an additional computation done by the learner, thus being far
from arbitrary.
One way to formally capture the partially benign nature of data is through a notion of predictable
sequences [11]. We exhibit applications of this idea in several domains. First, we show that the
Mirror Prox method [9], designed for optimizing non-smooth structured saddle-point problems, can
be viewed as an instance of the predictable sequence approach. Predictability in this case is due
precisely to smoothness of the inner optimization part and the saddle-point structure of the problem.
We extend the results to Hölder-smooth functions, interpolating between the case of well-predictable gradients and "unpredictable" gradients.
Second, we address the question raised in [6] about existence of "simple" algorithms that converge at the rate of O(T^{−1}) when employed in an uncoupled manner by players in a zero-sum finite matrix game, yet maintain the usual O(T^{−1/2}) rate against arbitrary sequences. We give a positive
answer and exhibit a fully adaptive algorithm that does not require the prior knowledge of whether
the other player is collaborating. Here, the additional predictability comes from the fact that both
players attempt to converge to the minimax value. We also tackle a partial information version of
the problem where the player has only access to the real-valued payoff of the mixed actions played
by the two players on each round rather than the entire vector.
Our third application is to convex programming: optimization of a linear function subject to convex constraints. This problem often arises in theoretical computer science, and we show that the idea of predictable sequences can be used here too. We provide a simple algorithm for ε-approximate Max Flow for a graph with d edges with time complexity Õ(d^{3/2}/ε), a performance previously obtained through a relatively involved procedure [8].
2 Online Learning with Predictable Gradient Sequences
Let us describe the online convex optimization (OCO) problem and the basic algorithm studied in [4, 11]. Let F be a convex set of moves of the learner. On round t = 1, . . . , T, the learner makes a prediction f_t ∈ F and observes a convex function G_t on F. The objective is to keep the regret

(1/T) Σ_{t=1}^T ( G_t(f_t) − G_t(f*) )

small for any f* ∈ F. Let R be a 1-strongly convex function w.r.t. some norm ‖·‖ on F, and let g_0 = argmin_{g∈F} R(g). Suppose that at the beginning of every round t, the learner has access to M_t, a vector computable based on the past observations or side information. In this paper we study the Optimistic Mirror Descent algorithm, defined by the interleaved sequence

f_t = argmin_{f∈F} η_t ⟨f, M_t⟩ + D_R(f, g_{t−1}),   g_t = argmin_{g∈F} η_t ⟨g, ∇G_t(f_t)⟩ + D_R(g, g_{t−1})   (1)
where D_R is the Bregman divergence with respect to R and {η_t} is a sequence of step sizes that can be chosen adaptively based on the sequence observed so far. The method adheres to the OCO protocol since M_t is available at the beginning of round t, and ∇G_t(f_t) becomes available after the prediction f_t is made. The sequence {f_t} will be called primary, while {g_t} will be called secondary. This method was proposed in [4] for M_t = ∇G_{t−1}(f_{t−1}), and the following lemma is a straightforward extension of the result in [11] for general M_t:

Lemma 1. Let F be a convex set in a Banach space B. Let R : B → ℝ be a 1-strongly convex function on F with respect to some norm ‖·‖, and let ‖·‖_* denote the dual norm. For any fixed step-size η, the Optimistic Mirror Descent Algorithm yields, for any f* ∈ F,

Σ_{t=1}^T G_t(f_t) − Σ_{t=1}^T G_t(f*) ≤ Σ_{t=1}^T ⟨f_t − f*, ∇_t⟩ ≤ η^{−1} R² + Σ_{t=1}^T ‖∇_t − M_t‖_* ‖g_t − f_t‖ − (1/2η) Σ_{t=1}^T ( ‖g_t − f_t‖² + ‖g_{t−1} − f_t‖² )   (2)

where R ≥ 0 is such that D_R(f*, g_0) ≤ R² and ∇_t = ∇G_t(f_t).
When applying the lemma, we will often use the simple fact that

‖∇_t − M_t‖_* ‖g_t − f_t‖ = inf_{ρ>0} { (ρ/2) ‖∇_t − M_t‖_*² + (1/2ρ) ‖g_t − f_t‖² }.   (3)

In particular, by setting ρ = η, we obtain the (unnormalized) regret bound of η^{−1} R² + (η/2) Σ_{t=1}^T ‖∇_t − M_t‖_*², which is R √(2 Σ_{t=1}^T ‖∇_t − M_t‖_*²) by choosing η optimally. Since this choice is not known ahead of time, one may either employ the doubling trick, or choose the step size adaptively:
Corollary 2. Consider the step size

η_t = R_max min{ ( √(Σ_{i=1}^{t−1} ‖∇_i − M_i‖_*²) + √(Σ_{i=1}^{t−2} ‖∇_i − M_i‖_*²) )^{−1}, 1 }

with R_max² = sup_{f,g∈F} D_R(f, g). Then the regret of the Optimistic Mirror Descent algorithm is upper bounded by 3.5 R_max ( √(Σ_{t=1}^T ‖∇_t − M_t‖_*²) + 1 ).
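To make the interleaved update (1) concrete, here is a minimal sketch, not from the paper, instantiating Optimistic Mirror Descent with the Euclidean regularizer R(f) = ½‖f‖₂², for which the Bregman divergence is squared distance and both steps reduce to projected gradient steps. The linear losses, the box constraint set, and the drifting gradient sequence are all illustrative assumptions.

```python
import numpy as np

def project_box(v, lo=-1.0, hi=1.0):
    """Euclidean projection onto the box [lo, hi]^d."""
    return np.clip(v, lo, hi)

def optimistic_md(grads, eta, d):
    """Optimistic Mirror Descent with R(f) = 0.5*||f||_2^2 over a box.

    With the Euclidean regularizer both steps of the interleaved update
    become projected gradient steps.  Losses are linear, G_t(f) = <grads[t], f>,
    and the predictable sequence is the previous gradient, M_t = grads[t-1]."""
    g = np.zeros(d)              # secondary iterate, g_0 = argmin R
    M = np.zeros(d)              # M_1: no prediction available yet
    fs = []
    for nabla in grads:
        f = project_box(g - eta * M)      # primary step (uses prediction M_t)
        fs.append(f)
        g = project_box(g - eta * nabla)  # secondary step (true gradient)
        M = nabla                         # next round's prediction
    return np.array(fs)

rng = np.random.default_rng(0)
T, d = 500, 2
base = rng.standard_normal(d)
# slowly drifting, hence predictable, gradients
grads = [base + 0.01 * rng.standard_normal(d) for _ in range(T)]

fs = optimistic_md(grads, eta=0.1, d=d)
# best fixed decision in hindsight: the box corner opposing the summed gradient
f_star = -np.sign(np.sum(grads, axis=0))
regret = sum(g_t @ f_t for g_t, f_t in zip(grads, fs)) - sum(g_t @ f_star for g_t in grads)
print(regret / T)  # small, since ||grad_t - M_t|| is tiny here
```

Because the gradients barely move between rounds, the prediction term ‖∇_t − M_t‖_* in the bound above is tiny and the average regret is correspondingly small.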
These results indicate that tighter regret bounds are possible if one can guess the next gradient ∇_t by computing M_t. One such case arises in offline optimization of a smooth function, whereby the previous gradient turns out to be a good proxy for the next one. More precisely, suppose we aim to optimize a function G(f) whose gradients are Lipschitz continuous: ‖∇G(f) − ∇G(g)‖_* ≤ H‖f − g‖ for some H > 0. In this optimization setting, no guessing of M_t is needed: we may simply query the oracle for the gradient and set M_t = ∇G(g_{t−1}). The Optimistic Mirror Descent then becomes

f_t = argmin_{f∈F} η_t ⟨f, ∇G(g_{t−1})⟩ + D_R(f, g_{t−1}),   g_t = argmin_{g∈F} η_t ⟨g, ∇G(f_t)⟩ + D_R(g, g_{t−1})

which can be recognized as the Mirror Prox method, due to Nemirovski [9]. By smoothness, ‖∇G(f_t) − M_t‖_* = ‖∇G(f_t) − ∇G(g_{t−1})‖_* ≤ H‖f_t − g_{t−1}‖. Lemma 1 with Eq. (3) and ρ = η = 1/H immediately yields a bound

Σ_{t=1}^T ( G(f_t) − G(f*) ) ≤ H R²,

which implies that the average f̄_T = (1/T) Σ_{t=1}^T f_t satisfies G(f̄_T) − G(f*) ≤ HR²/T, a known bound for Mirror Prox. We now extend this result to arbitrary δ-Hölder smooth functions, that is, convex functions G such that ‖∇G(f) − ∇G(g)‖_* ≤ H‖f − g‖^δ for all f, g ∈ F.
Lemma 3. Let F be a convex set in a Banach space B and let R : B → ℝ be a 1-strongly convex function on F with respect to some norm ‖·‖. Let G be a convex δ-Hölder smooth function with constant H > 0 and δ ∈ [0, 1]. Then the average f̄_T = (1/T) Σ_{t=1}^T f_t of the trajectory given by the Optimistic Mirror Descent Algorithm enjoys

G(f̄_T) − inf_{f∈F} G(f) ≤ 8 H R^{1+δ} / T^{(1+δ)/2}

where R ≥ 0 is such that sup_{f∈F} D_R(f, g_0) ≤ R².

This result provides a smooth interpolation between the T^{−1/2} rate at δ = 0 (that is, no predictability of the gradient is possible) and the T^{−1} rate when the smoothness structure allows for a dramatic speed up with a very simple modification of the original Mirror Descent.
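The Mirror Prox recursion above can be sketched in a few lines. With the Euclidean regularizer the two steps become the classical extragradient method; the quadratic objective and all names below are illustrative assumptions, not from the paper.

```python
import numpy as np

# Mirror Prox (extragradient) with the Euclidean regularizer, applied to a
# smooth quadratic G(f) = 0.5 * f'Qf - b'f, using rho = eta = 1/H as in the text.
rng = np.random.default_rng(1)
d = 5
W = rng.standard_normal((d, d))
Q = W @ W.T + np.eye(d)              # positive definite: G convex, grad H-Lipschitz
b = rng.standard_normal(d)
H = np.linalg.eigvalsh(Q).max()      # smoothness constant of the gradient

grad = lambda f: Q @ f - b
G = lambda f: 0.5 * f @ Q @ f - b @ f
f_star = np.linalg.solve(Q, b)       # exact minimizer, for reference only

eta = 1.0 / H
g = np.zeros(d)
iterates = []
for _ in range(200):
    f = g - eta * grad(g)   # primary step: gradient queried at g_{t-1}
    g = g - eta * grad(f)   # secondary step: gradient at the fresh point f_t
    iterates.append(f)

f_bar = np.mean(iterates, axis=0)
gap = G(f_bar) - G(f_star)
print(gap)  # on the order of H*R^2/T, as the bound above predicts
```

The only difference from plain gradient descent is where the gradient is queried: the primary step reuses the gradient at the secondary point, which is exactly the "guess" M_t = ∇G(g_{t−1}).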
3 Structured Optimization
In this section we consider the structured optimization problem

argmin_{f∈F} G(f)

where G(f) is of the form G(f) = sup_{x∈X} φ(f, x) with φ(·, x) convex for every x ∈ X and φ(f, ·) concave for every f ∈ F. Both F and X are assumed to be convex sets. While G itself need not be smooth, it has been recognized that the structure can be exploited to improve rates of optimization if the function is smooth [10]. From the point of view of online learning, we will see that the optimization problem of the saddle point type can be solved by playing two online convex optimization algorithms against each other (henceforth called Players I and II).

Specifically, assume that Player I produces a sequence f_1, . . . , f_T by using a regret-minimization algorithm, such that

(1/T) Σ_{t=1}^T φ(f_t, x_t) − inf_{f∈F} (1/T) Σ_{t=1}^T φ(f, x_t) ≤ Rate¹(x_1, . . . , x_T)   (4)

and Player II produces x_1, . . . , x_T with

(1/T) Σ_{t=1}^T ( −φ(f_t, x_t) ) − inf_{x∈X} (1/T) Σ_{t=1}^T ( −φ(f_t, x) ) ≤ Rate²(f_1, . . . , f_T).   (5)
By a standard argument (see e.g. [7]),

inf_f (1/T) Σ_{t=1}^T φ(f, x_t) ≤ inf_f φ(f, x̄_T) ≤ sup_x inf_f φ(f, x) ≤ inf_f sup_x φ(f, x) ≤ sup_x φ(f̄_T, x) ≤ sup_x (1/T) Σ_{t=1}^T φ(f_t, x)

where f̄_T = (1/T) Σ_{t=1}^T f_t and x̄_T = (1/T) Σ_{t=1}^T x_t. By adding (4) and (5), we have

sup_x (1/T) Σ_{t=1}^T φ(f_t, x) − inf_{f∈F} (1/T) Σ_{t=1}^T φ(f, x_t) ≤ Rate¹(x_1, . . . , x_T) + Rate²(f_1, . . . , f_T)   (6)

which sandwiches the previous sequence of inequalities up to the sum of regret rates and implies near-optimality of f̄_T and x̄_T.
Lemma 4. Suppose both players employ the Optimistic Mirror Descent algorithm with, respectively, predictable sequences M_t^1 and M_t^2, 1-strongly convex functions R_1 on F (w.r.t. ‖·‖_F) and R_2 on X (w.r.t. ‖·‖_X), and fixed learning rates η and η′. Let {f_t} and {x_t} denote the primary sequences of the players while {g_t}, {y_t} denote the secondary. Then for any α, β > 0,

sup_{x∈X} φ(f̄_T, x) − inf_{f∈F} sup_{x∈X} φ(f, x)
  ≤ R_1²/(ηT) + (α/2T) Σ_{t=1}^T ‖∇_f φ(f_t, x_t) − M_t^1‖_{F*}² + (1/2αT) Σ_{t=1}^T ‖g_t − f_t‖_F² − (1/2ηT) Σ_{t=1}^T ( ‖g_t − f_t‖_F² + ‖g_{t−1} − f_t‖_F² )
  + R_2²/(η′T) + (β/2T) Σ_{t=1}^T ‖∇_x φ(f_t, x_t) − M_t^2‖_{X*}² + (1/2βT) Σ_{t=1}^T ‖y_t − x_t‖_X² − (1/2η′T) Σ_{t=1}^T ( ‖y_t − x_t‖_X² + ‖y_{t−1} − x_t‖_X² )   (7)

where R_1 and R_2 are such that D_{R_1}(f*, g_0) ≤ R_1² and D_{R_2}(x*, y_0) ≤ R_2², and f̄_T = (1/T) Σ_{t=1}^T f_t.
The proof of Lemma 4 is immediate from Lemma 1. We obtain the following corollary:
Corollary 5. Suppose φ : F × X → ℝ is Hölder smooth in the following sense:

‖∇_f φ(f, x) − ∇_f φ(g, x)‖_{F*} ≤ H_1 ‖f − g‖_F^δ,   ‖∇_f φ(f, x) − ∇_f φ(f, y)‖_{F*} ≤ H_2 ‖x − y‖_X^{δ′}
and
‖∇_x φ(f, x) − ∇_x φ(g, x)‖_{X*} ≤ H_4 ‖f − g‖_F^{γ′},   ‖∇_x φ(f, x) − ∇_x φ(f, y)‖_{X*} ≤ H_3 ‖x − y‖_X^γ.

Let σ = min{δ, δ′, γ, γ′} and H = max{H_1, H_2, H_3, H_4}. Suppose both players employ Optimistic Mirror Descent with M_t^1 = ∇_f φ(g_{t−1}, y_{t−1}) and M_t^2 = ∇_x φ(g_{t−1}, y_{t−1}), where {g_t} and {y_t} are the secondary sequences updated by the two algorithms, and with step sizes η = η′ = (R_1² + R_2²)^{(1−σ)/2} (2H)^{−1} (T/2)^{(σ−1)/2}. Then

sup_{x∈X} φ(f̄_T, x) − inf_{f∈F} sup_{x∈X} φ(f, x) ≤ 4H (R_1² + R_2²)^{(1+σ)/2} / T^{(1+σ)/2}.   (8)
As revealed in the proof of this corollary, the negative terms in (7), that come from an upper bound
on regret of Player I, in fact contribute to cancellations with positive terms in regret of Player II, and
vice versa. Such a coupling of the upper bounds on regret of the two players can be seen as leading
to faster rates under the appropriate assumptions, and this idea will be exploited to a great extent in
the proofs of the next section.
4 Zero-sum Game and Uncoupled Dynamics
The notions of a zero-sum matrix game and a minimax equilibrium are arguably the most basic and
important notions of game theory. The tight connection between linear programming and minimax
equilibrium suggests that there might be simple dynamics that can lead the two players of the game
to eventually converge to the equilibrium value. Existence of such simple or natural dynamics is
of interest in behavioral economics, where one asks whether agents can discover static solution
concepts of the game iteratively and without extensive communication.
More formally, let A ∈ [−1, 1]^{n×m} be a matrix with bounded entries. The two players aim to find a pair of near-optimal mixed strategies (f̂, x̂) ∈ Δ_n × Δ_m such that f̂ᵀAx̂ is close to the minimax value min_{f∈Δ_n} max_{x∈Δ_m} fᵀAx, where Δ_n is the probability simplex over n actions. Of course, this is a particular form of the saddle point problem considered in the previous section, with φ(f, x) = fᵀAx. It is well-known (and follows immediately from (6)) that the players can compute near-optimal strategies by simply playing no-regret algorithms [7]. More precisely, on round t, the players I and II "predict" the mixed strategies f_t and x_t and observe Ax_t and f_tᵀA, respectively.
While black-box regret minimization algorithms, such as Exponential Weights, immediately yield O(T^{−1/2}) convergence rates, Daskalakis et al [6] asked whether faster methods exist. To make the problem well-posed, it is required that the two players are strongly uncoupled: neither A nor the number of available actions of the opponent is known to either player, no "funny bit arithmetic" is allowed, and memory storage of each player allows only for a constant number of payoff vectors. The authors of [6] exhibited a near-optimal algorithm that, if used by both players, yields a pair of mixed strategies that constitutes an O( log(m+n)(log T + (log(m+n))^{3/2}) / T )-approximate minimax equilibrium. Furthermore, the method has a regret bound of the same order as Exponential Weights when faced with an arbitrary sequence. The algorithm in [6] is an application of the excessive gap technique of Nesterov, and requires careful choreography and interleaving of rounds between the two non-communicating players. The authors, therefore, asked whether a simple algorithm (e.g. a modification of Exponential Weights) can in fact achieve the same result. We answer this in the affirmative. While a direct application of Mirror Prox does not yield the result (and also does not provide strong decoupling), below we show that a modification of Optimistic Mirror Descent achieves the goal. Furthermore, by choosing the step size adaptively, the same method guarantees the typical O(T^{−1/2}) regret if not faced with a compliant player, thus ensuring robustness.
In Section 4.1, we analyze the "first-order information" version of the problem, as described above: upon playing the respective mixed strategies f_t and x_t on round t, Player I observes Ax_t and Player II observes f_tᵀA. Then, in Section 4.2, we consider an interesting extension to partial information, whereby the players submit their moves f_t, x_t but only observe the real value f_tᵀAx_t. Recall that in both cases the matrix A is not known to the players.
4.1 First-Order Information
Consider the following simple algorithm. Initialize f_0 = g_0′ ∈ Δ_n and x_0 = y_0′ ∈ Δ_m to be uniform distributions, set β = 1/T², and proceed as follows:

On round t, Player I performs

  Play f_t and observe Ax_t
  Update g_t(i) ∝ g′_{t−1}(i) exp{−η_t [Ax_t]_i},   g′_t = (1 − β) g_t + (β/n) 1_n
         f_{t+1}(i) ∝ g′_t(i) exp{−η_{t+1} [Ax_t]_i}

while simultaneously Player II performs

  Play x_t and observe f_tᵀA
  Update y_t(i) ∝ y′_{t−1}(i) exp{η′_t [f_tᵀA]_i},   y′_t = (1 − β) y_t + (β/m) 1_m
         x_{t+1}(i) ∝ y′_t(i) exp{η′_{t+1} [f_tᵀA]_i}
Here, 1_n ∈ ℝⁿ is a vector of all ones and both [b]_i and b(i) refer to the i-th coordinate of a vector b. Other than the "mixing in" of the uniform distribution, the algorithm for both players is simply the Optimistic Mirror Descent with the (negative) entropy function. In fact, the step of mixing in the uniform distribution is only needed when some coordinate of g_t (resp., y_t) is smaller than 1/(nT²). Furthermore, this step is also not needed if none of the players deviate from the prescribed method. In such a case, the resulting algorithm is simply the constant step-size Exponential Weights f_t(i) ∝ exp{−η Σ_{s=1}^{t−2} [Ax_s]_i − 2η [Ax_{t−1}]_i}, but with a factor 2 in front of the latest loss vector!
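As an illustration, the self-play dynamics can be sketched as below, with two deliberate simplifications relative to the text: a fixed step size in place of the adaptive η_t, and no mixing-in step. The game matrix and all parameters are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(2)
n, m, T, eta = 4, 5, 3000, 0.1
A = rng.uniform(-1, 1, size=(n, m))

def mw_step(w, loss, step):
    """Multiplicative-weights step: w(i) <- w(i) * exp(-step * loss_i), normalized."""
    v = w * np.exp(-step * (loss - loss.min()))  # shift for numerical stability
    return v / v.sum()

g = np.full(n, 1.0 / n)      # secondary sequences, uniform start
y = np.full(m, 1.0 / m)
M1 = np.zeros(n)             # prediction M_t^1 = A x_{t-1}
M2 = np.zeros(m)             # prediction M_t^2 = f_{t-1}' A
f_sum, x_sum = np.zeros(n), np.zeros(m)
for _ in range(T):
    f = mw_step(g, M1, eta)          # optimistic primary steps
    x = mw_step(y, -M2, eta)         # Player II ascends, so negate
    col, row = A @ x, f @ A          # observed payoff vectors
    g = mw_step(g, col, eta)         # secondary steps with the true payoffs
    y = mw_step(y, -row, eta)
    M1, M2 = col, row
    f_sum += f
    x_sum += x

f_bar, x_bar = f_sum / T, x_sum / T
# duality gap of the averaged strategies: max_x fbar'Ax - min_f f'A xbar
gap = (f_bar @ A).max() - (A @ x_bar).min()
print(gap)  # small: the averaged pair is a near-equilibrium
```

Note that each player only uses its own observed payoff vector and last round's copy of it; no other coupling between the two loops is needed.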
Proposition 6. Let A ∈ [−1, 1]^{n×m}, F = Δ_n, X = Δ_m. If both players use the above algorithm with, respectively, M_t^1 = Ax_{t−1} and M_t^2 = f_{t−1}ᵀA, and the adaptive step sizes

η_t = min{ log(nT) ( √(Σ_{i=1}^{t−1} ‖Ax_i − Ax_{i−1}‖_∞²) + √(Σ_{i=1}^{t−2} ‖Ax_i − Ax_{i−1}‖_∞²) )^{−1}, 1/11 }

and

η′_t = min{ log(mT) ( √(Σ_{i=1}^{t−1} ‖f_iᵀA − f_{i−1}ᵀA‖_∞²) + √(Σ_{i=1}^{t−2} ‖f_iᵀA − f_{i−1}ᵀA‖_∞²) )^{−1}, 1/11 }

respectively, then the pair (f̄_T, x̄_T) is an O( (log m + log n + log T)/T )-approximate minimax equilibrium. Furthermore, if only one player (say, Player I) follows the above algorithm, her regret against any sequence x_1, . . . , x_T of plays is

O( (log(nT)/T) ( √(Σ_{t=1}^T ‖Ax_t − Ax_{t−1}‖_∞²) + 1 ) ).   (9)

In particular, this implies the worst-case regret of O(log(nT)/√T) in the general setting of online linear optimization.
We remark that (9) can give intermediate rates for regret in the case that the second player deviates from the prescribed strategy but produces "stable" moves. For instance, if the second player employs a mirror descent algorithm (or Follow the Regularized Leader / Exponential Weights method) with step size η, one can typically show stability ‖x_t − x_{t−1}‖ = O(η). In this case, (9) yields the rate O(η log T/√T) for the first player. A typical setting of η ∝ T^{−1/2} for the second player still ensures the O(log T/T) regret for the first player.
Let us finish with a technical remark. The reason for the extra step of "mixing in" the uniform distribution stems from the goal of having an adaptive and robust method that still attains O(T^{−1/2}) regret if the other player deviates from using the algorithm. If one is only interested in the dynamics when both players cooperate, this step is not necessary, and in this case the extraneous log T factor disappears from the above bound, leading to the O( (log n + log m)/T ) convergence. On the technical side, the need for the extra step is the following. The adaptive step size result of Corollary 2 involves the term R_max² = sup_g D_{R_1}(f*, g), which is potentially infinite for the negative entropy function R_1. It is possible that the doubling trick or the analysis of Auer et al [2] (who encountered the same problem for the Exponential Weights algorithm) can remove the extra log T factor while still preserving the regret minimization property. We also remark that R_max is small when R_1 is instead the p-norm; hence, the use of this regularizer avoids the extraneous logarithmic in T factor while still preserving the logarithmic dependence on n and m. However, projection onto the simplex under the p-norm is not as elegant as the Exponential Weights update.
4.2 Partial Information
We now turn to the partial (or, zero-th order) information model. Recall that the matrix A is not known to the players, yet we are interested in finding ε-optimal minimax strategies. On each round, the two players choose mixed strategies f_t ∈ Δ_n and x_t ∈ Δ_m, respectively, and observe f_tᵀAx_t. Now the question is, how many such observations do we need to get to an ε-optimal minimax strategy? Can this be done while still ensuring the usual no-regret rate?

The specific setting we consider below requires that on each round t, the two players play four times, and that these four plays are δ-close to each other (that is, ‖f_t^i − f_t^j‖_1 ≤ δ for i, j ∈ {1, . . . , 4}). Interestingly, up to logarithmic factors, the fast rate of the previous section is possible even in this scenario, but we do require the knowledge of the number of actions of the opposing player (or, an upper bound on this number). We leave it as an open problem the question of whether one can attain the 1/T-type rate with only one play per round.
Player I
u_1, . . . , u_{n−1}: orthonormal basis of Δ_n
Initialize g_1, f_1 = (1/n) 1_n; draw i_0 ~ Unif([n − 1])
At time t = 1 to T:
  Play f_t; draw i_t ~ Unif([n − 1])
  Observe:
    r_t⁺ = (f_t + δ u_{i_{t−1}})ᵀ A x_t
    r_t⁻ = (f_t − δ u_{i_{t−1}})ᵀ A x_t
    r̄_t⁺ = (f_t + δ u_{i_t})ᵀ A x_t
    r̄_t⁻ = (f_t − δ u_{i_t})ᵀ A x_t
  Build estimates:
    â_t = (n/2δ) (r_t⁺ − r_t⁻) u_{i_{t−1}}
    ā_t = (n/2δ) (r̄_t⁺ − r̄_t⁻) u_{i_t}
  Update:
    g_t(i) ∝ g′_{t−1}(i) exp{−η_t â_t(i)}
    g′_t = (1 − β) g_t + (β/n) 1_n
    f_{t+1}(i) ∝ g′_t(i) exp{−η_{t+1} ā_t(i)}
End

Player II
v_1, . . . , v_{m−1}: orthonormal basis of Δ_m
Initialize y_1, x_1 = (1/m) 1_m; draw j_0 ~ Unif([m − 1])
At time t = 1 to T:
  Play x_t; draw j_t ~ Unif([m − 1])
  Observe:
    s_t⁺ = −f_tᵀ A (x_t + δ v_{j_{t−1}})
    s_t⁻ = −f_tᵀ A (x_t − δ v_{j_{t−1}})
    s̄_t⁺ = −f_tᵀ A (x_t + δ v_{j_t})
    s̄_t⁻ = −f_tᵀ A (x_t − δ v_{j_t})
  Build estimates:
    b̂_t = (m/2δ) (s_t⁺ − s_t⁻) v_{j_{t−1}}
    b̄_t = (m/2δ) (s̄_t⁺ − s̄_t⁻) v_{j_t}
  Update:
    y_t(i) ∝ y′_{t−1}(i) exp{−η′_t b̂_t(i)}
    y′_t = (1 − β) y_t + (β/m) 1_m
    x_{t+1}(i) ∝ y′_t(i) exp{−η′_{t+1} b̄_t(i)}
End
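A quick sanity check of the two-point estimates above, using invented numbers: since fᵀAx is linear in f, the symmetric difference quotient recovers the directional payoff uᵀAx exactly, for any perturbation size δ > 0 (up to floating-point error).

```python
import numpy as np

rng = np.random.default_rng(3)
n, m = 6, 4
A = rng.uniform(-1, 1, size=(n, m))
f = rng.dirichlet(np.ones(n))    # current mixed strategies
x = rng.dirichlet(np.ones(m))
u = rng.standard_normal(n)
u -= u.mean()                    # direction tangent to the simplex (sums to zero)
u /= np.linalg.norm(u)
delta = 1e-3

r_plus = (f + delta * u) @ A @ x
r_minus = (f - delta * u) @ A @ x
estimate = (r_plus - r_minus) / (2 * delta)  # two-point estimate along u
exact = u @ A @ x
print(abs(estimate - exact))  # ~0 up to floating-point error
```

This is why δ can be taken extremely small in Lemma 7 below: no bias is introduced by the perturbation, only the scaling of the estimate changes.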
Lemma 7. Let A ∈ [−1, 1]^{n×m}, F = Δ_n, X = Δ_m, let δ be small enough (e.g. exponentially small in m, n, T), and let β = 1/T². If both players use the above algorithms with the adaptive step sizes

η_t = min{ log(nT) ( √(Σ_{i=1}^{t−1} ‖â_i − â_{i−1}‖_∞²) + √(Σ_{i=1}^{t−2} ‖â_i − â_{i−1}‖_∞²) )^{−1}, ‖ā_{t−1} − ā_{t−2}‖_∞ / (28 m log(mT)) }

and

η′_t = min{ log(mT) ( √(Σ_{i=1}^{t−1} ‖b̂_i − b̂_{i−1}‖_∞²) + √(Σ_{i=1}^{t−2} ‖b̂_i − b̂_{i−1}‖_∞²) )^{−1}, ‖b̄_{t−1} − b̄_{t−2}‖_∞ / (28 n log(nT)) }

respectively, then the pair (f̄_T, x̄_T) is an

O( ( m √(log(nT) log(mT)) + n √(log(mT) log(nT)) ) / T )

-approximate minimax equilibrium. Furthermore, if only one player (say, Player I) follows the above algorithm, her regret against any sequence x_1, . . . , x_T of plays is bounded by

O( ( m log(mT) log(nT) + n log(nT) √(Σ_{t=1}^T ‖x_t − x_{t−1}‖_1²) ) / T ).

We leave it as an open problem to find an algorithm that attains the 1/T-type rate when both players only observe the value e_iᵀ A e_j = A_{i,j} upon drawing pure actions i, j from their respective mixed strategies f_t, x_t. We hypothesize a rate better than T^{−1/2} is not possible in this scenario.
5 Approximate Smooth Convex Programming
In this section we show how one can use the structured optimization results from Section 3 for approximately solving convex programming problems. Specifically, consider the optimization problem

argmax_{f∈G} cᵀf   s.t.   ∀i ∈ [d], G_i(f) ≤ 1   (10)

where G is a convex set and each G_i is an H-smooth convex function. Let the optimal value of the above optimization problem be given by F* > 0, and without loss of generality assume F* is known (one typically performs binary search if it is not known). Define the sets F = {f : f ∈ G, cᵀf = F*} and X = Δ_d. The convex programming problem in (10) can now be reformulated as

argmin_{f∈F} max_{i∈[d]} G_i(f) = argmin_{f∈F} sup_{x∈X} Σ_{i=1}^d x(i) G_i(f).   (11)
This problem is in the saddle-point form, as studied earlier in the paper. We may think of the first player as aiming to minimize the above expression over F, while the second player maximizes over a mixture of constraints with the aim of violating at least one of them.

Lemma 8. Fix γ, ε > 0. Assume there exists f_0 ∈ G such that cᵀf_0 ≥ 0 and for every i ∈ [d], G_i(f_0) ≤ 1 − γ. Suppose each G_i is 1-Lipschitz over F. Consider the solution

f̂_T = (1 − α) f̄_T + α f_0,   where α = ε/(γ + ε)

and f̄_T = (1/T) Σ_{t=1}^T f_t ∈ F is the average of the trajectory of the procedure in Lemma 4 for the optimization problem (11). Let R_1(·) = ½‖·‖_2² and R_2 be the entropy function. Further let B be a known constant such that B ≥ ‖f* − g_0‖_2, where g_0 ∈ F is some initialization and f* ∈ F is the (unknown) solution to the optimization problem. Set η = argmin_{η < H^{−1}} ( B²/η + η log d/(1 − ηH) ), η′ = 1/η − H, and

M_t^1 = Σ_{i=1}^d y_{t−1}(i) ∇G_i(g_{t−1})   and   M_t^2 = ( G_1(g_{t−1}), . . . , G_d(g_{t−1}) ).

Let the number of iterations T be such that

T > ( (γ + ε)/(γε) ) inf_{η < H^{−1}} ( B²/η + η log d/(1 − ηH) ).

We then have that f̂_T ∈ G satisfies all d constraints and is ε/(γ+ε)-approximate, that is,

cᵀ f̂_T ≥ ( 1 − ε/(γ + ε) ) F*.
Lemma 8 tells us that using the predictable sequences approach for the two players, one can obtain an ε/(γ+ε)-approximate solution to the smooth convex programming problem in a number of iterations at most of order 1/ε. If T_1 (resp. T_2) is the time complexity of a single update of the predictable sequence algorithm of Player I (resp. Player II), then the time complexity of the overall procedure is O((T_1 + T_2)/ε).
5.1 Application to Max-Flow
We now apply the above result to the problem of finding Max Flow between a source and a sink
in a network, such that the capacity constraint on each edge is satisfied. For simplicity, consider a
network where each edge has capacity 1 (the method can be easily extended to the case of varying
capacity). Suppose the number of edges d in the network is the same order as number of vertices in
the network. The Max Flow problem can be seen as an instance of a convex (linear) programming
problem, and we apply the proposed algorithm for structured optimization to obtain an approximate
solution.
For the Max Flow problem, the sets G and F are given by sets of linear equalities. Further, if we use the squared Euclidean norm as the regularizer for the flow player, then the projection step can be performed in O(d) time using the conjugate gradient method: we are simply minimizing a squared Euclidean norm subject to equality constraints, which is well conditioned. Hence T₁ = O(d). Similarly, the Exponential Weights update has time complexity O(d), as there are order-d constraints, and so the overall time complexity to produce an ε-approximate solution is O(nd), where n is the number of iterations of the proposed procedure.
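To make the constraint player's update concrete, here is a minimal sketch of an optimistic (predictable-sequence) exponential-weights step. The function names, the choice of `eta`, and the toy losses below are our own illustration, not the paper's exact instantiation:

```python
import numpy as np

def optimistic_ew(losses, predictions, eta):
    """Optimistic exponential weights: mirror descent with the KL divergence
    on the simplex.  At round t the player first incorporates the predicted
    loss M_t and plays y_t, then corrects the secondary iterate with the
    true loss l_t.  Returns the played distributions y_1, ..., y_T."""
    d = losses.shape[1]
    y_hat = np.full(d, 1.0 / d)              # secondary ("lazy") iterate
    played = []
    for l_t, m_t in zip(losses, predictions):
        y_t = y_hat * np.exp(-eta * m_t)     # optimistic step using M_t
        y_t /= y_t.sum()
        played.append(y_t)
        y_hat = y_hat * np.exp(-eta * l_t)   # correction with the true loss
        y_hat /= y_hat.sum()
    return np.array(played)
```

When the losses vary slowly, the natural prediction M_t = l_{t−1} is accurate, which is exactly the smoothness-based predictability exploited above.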
Once again, we shall assume that we know the value of the maximum flow F* (otherwise, we can use binary search to obtain it).
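One illustrative way to organize that binary search (our own sketch; `is_feasible` stands in for a routine that runs the solver of Lemma 8 against a candidate target value):

```python
def binary_search_max_flow(is_feasible, upper, tol=1e-4):
    """Binary search for the unknown optimal value F*.  `is_feasible(F)`
    should return True iff a flow of value F satisfying all capacity
    constraints exists (e.g., as certified by the approximate solver).
    Illustrative helper, not taken from the paper."""
    lo, hi = 0.0, float(upper)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if is_feasible(mid):
            lo = mid          # a flow of value mid exists; search higher
        else:
            hi = mid          # infeasible; search lower
    return lo
```

Each probe costs one run of the approximate solver, so the search adds only a logarithmic factor.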
Corollary 9. Applying the procedure for smooth convex programming from Lemma 8 to the Max Flow problem with f₀ = 0 ∈ G the zero flow, the time complexity to compute an ε-approximate Max Flow is bounded by

    O( d^{3/2} √(log d) / ε ).
This time complexity matches the known result from [8], but with a much simpler procedure (gradient descent for the flow player and Exponential Weights for the constraints). It would be interesting to see whether the techniques presented here can be used to improve the dependence on d to d^{4/3} or better while maintaining the 1/ε dependence. While the result of [5] has the improved d^{4/3} dependence, the complexity in terms of ε is much worse.
6 Discussion
We close this paper with a discussion. As we showed, the notion of using extra information about the
sequence is a powerful tool with applications in optimization, convex programming, game theory, to
name a few. All the applications considered in this paper, however, used some notion of smoothness
for constructing the predictable process Mt . An interesting direction of further research is to isolate
more general conditions under which the next gradient is predictable, perhaps even when the functions are not smooth in any sense. For instance one could use techniques from bundle methods to
further restrict the set of possible gradients the function being optimized can have at various points
in the feasible set. This could then be used to solve for the right predictable sequence to use so as
to optimize the bounds. Using this notion of selecting predictable sequences one can hope to derive
adaptive optimization procedures that in practice can provide rapid convergence.
Acknowledgements: We thank Vianney Perchet for insightful discussions. We gratefully acknowledge the support of NSF under grants CAREER DMS-0954737 and CCF-1116928, as well as Dean's Research Fund.
References
[1] S. Arora, E. Hazan, and S. Kale. The multiplicative weights update method: A meta-algorithm and applications. Theory of Computing, 8(1):121–164, 2012.
[2] P. Auer, N. Cesa-Bianchi, and C. Gentile. Adaptive and self-confident on-line learning algorithms. Journal of Computer and System Sciences, 64(1):48–75, 2002.
[3] N. Cesa-Bianchi and G. Lugosi. Prediction, Learning, and Games. Cambridge University Press, 2006.
[4] C.-K. Chiang, T. Yang, C.-J. Lee, M. Mahdavi, C.-J. Lu, R. Jin, and S. Zhu. Online optimization with gradual variations. In COLT, 2012.
[5] P. Christiano, J. A. Kelner, A. Madry, D. A. Spielman, and S.-H. Teng. Electrical flows, Laplacian systems, and faster approximation of maximum flow in undirected graphs. In Proceedings of the 43rd Annual ACM Symposium on Theory of Computing, pages 273–282. ACM, 2011.
[6] C. Daskalakis, A. Deckelbaum, and A. Kim. Near-optimal no-regret algorithms for zero-sum games. In Proceedings of the Twenty-Second Annual ACM-SIAM Symposium on Discrete Algorithms, pages 235–254. SIAM, 2011.
[7] Y. Freund and R. Schapire. Adaptive game playing using multiplicative weights. Games and Economic Behavior, 29(1):79–103, 1999.
[8] A. Goldberg and S. Rao. Beyond the flow decomposition barrier. Journal of the ACM (JACM), 45(5):783–797, 1998.
[9] A. Nemirovski. Prox-method with rate of convergence O(1/t) for variational inequalities with Lipschitz continuous monotone operators and smooth convex-concave saddle point problems. SIAM Journal on Optimization, 15(1):229–251, 2004.
[10] Y. Nesterov. Smooth minimization of non-smooth functions. Mathematical Programming, 103(1):127–152, 2005.
[11] A. Rakhlin and K. Sridharan. Online learning with predictable sequences. In Proceedings of the 26th Annual Conference on Learning Theory (COLT), 2013.
Minimax Optimal Algorithms
for Unconstrained Linear Optimization
H. Brendan McMahan
Google Research
Seattle, WA
[email protected]
Jacob Abernethy∗
Computer Science and Engineering
University of Michigan
[email protected]
Abstract
We design and analyze minimax-optimal algorithms for online linear optimization games where the player's choice is unconstrained. The player strives to minimize regret, the difference between his loss and the loss of a post-hoc benchmark strategy. While the standard benchmark is the loss of the best strategy chosen from a bounded comparator set, we consider a very broad range of benchmark functions. The problem is cast as a sequential multi-stage zero-sum game, and we give a thorough analysis of the minimax behavior of the game, providing characterizations for the value of the game, as well as both the player's and the adversary's optimal strategy. We show how these objects can be computed efficiently under certain circumstances, and by selecting an appropriate benchmark, we construct a novel hedging strategy for an unconstrained betting game.
1 Introduction
Minimax analysis has recently been shown to be a powerful tool for the construction of online
learning algorithms [Rakhlin et al., 2012]. Generally, these results use bounds on the value of
the game (often based on the sequential Rademacher complexity) in order to construct efficient
algorithms. In this work, we show that when the learner is unconstrained, it is often possible to
efficiently compute an exact minimax strategy for both the player and nature. Moreover, with our
tools we can analyze a much broader range of problems than have been previously considered.
We consider a game where on each round t = 1, . . . , T, first the learner selects x_t ∈ ℝⁿ, and then an adversary chooses g_t ∈ G ⊆ ℝⁿ, and the learner suffers loss g_t · x_t. The goal of the learner is to minimize regret, that is, loss in excess of that achieved by a post-hoc benchmark strategy. We define

    Regret = Loss − (Benchmark Loss) = Σ_{t=1}^T g_t · x_t − L(g_1, . . . , g_T)   (1)
as the regret with respect to benchmark performance L (the L intended will be clear from context).
The standard definition of regret arises from the choice

    L(g_1, . . . , g_T) = inf_{x∈X} g_{1:T} · x = inf_{x∈ℝⁿ} g_{1:T} · x + I(x ∈ X),   (2)

where I(condition) is the indicator function: it returns 0 when condition holds, and returns ∞ otherwise. The above choice of L represents the loss of the best fixed point x in the bounded set X. Throughout we shall write g_{1:t} = Σ_{s=1}^t g_s for a sum of scalars or vectors. When L depends only on the sum G ≜ g_{1:T} we write L(G).
∗Work performed while the author was in the CIS Department at the University of Pennsylvania and funded by a Simons Postdoctoral Fellowship.
In the present work we shall consider a broad notion of regret in which, for example, L is defined not in terms of a "best in hindsight" comparator but instead in terms of a "penalized best in hindsight" objective. Let Ψ be some penalty function, and consider

    L(G) = min_x G · x + Ψ(x).   (3)

This is a direct generalization of the usual comparator notion, which takes Ψ(x) = I(x ∈ X).
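A quick numerical illustration of Eq. (3) (our own sketch, with a grid standing in for the exact minimization, and `psi` denoting the penalty): the hard penalty recovers L(G) = −|G|, while a quadratic penalty yields a smooth benchmark.

```python
import math

def benchmark_L(G, psi, grid):
    """Evaluate L(G) = min_x { G*x + psi(x) } approximately over a grid."""
    return min(G * x + psi(x) for x in grid)

# Hard penalty I(x in [-1, 1]) and a soft quadratic penalty (lambda = 1).
indicator = lambda x: 0.0 if abs(x) <= 1 else math.inf
quadratic = lambda x: 0.5 * x * x
```

With the indicator penalty the minimizer sits on the boundary of the comparator set; with the quadratic penalty it moves smoothly with G.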
We view this interaction as a sequential zero-sum game played over T rounds, where the player strives to minimize Eq. (1), and the adversary attempts to maximize it. We study the value of this game, defined as

    V^T ≜ inf_{x_1∈ℝⁿ} sup_{g_1∈G} . . . inf_{x_T∈ℝⁿ} sup_{g_T∈G} ( Σ_{t=1}^T g_t · x_t − L(g_1, . . . , g_T) ).   (4)
With this in mind, we can describe the primary contributions of the present paper:
1. We provide a characterization of the value of the game Eq. (4) in terms of the supremum over the expected value of a function of a martingale difference sequence. This will be made more explicit in Section 2.
2. We provide a method for computing the player's minimax optimal (deterministic) strategy in terms of a "discrete derivative." Similarly, we show how to describe the adversary's optimal randomized strategy in terms of martingale differences.
3. For "coordinate-decomposable" games we give a natural and efficiently computable description of the value of the game and the player's optimal strategy.
4. In Section 3, we consider several benchmark functions L, defined in Eq. (3) via a penalty function Ψ, which lead to interesting and surprising optimal algorithms; we also exactly compute the values of these games. Figure 1 summarizes these applications. In particular, we show that constant-step-size gradient descent is minimax optimal for a quadratic Ψ, and an exponential L leads to a bounded-loss hedging algorithm that can still yield exponential reward on "easy" sequences.
Applications The primary contributions of this paper are to the theory. Nevertheless, it is worth
pausing to emphasize that the framework of "unconstrained online optimization" is a fundamental template for (and strongly motivated by) several online learning settings, and the results we develop are applicable to a wide range of commonly studied algorithmic problems.
for linear pattern recognition, the Perceptron, can be seen as an algorithm for unconstrained linear
optimization. Methods for training a linear SVM or a logistic regression model, such as stochastic
gradient descent or the Pegasos algorithm [Shalev-Shwartz et al., 2011], are unconstrained optimization algorithms. Finally, there has been recent work in the pricing of options and other financial
derivatives [DeMarzo et al., 2006, Abernethy et al., 2012] that can be described exactly in terms of
a repeated game which fits nicely into our framework.
We also wish to emphasize that the algorithm of Section 3.2 is both practical and easily implementable: for a multi-dimensional problem one needs to only track the sum of gradients for each
coordinate (similar to Dual Averaging), and compute Eq. (12) for each coordinate to derive the
appropriate strategy. The algorithm provides us with a tool for making potentially unconstrained
bets/investments, but as we discuss it also leads to interesting regret bounds.
Related Work Regret-based analysis has received extensive attention in recent years; see Shalev-Shwartz [2012] and Cesa-Bianchi and Lugosi [2006] for an introduction. The analysis of alternative notions of regret is also not new. Vovk [2001] gives bounds relative to benchmarks similar to Eq. (3),
though for different problems and not in the minimax setting. In the expert setting, there has been
much work on tracking a shifting sequence of experts rather than the single best expert; see Koolen
et al. [2012] and references therein. Zinkevich [2003] considers drifting comparators in an online
convex optimization framework. This notion can be expressed by an appropriate L(g1 , . . . , gT ), but
now the order of the gradients matters. Merhav et al. [2006] and Dekel et al. [2012] consider the
stronger notion of policy regret in the online experts and bandit settings, respectively. Stoltz [2011]
also considers some alternative notions of regret. For investing scenarios, Agarwal et al. [2006]
2
setting               | L(G)          | Ψ(x)                      | minimax value | update
soft feasible set     | −G²/(2λ)      | (λ/2)x²                   | T/(2λ)        | x_{t+1} = −(1/λ) g_{1:t}
standard regret       | −|G|          | I(|x| ≤ 1)                | ≈ √(2T/π)     | Eq. (14)
bounded-loss betting  | −exp(G/√T)    | −√T x log(−√T x) + √T x   | ≈ √e − 1      | Eq. (12)
Figure 1: Summary of specific online linear games considered in Section 3. Results are stated for the one-dimensional problem where g_t ∈ [−1, 1]; Corollary 5 gives an extension to n dimensions. The benchmark L is given as a function of G = g_{1:T}. The standard notion of regret corresponds to L(G) = min_{x∈[−1,1]} g_{1:T} · x = −|G|. The benchmark functions can alternatively be derived from a suitable penalty Ψ on comparator points x, so L(G) = min_x Gx + Ψ(x).
and Hazan and Kale [2009] consider regret with respect to the best constant-rebalanced portfolio. Our algorithm in Section 3.2 applies to similar problems, but does not require a "no junk bonds" assumption, and is in fact minimax optimal for a natural benchmark.
Existing algorithms do offer bounds for unconstrained problems, generally of the form ‖x*‖²/η + η Σ_t ‖g_t‖². However, such bounds can only guarantee no-regret when an upper bound R on ‖x*‖ is known in advance and used to tune the parameter η. If one knows such an R, however, the problem is no longer truly unconstrained. The only algorithms we know that avoid this problem are those of Streeter and McMahan [2012], and the minimax-optimal algorithm we introduce in Sec. 3.2; these algorithms guarantee Regret ≤ O( R √( T log((1 + R)T) ) ) for any R > 0.
The field has seen a number of minimax approaches to online learning. Abernethy and Warmuth
[2010] and Abernethy et al. [2008b] give the optimal behavior for several zero-sum games against
a budgeted adversary. Section 3.3 studies the online linear game of Abernethy et al. [2008a] under
different assumptions, and we adapt some techniques from Abernethy et al. [2009, 2012]; the latter
work also involves analyzing an unconstrained player. Rakhlin et al. [2012] utilizes powerful tools
for non-constructive analysis of online learning as a technique to design algorithms; our work differs
in that we focus on cases where the exact minimax strategy can be computed.
Notions of Regret The standard notion of regret corresponds to a hard penalty Ψ(x) = I(x ∈ X). Such a definition makes sense when the player by definition must select a strategy from some bounded set, for example a probability from the n-dimensional simplex, or a distribution on paths in a graph. However, in contexts such as machine learning where any x ∈ ℝⁿ corresponds to a valid model, such a hard constraint is difficult to justify; while any x ∈ ℝⁿ is technically feasible, in order to prove regret bounds we compare to a much more restrictive set. As an alternative, in Sections 3.1 and 3.2 we propose soft penalty functions that encode the belief that points near the origin are more likely to be optimal (we can always re-center the problem to match our beliefs in this regard), but do not rule out any x ∈ ℝⁿ a priori.
Thus, one of our contributions is showing that interesting results can be obtained by choosing L differently than in Eq. (2). The player cannot do well in terms of the absolute loss Σ_t g_t · x_t for all sequences g_1, . . . , g_T, but she can do better on some sequences at the expense of doing worse on others. The benchmark L makes this notion precise: sequences for which L(g_1, . . . , g_T) is large and negative are those on which the player desires good performance, at the expense of allowing more loss (in absolute terms) on sequences where L(g_1, . . . , g_T) is large and positive. The value of the game V^T tells us to what extent any online algorithm can hope to match the benchmark L.
2 General Unconstrained Linear Optimization
In this section we develop general results on the unconstrained linear optimization problem. We start by analyzing (4) in greater detail, and give tools for computing the regret value V^T in such games. We show that in certain cases the computation of the minimax value can be greatly simplified.

Throughout we will assume that the function L is concave in each of its arguments (though not necessarily jointly concave) and bounded on G^T. We also include the following assumptions on the set G. First, we assume that either G is a polytope or, more generally, that ConvexHull(G) is a full-rank polytope in ℝⁿ. This is not strictly necessary but is convenient for the analysis; any bounded convex set in ℝⁿ can be approximated to arbitrary precision with a polytope. We also make the necessary assumption that ConvexHull(G) contains the origin in its interior. We let G′ be the set of "corners" of G, that is, G′ = {g¹, . . . , g^m}, and hence ConvexHull(G) = ConvexHull(G′).
We are also concerned with the conditional value of the game, V_t, given that x_1, . . . , x_t and g_1, . . . , g_t have already been played. That is, the Regret when we fix the plays on the first t rounds, and then assume minimax optimal play for rounds t + 1 through T. However, following the approach of Rakhlin et al. [2012], we omit the terms Σ_{s=1}^t x_s · g_s from Eq. (4). We can view this as cost that the learner has already paid, and neither that cost nor the specific previous plays of the learner impact the value of the remaining terms in Eq. (1). Thus, we define

    V_t(g_1, . . . , g_t) = inf_{x_{t+1}∈ℝⁿ} sup_{g_{t+1}∈G} . . . inf_{x_T∈ℝⁿ} sup_{g_T∈G} ( Σ_{s=t+1}^T g_s · x_s − L(g_1, . . . , g_T) ).   (5)

Note that the conditional value of the game before anything has been played, V_0(), is exactly V^T.
The martingale characterization of the game The fundamental tool used in the rest of the paper is the following characterization of the conditional value of the game:

Theorem 1. For every t and every sequence g_1, . . . , g_t ∈ G, we can write the conditional value of the game as

    V_t(g_1, . . . , g_t) = max_{G∈Δ(G′): E[G]=0} E[ V_{t+1}(g_1, . . . , g_t, G) ],

where Δ(G′) is the set of random variables on G′. Moreover, for all t the function V_t is convex in each of its coordinates and bounded.

All proofs omitted from the body of the paper can be found in the appendix or the extended version of this paper.

Let M_T(G) be the set of T-length martingale difference sequences on G′, that is, the set of all sequences of random variables (G_1, . . . , G_T), with G_t taking values in G′, which satisfy E[G_t | G_1, . . . , G_{t−1}] = 0 for all t = 1, . . . , T. Then, we immediately have the following:
Corollary 2. We can write

    V^T = max_{(G_1,...,G_T)∈M_T(G′)} E[ −L(G_1, . . . , G_T) ],

with the analogous expression holding for the conditional value of the game.
Characterization of optimal strategies The result above gives a nice expression for the value of the game V^T, but unfortunately it does not lead directly to a strategy for the player. We now dig a bit deeper and produce a characterization of the optimal player behavior. This is achieved by analyzing a simple one-round zero-sum game. As before, we assume G is a bounded subset of ℝⁿ whose convex hull is a polytope whose interior contains the origin 0. Assume we are given some convex function f defined and bounded on all of ConvexHull(G). We consider the following:

    V = inf_{x∈ℝⁿ} sup_{g∈G} x · g + f(g).   (6)

Theorem 3. There exists a set of n + 1 distinct points {g¹, . . . , g^{n+1}} ⊆ G whose convex hull is of full rank, and a distribution α ∈ Δ_{n+1} satisfying Σ_{i=1}^{n+1} α_i g^i = 0, such that V = Σ_{i=1}^{n+1} α_i f(g^i). Moreover, an optimal choice for the infimum in (6) is the gradient of the unique linear interpolation of the pairs {(g¹, f(g¹)), . . . , (g^{n+1}, f(g^{n+1}))}.

The theorem makes a useful point about determining the player's optimal strategy for games of this form. If the player can determine a full-rank set of "best responses" {g¹, . . . , g^{n+1}} to his optimal x*, each of which should be a corner of the polytope G, then we know that x* must be a "discrete gradient" of the function f around 0. That is, if the size of G is small relative to the curvature of f, then an approximation to ∇f(0) is the linear interpolation of f at a set of points around 0. An optimal x* will be exactly this interpolation.
This result also tells us how to analyze the general T-round game. We can express (5), the conditional value of the game V_{t−1}, in recursive form as

    V_{t−1}(g_1, . . . , g_{t−1}) = inf_{x_t∈ℝⁿ} sup_{g_t∈G} g_t · x_t + V_t(g_1, . . . , g_{t−1}, g_t).   (7)

Hence, by setting f(g_t) = V_t(g_1, . . . , g_{t−1}, g_t), and noting that the latter is convex in g_t by Theorem 1, we see we have an immediate use of Theorem 3.
3 Minimax Optimal Algorithms for Coordinate-Decomposable Games
In this section, we consider games where G consists of axis-aligned constraints, and L decomposes so L(g) = Σ_{i=1}^n L_i(g_i). In order to solve such games, it is generally sufficient to consider n independent one-dimensional problems. We study such games first:
Theorem 4. Consider the one-dimensional unconstrained game where the player selects x_t ∈ ℝ and the adversary chooses g_t ∈ G = [−1, 1], and L is concave in each of its arguments and bounded on G^T. Then, V^T = E_{g_t∼{−1,1}}[ −L(g_1, . . . , g_T) ], where the expectation is over each g_t chosen independently and uniformly from {−1, 1} (that is, the g_t are Rademacher random variables). Further, the conditional value of the game is

    V_t(g_1, . . . , g_t) = E_{g_{t+1},...,g_T∼{−1,1}}[ −L(g_1, . . . , g_t, g_{t+1}, . . . , g_T) ].   (8)
The proof is immediate from Corollary 2, since the only possible martingale that both plays from the corners of G and has expectation 0 on each round is the sequence of independent Rademacher random variables.¹ Given Theorem 4, and the fact that the functions L of interest will generally depend only on g_{1:T}, it will be useful to define B_T to be the distribution of g_{1:T} when each g_t is drawn independently and uniformly from {−1, 1}.
Theorem 4 can immediately be extended to coordinate-decomposable games as follows:
Corollary 5. Consider the game where the player chooses x_t ∈ ℝⁿ, the adversary chooses g_t ∈ [−1, 1]ⁿ, and the payoff is Σ_{t=1}^T g_t · x_t − Σ_{i=1}^n L(g_{1:T,i}) for concave L. Then the value V^T and the conditional value V_t(·) can be written as

    V^T = n E_{G∼B_T}[ −L(G) ]   and   V_t(g_1, . . . , g_t) = Σ_{i=1}^n E_{G_i∼B_{T−t}}[ −L(g_{1:t,i} + G_i) ].

The proof follows by noting that the constraints on both players' strategies and the value of the game fully decompose on a per-coordinate basis.
A recipe for minimax optimal algorithms in one dimension Since Eq. (5) gives the minimax value of the game if both players play optimally from round t + 1 forward, a minimax strategy for the learner on round t + 1 must be x_{t+1} = argmin_{x∈ℝ} max_{g∈{−1,1}} g · x + V_{t+1}(g_1, . . . , g_t, g). Now, we can apply Theorem 3, and note that the unique strategy for the adversary is to play g = −1 or g = 1 with equal probability. Thus, the player strategy is just the interpolation of the points (−1, f(−1)) and (1, f(1)), where we take f = V_{t+1}, giving us

    x_{t+1} = (1/2) ( V_{t+1}(g_1, . . . , g_t, −1) − V_{t+1}(g_1, . . . , g_t, +1) ).   (9)

Thus, if we can derive a closed form for V_t(g_1, . . . , g_t), we will have an efficient minimax-optimal algorithm. Note that for any function L,

    E_{G∼B_T}[L(G)] = 2^{−T} Σ_{i=0}^T (T choose i) L(2i − T),   (10)

since 2^{−T} (T choose i) is the binomial probability of getting exactly i gradients of +1 over T rounds, which implies T − i gradients of −1, so G = i − (T − i) = 2i − T. Using Theorem 4, and Eqs. (9) and (10), in the following sections we exactly compute the game values and unique minimax optimal strategies for a variety of interesting coordinate-decomposable games. Even when such exact computations are not possible, any coordinate-decomposable game where L depends only on G = g_{1:T} can be solved numerically in polynomial time. If τ = T − t, the number of rounds remaining, then we can compute V_t exactly by using the appropriate binomial probabilities (following Eq. (8) and Eq. (10)), requiring only a sum over O(τ) values. If τ is large enough, then using an approximation to the binomial (e.g., the Gaussian approximation) may be sufficient.

¹However, it is easy to extend this to the case where G = [a, b], which leads to different random variables.
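This recipe can be run directly. The sketch below (our own code, not from the paper; the quadratic benchmark is one example choice) computes the conditional value by the binomial sum of Eq. (10) and the resulting minimax play of Eq. (9):

```python
import math

def cond_value(g_sum, rounds_left, L):
    """Conditional value V_t = E[-L(g_sum + G)], where G is the sum of
    `rounds_left` independent Rademacher variables, computed exactly
    via the binomial sum of Eq. (10)."""
    tau = rounds_left
    total = 0.0
    for i in range(tau + 1):
        prob = math.comb(tau, i) / 2**tau   # P(i heads out of tau)
        total += prob * -L(g_sum + 2 * i - tau)
    return total

def minimax_play(g_sum, rounds_left, L):
    """Minimax strategy of Eq. (9): interpolate the two next-round values."""
    return 0.5 * (cond_value(g_sum - 1, rounds_left - 1, L)
                  - cond_value(g_sum + 1, rounds_left - 1, L))
```

For the quadratic benchmark L(G) = −G²/(2λ), this numerically reproduces the closed forms derived in Section 3.1: V^T = T/(2λ) and x_{t+1} = −g_{1:t}/λ.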
We can also immediately provide a characterization of the potentially optimal player strategies in terms of the subgradients of L. For simplicity, we write ∂L(g) instead of ∂(−L(g)).

Theorem 6. Let G = [a, b] with a < 0 < b, and suppose L : ℝ → ℝ is bounded and concave. Then, on every round, the unique minimax optimal x*_t satisfies x*_t ∈ L̄, where L̄ = ∪_{w∈ℝ} ∂L(w).

Proof. Following Theorem 3, we know the minimax x_{t+1} interpolates (a, f(a)) and (b, f(b)), where we take f(g) = V_{t+1}(g_1, . . . , g_t, g). In one dimension, this implies x_{t+1} ∈ ∂f(g) for some g ∈ G. It remains to show ∂f(g) ⊆ L̄. From Theorem 1 we have f(g) = E[ −L(g_{1:t} + g + B) ], where the expectation is with respect to a mean-zero random variable B ∼ B_τ, τ = T − t. For each possible value b_i that B can take on, ∂_g L(g_{1:t} + g + b_i) ⊆ L̄ by definition, so ∂f(g) is a convex combination of these sets (e.g., Rockafellar [1997, Thm. 23.8]). The result follows as L̄ is convex.
3.1 Constant step-size gradient descent can be minimax optimal
Suppose we use a "soft" feasible set for the benchmark via a quadratic penalty,

    L(G) = min_x Gx + (λ/2) x² = −(1/(2λ)) G²,   (11)

for a constant λ > 0. Does a no-regret algorithm against this comparison class exist? Unfortunately, the general answer is no, as shown in the next theorem. Recalling g_t ∈ [−1, 1],

Theorem 7. The value of this game is V^T = E_{G∼B_T}[ (1/(2λ)) G² ] = T/(2λ).
Thus, for a fixed λ, we cannot have a no-regret algorithm with respect to this L. But this does not mean the minimax algorithm will be uninteresting. To derive the minimax optimal algorithm, we compute the conditional values (using techniques similar to Theorem 7),

    V_t(g_1, . . . , g_t) = E_{G∼B_{T−t}}[ (1/(2λ)) (g_{1:t} + G)² ] = (1/(2λ)) ( (g_{1:t})² + (T − t) ),

and so following Eq. (9) the minimax-optimal algorithm must use
    x_{t+1} = (1/(4λ)) [ ( (g_{1:t} − 1)² + (T − t − 1) ) − ( (g_{1:t} + 1)² + (T − t − 1) ) ] = (1/(4λ)) ( −4 g_{1:t} ) = −(1/λ) g_{1:t}.

Thus, a minimax-optimal algorithm is simply constant-learning-rate gradient descent with learning rate 1/λ. Note that for a fixed λ, this is the optimal algorithm independent of T; this is atypical, as usually the minimax optimal algorithm depends on the horizon (as we will see in the next two cases). Note that the set L̄ = ℝ (from Theorem 6), and indeed the player could eventually play an arbitrary point in ℝ (given large enough T).
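A quick check of this claim (our own illustration): playing x_{t+1} = −g_{1:t}/λ against any ±1 sequence yields regret, measured against the benchmark L(G) = −G²/(2λ) of Eq. (11), of exactly T/(2λ):

```python
def regret_of_gd(gs, lam):
    """Play constant-step gradient descent x_{t+1} = -g_{1:t}/lam against
    the sequence gs, and return sum_t g_t * x_t - L(g_{1:T}) for the
    quadratic benchmark L(G) = -G^2 / (2*lam)."""
    g_sum, loss = 0.0, 0.0
    for g in gs:
        x = -g_sum / lam        # current play, based on past gradients only
        loss += g * x
        g_sum += g
    return loss + g_sum**2 / (2 * lam)   # loss - L(G)
```

The identity holds pointwise, not just in expectation: a short calculation gives Σ_t g_t x_t = −(G² − T)/(2λ), so the regret is T/(2λ) for every ±1 sequence.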
3.2 Non-stochastic betting with exponential upside and bounded worst-case loss
A major advantage of the regret minimization framework is that the guarantees we can achieve are
typically robust to arbitrary input sequences. But on the downside the model is very pessimistic: we
measure performance in the worst case. One might aim to perform not too badly in the worst case
yet extremely well under certain conditions.
6
We now show how the results in the present paper can lead to a very optimistic guarantee, particularly in the case of a sequential betting game. On each round t, the world offers the player a betting opportunity on a coin toss, i.e., a binary outcome g_t ∈ {−1, 1}. The player may take either side of the bet, and selects a wager amount x_t, where x_t > 0 implies a bet on tails (g_t = −1) and x_t < 0 a bet on heads (g_t = 1). The world then announces whether the bet was won or lost, revealing g_t. The player's wealth changes (additively) by −g_t x_t (that is, the player strives to minimize the loss g_t x_t). We assume that the player begins with some initial capital ε > 0, and at any time period the wager |x_t| must not exceed ε − Σ_{s=1}^{t−1} g_s x_s, the initial capital plus the money earned thus far.
With the benefit of hindsight, the gambler can see G = Σ_{t=1}^T g_t, the total number of heads minus the total number of tails. Let us imagine that the number of heads significantly exceeded the number of tails, or vice versa; that is, |G| was much larger than 0. Without loss of generality let us assume that G is positive. Let us imagine that the gambler, with the benefit of hindsight, considers what could have happened had he always bet a constant fraction β of his wealth on heads. A simple exercise shows that his wealth would become

    ε ∏_{t=1}^{T} (1 + β g_t) = ε (1 + β)^{(T+G)/2} (1 − β)^{(T−G)/2}.

This is optimized at β = G/T, which gives a simple expression in terms of KL-divergence for the maximum wealth in hindsight, ε exp( T · KL( (1 + G/T)/2 ∥ 1/2 ) ), and the former is well-approximated by ε exp(O(G²/T)) when G is not too large relative to T. In other words, with knowledge of the final G, a naïve betting strategy could have earned the gambler exponentially large winnings starting with constant capital. Note that this is essentially a Kelly betting scheme [Kelly Jr, 1956], expressed in terms of G. We ask: does there exist an adaptive betting strategy that can compete with this hindsight benchmark, even if the g_t are chosen fully adversarially?
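The KL-divergence expression above is an exact rewriting of the optimized wealth, which is easy to confirm numerically (a quick sketch; the function names are ours, not the paper's):

```python
import math

def hindsight_wealth(eps, T, G):
    """Wealth of the constant-fraction bettor at the optimal beta = G/T."""
    beta = G / T
    return eps * (1 + beta) ** ((T + G) / 2.0) * (1 - beta) ** ((T - G) / 2.0)

def kl_wealth(eps, T, G):
    """The same wealth written as eps * exp(T * KL((1 + G/T)/2 || 1/2))."""
    p = (1 + G / T) / 2.0
    kl = p * math.log(2 * p) + (1 - p) * math.log(2 * (1 - p))
    return eps * math.exp(T * kl)

for T, G in [(50, 10), (100, 20), (1000, 60)]:
    w_prod, w_kl = hindsight_wealth(1.0, T, G), kl_wealth(1.0, T, G)
    assert abs(w_prod - w_kl) < 1e-9 * w_kl
```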
Indeed we show we can get reasonably close. Our aim will be to compete with a slightly weaker benchmark, L(G) = −exp(|G|/√T). We present a solution for the one-sided game, without the absolute value, so the player only aims for exponential wealth growth for large positive G. It is not hard to develop a two-sided algorithm as a result, which we soon discuss.

Theorem 8. Consider the game where G = [−1, 1] with benchmark L(G) = −exp(G/√T). Then

    V^T = ( cosh(1/√T) )^T ≤ √e,

with the bound tight as T → ∞. Let τ = T − t and G_t = g_{1:t}; then the conditional value of the game is V_t(G_t) = ( cosh(1/√T) )^τ exp( G_t/√T ), and the player's minimax optimal strategy is:

    x_{t+1} = −exp( G_t/√T ) sinh( 1/√T ) ( cosh(1/√T) )^{τ−1}.    (12)
Recall that the value of the game can be thought of as the largest possible difference between the payoff of the benchmark function exp(G/√T) and the winnings of the player, −Σ_t g_t x_t, when the player uses an optimal betting strategy. That the value of the game here is of constant order is critical, since it says that we can always achieve a payoff that is exponential in G/√T at a cost of no more than √e = O(1). Notice we have said nothing thus far regarding the nature of our betting strategy; in particular we have not proved that the strategy satisfies the required condition that the gambler cannot bet more than ε plus the earnings thus far. We now give a general result showing that this condition can be satisfied:
Theorem 9. Consider a one-dimensional game with G = [−1, 1] with benchmark function L non-positive on G^T. Then for the optimal betting strategy we have that |x_t| ≤ −Σ_{s=1}^{t−1} g_s x_s + V^T, and further V^T ≥ −Σ_{s=1}^{t} g_s x_s for any t and any sequence g_1, . . . , g_t.
In other words, the player's cumulative loss at any time is always bounded from below by −V^T. This implies that the starting capital ε required to "replicate" the payoff function is exactly the value² of the game V^T. Indeed, to replicate exp(G/√T) we would require no more than ε = $1.65.
2
This idea has a long history in finance and was a key tool in Abernethy et al. [2012], DeMarzo et al. [2006],
and other works.
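Both statements can be checked exhaustively for a small horizon. In this sketch (our own code, not from the paper), the strategy of Eq. (12) attains regret exactly (cosh(1/√T))^T against the benchmark −exp(G/√T) on every ±1 sequence, and its wagers never exceed V^T plus the earnings so far, as Theorem 9 requires:

```python
import itertools
import math

def minimax_bets(gs, T):
    """Plays of Eq. (12): x_{t+1} = -exp(G_t/sqrt(T)) sinh(1/sqrt(T)) cosh(1/sqrt(T))^(T-t-1)."""
    r = 1.0 / math.sqrt(T)
    G, xs = 0.0, []
    for t in range(T):
        xs.append(-math.exp(G * r) * math.sinh(r) * math.cosh(r) ** (T - t - 1))
        G += gs[t]
    return xs

T = 8
V = math.cosh(1.0 / math.sqrt(T)) ** T       # value of the game
assert V <= math.sqrt(math.e)                # V^T <= sqrt(e)
for gs in itertools.product([-1.0, 1.0], repeat=T):
    xs = minimax_bets(gs, T)
    cum = 0.0                                # cumulative loss sum_s g_s x_s
    for g, x in zip(gs, xs):
        assert abs(x) <= V - cum + 1e-9      # wager <= capital V plus earnings
        cum += g * x
    G = sum(gs)
    # regret = sum_t g_t x_t + exp(G/sqrt(T)) equals V^T on every sequence
    assert abs(cum + math.exp(G / math.sqrt(T)) - V) < 1e-9
```

The exact equality is what backward induction predicts: the minimax play equalizes the two continuations in every round, so the conditional values telescope.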
It is worth noting an alternative characterization of the benchmark function L used here. For a > 0,

    min_{x∈R} ( Gx − ax log(−ax) + ax ) = −exp(G/a).

Thus, if we take ψ(x) = −ax log(−ax) + ax + I(x ≤ 0), we have min_{x∈R} ( g_{1:T} x + ψ(x) ) = −exp(G/a). Since the algorithm needs large reward when G is large and positive, we might expect that the minimax optimal algorithm only plays x_t ≤ 0. Another intuition for this is that the algorithm should not need to play any point x to which ψ assigns an infinite penalty. This intuition can be confirmed immediately via Theorem 6.

We now sketch how to derive an algorithm for the "two-sided" game. To do this, we let L_C(G) ≜ L(G) + L(−G) ≤ −exp(|G|/√T). We can construct a minimax optimal algorithm for L_C(G) by running two copies of the one-sided minimax algorithm simultaneously, switching the signs of the gradients and plays of the second copy. We formalize this in Appendix B.
This same benchmark and algorithm can be used in the setting introduced by Streeter and McMahan [2012]. In that work, the goal was to prove bounds on standard regret like Regret ≤ O(R √(T log((1 + R)T))) simultaneously for any comparator x* with |x*| = R. Stating their Theorem 1 in terms of losses, this traditional regret bound is achieved by any algorithm that guarantees

    Loss = Σ_{t=1}^T g_t x_t ≤ −exp( |G|/√T ) + O(1).    (13)
The symmetric algorithm (Appendix B) satisfies

    Loss ≤ −exp( G/√T ) − exp( −G/√T ) + 2√e ≤ −exp( |G|/√T ) + 2√e,

and so we also achieve a standard regret bound of the form given above.

3.3
Optimal regret against hypercube adversaries
Perhaps the simplest and best studied learning games are those that restrict both the player and
adversary to a norm ball, and use the standard notion of regret. We can derive results for the game
where the adversary has an L∞ constraint, the comparator set is also the L∞ ball, and the player is
unconstrained. Corollary 5 implies it is sufficient to study the one-dimensional case.
Theorem 10. Consider the game between an adversary who chooses losses g_t ∈ [−1, 1], and a player who chooses x_t ∈ R. For a given sequence of plays, x_1, g_1, x_2, g_2, . . . , x_T, g_T, the value to the adversary is Σ_{t=1}^T g_t x_t + |g_{1:T}|. Then, when T is even with T = 2M, the minimax value of this game is given by

    V_T = 2^{−T} · (2M · T!) / ((T − M)! M!).

Further, as T → ∞, V_T → √(2T/π). Let B be a random variable drawn from B_{T−t−1}. Then the minimax optimal strategy for the player, given the adversary has played G_t = g_{1:t}, is given by

    x_{t+1} = Pr(B < −G_t) − Pr(B > −G_t) = 1 − 2 Pr(B > −G_t) ∈ [−1, 1].    (14)
(14)
The fact that the limiting value of this game is √(2T/π) was previously known, e.g., see a mention in Abernethy et al. [2009]; however, we believe this explicit form for the optimal player strategy is new. This strategy can be efficiently computed numerically, e.g., by using the regularized incomplete beta function for the CDF of the binomial distribution. It also follows from this expression that even though we allow the player to select x_{t+1} ∈ R, the minimax optimal algorithm always selects points from [−1, 1], so our result applies to the case where the player is constrained to play from X.
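Since the strategy of Eq. (14) only needs binomial tail probabilities, it can be verified exhaustively for a small even horizon. The sketch below (our own code; B is the sum of the adversary's remaining ±1 moves) checks that the regret Σ_t g_t x_t + |g_{1:T}| equals the value V_T on every sequence and that the plays stay in [−1, 1]:

```python
import itertools
import math

def rademacher_cdf_play(Gt, n):
    """x = Pr(B < -Gt) - Pr(B > -Gt) for B a sum of n independent +-1 coins."""
    lt = sum(math.comb(n, k) for k in range(n + 1) if 2 * k - n < -Gt)
    gt_ = sum(math.comb(n, k) for k in range(n + 1) if 2 * k - n > -Gt)
    return (lt - gt_) / 2.0 ** n

T = 6
# V_T = E|B_T|, which for T = 2M equals 2^{-T} * 2M * T!/((T-M)! M!)
V = sum(math.comb(T, k) * abs(2 * k - T) for k in range(T + 1)) / 2.0 ** T
for gs in itertools.product([-1, 1], repeat=T):
    G, loss = 0, 0.0
    for t in range(T):
        x = rademacher_cdf_play(G, T - t - 1)   # n = remaining adversary moves
        assert -1.0 <= x <= 1.0
        loss += gs[t] * x
        G += gs[t]
    assert abs(loss + abs(G) - V) < 1e-9        # regret equals V_T on every sequence
```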
Abernethy et al. [2008a] shows that for the linear game with n ≥ 3, where both the learner and adversary select vectors from the unit sphere, the minimax value is exactly √T. Interestingly, in the n = 1 case (where L_2 and L_∞ coincide), the value of the game is lower, about 0.8√T rather than √T. This indicates a fundamental difference in the geometry of the n = 1 space and n ≥ 3. We conjecture the minimax value for the L_2 game with n = 2 lies somewhere in between.
References

Jacob Abernethy and Manfred K. Warmuth. Repeated games against budgeted adversaries. In NIPS, 2010.

Jacob Abernethy, Peter L. Bartlett, Alexander Rakhlin, and Ambuj Tewari. Optimal strategies and minimax lower bounds for online convex games. In COLT, 2008a.

Jacob Abernethy, Manfred K. Warmuth, and Joel Yellin. Optimal strategies from random walks. In Proceedings of the 21st Annual Conference on Learning Theory, pages 437-446, 2008b.

Jacob Abernethy, Alekh Agarwal, Peter Bartlett, and Alexander Rakhlin. A stochastic view of optimal regret through minimax duality. In COLT, 2009.

Jacob Abernethy, Rafael M. Frongillo, and Andre Wibisono. Minimax option pricing meets Black-Scholes in the limit. In STOC, 2012.

Amit Agarwal, Elad Hazan, Satyen Kale, and Robert E. Schapire. Algorithms for portfolio management based on the Newton method. In ICML, 2006.

Nicolò Cesa-Bianchi and Gábor Lugosi. Prediction, Learning, and Games. Cambridge University Press, 2006.

A. de Moivre. The Doctrine of Chances: or, A Method of Calculating the Probabilities of Events in Play. 1718.

Ofer Dekel, Ambuj Tewari, and Raman Arora. Online bandit learning against an adaptive adversary: from regret to policy regret. In ICML, 2012.

Peter DeMarzo, Ilan Kremer, and Yishay Mansour. Online trading algorithms and robust option pricing. In Proceedings of the Thirty-Eighth Annual ACM Symposium on Theory of Computing, pages 477-486. ACM, 2006.

Persi Diaconis and Sandy Zabell. Closed form summation for classical distributions: Variations on a theme of de Moivre. Statistical Science, 6(3), 1991.

Elad Hazan and Satyen Kale. On stochastic and worst-case models for investing. In NIPS, 2009.

J. L. Kelly Jr. A new interpretation of information rate. Bell System Technical Journal, 1956.

Wouter Koolen, Dmitry Adamskiy, and Manfred Warmuth. Putting Bayes to sleep. In NIPS, 2012.

N. Merhav, E. Ordentlich, G. Seroussi, and M. J. Weinberger. On sequential strategies for loss functions with memory. IEEE Transactions on Information Theory, 48(7), September 2006.

Alexander Rakhlin, Ohad Shamir, and Karthik Sridharan. Relax and randomize: From value to algorithms. In NIPS, 2012.

Ralph T. Rockafellar. Convex Analysis (Princeton Landmarks in Mathematics and Physics). Princeton University Press, 1997.

Shai Shalev-Shwartz. Online learning and online convex optimization. Foundations and Trends in Machine Learning, 2012.

Shai Shalev-Shwartz, Yoram Singer, Nathan Srebro, and Andrew Cotter. Pegasos: Primal estimated sub-gradient solver for SVM. Mathematical Programming, 127(1):3-30, 2011.

Gilles Stoltz. Contributions to the sequential prediction of arbitrary sequences: applications to the theory of repeated games and empirical studies of the performance of the aggregation of experts. Habilitation à diriger des recherches, Université Paris-Sud, 2011.

Matthew Streeter and H. Brendan McMahan. No-regret algorithms for unconstrained online convex optimization. In NIPS, 2012.

Volodya Vovk. Competitive on-line statistics. International Statistical Review, 69, 2001.

Martin Zinkevich. Online convex programming and generalized infinitesimal gradient ascent. In ICML, 2003.
Navid Zolghadr
Department of Computing Science
University of Alberta
[email protected]
Gábor Bartók
Department of Computer Science
ETH Zürich
[email protected]
Russell Greiner
András György
Csaba Szepesvári
Department of Computing Science, University of Alberta
{rgreiner,gyorgy,szepesva}@ualberta.ca
Abstract
This paper introduces the online probing problem: In each round, the learner is
able to purchase the values of a subset of the features. After the learner uses
this information to come up with a prediction for the given round, he then has the
option of paying to see the loss function that he is evaluated against. Either way,
the learner pays for both the errors of his predictions and also whatever he chooses
to observe, including the cost of observing the loss function for the given round
and the cost of the observed features. We consider two variations of this problem,
depending on whether the learner can observe the label for free or not. We provide
algorithms and upper and lower bounds on the regret for both variants. We show
that a positive cost for observing the label significantly increases the regret of the
problem.
1
Introduction
In this paper, we study a variant of online learning, called online probing, which is motivated by
practical problems where there is a cost to observing the features that may help one's predictions.
Online probing is a class of online learning problems. Just like in standard online learning problems, the learner's goal is to produce a good predictor. In each time step t, the learner produces his prediction based on the values of some feature vector x_t = (x_{t,1}, . . . , x_{t,d})^⊤ ∈ X ⊆ R^d.¹ However, unlike in the standard online learning settings, if the learner wants to use the value of feature i to produce a prediction, he has to purchase the value at some fixed, a priori known cost c_i ≥ 0. Features whose value is not purchased in a given round remain unobserved by the learner. Once a prediction ŷ_t ∈ Y is produced, it is evaluated against a loss function ℓ_t : Y → R. At the end of a round, the learner has the option of purchasing the full loss function, again at a fixed prespecified cost c_{d+1} ≥ 0 (by default, the loss function is not revealed to the learner). The learner's performance is measured by his regret as he competes against some prespecified set of predictors. Just like the learner, a competing predictor also needs to purchase the feature values needed in the prediction. If s_t ∈ {0, 1}^{d+1} is the indicator vector denoting what the learner purchased in round t (s_{t,i} = 1 if the learner purchased x_{t,i} for 1 ≤ i ≤ d, and purchased the label for i = d + 1) and c ∈ [0, ∞)^{d+1} denotes the respective costs, then the regret with respect to a class of prediction functions F ⊆ {f | f : X → Y} is defined by
    R_T = Σ_{t=1}^T { ℓ_t(ŷ_t) + ⟨ s_t, c ⟩ } − inf_{f∈F} { T · ⟨ s(f), c_{1:d} ⟩ + Σ_{t=1}^T ℓ_t(f(x_t)) },
where c_{1:d} ∈ R^d is the vector obtained from c by dropping its last component and, for a given function f : R^d → Y, s(f) ∈ {0, 1}^d is an indicator vector whose ith component indicates whether f is sensitive to its ith input (in particular, s_i(f) = 0 by definition when f(x_1, . . . , x_i, . . . , x_d) = f(x_1, . . . , x′_i, . . . , x_d) holds for all (x_1, . . . , x_i, . . . , x_d), (x_1, . . . , x′_i, . . . , x_d) ∈ X; otherwise s_i(f) = 1).

¹ We use ⊤ to denote the transpose of vectors. Throughout, all vectors x ∈ R^d will denote column vectors.

Note that when defining the best competitor in hindsight, we did not include the cost of
observing the loss function. This is because (i) the reference predictors do not need it; and (ii) if we
did include the cost of observing the loss function for the reference predictors, then the loss of each
predictor would just be increased by c_{d+1}·T, and so the regret R_T would just be reduced by c_{d+1}·T,
making it substantially easier for the learner to achieve sublinear regret. Thus, we prefer the current
regret definition as it promotes the study of regret when there is a price attached to observing the
loss functions.
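To make the bookkeeping concrete, the sketch below (our own illustrative code; none of the names come from the paper) computes R_T from a log of plays. The learner pays for its observed features and any label purchases, while each competitor pays T⟨s(f), c_{1:d}⟩ for its features and never the label cost:

```python
def online_probing_regret(learner_losses, learner_masks, c, competitors):
    """Regret for online probing.  learner_losses[t] = prediction loss in
    round t; learner_masks[t] = 0/1 vector s_t over the d features plus the
    label (length d+1); c = costs (length d+1).  Each competitor is a pair
    (mask, losses): a 0/1 vector over the d features and its per-round
    losses.  Competitors never pay the label cost c[-1]."""
    T = len(learner_losses)
    learner_total = sum(learner_losses) + sum(
        sum(si * ci for si, ci in zip(s, c)) for s in learner_masks)
    best = min(
        T * sum(si * ci for si, ci in zip(mask, c[:-1])) + sum(f_losses)
        for mask, f_losses in competitors)
    return learner_total - best

# tiny example: d = 2 features with costs 0.1 and 0.2, label cost 0.05
c = [0.1, 0.2, 0.05]
learner_losses = [1.0, 0.5]
learner_masks = [[1, 0, 1], [1, 0, 1]]      # buys feature 1 and the label
competitors = [([1, 0], [0.4, 0.4]),        # predictor using feature 1 only
               ([0, 0], [1.0, 1.0])]        # predictor using no features
r = online_probing_regret(learner_losses, learner_masks, c, competitors)
assert abs(r - 0.8) < 1e-9                  # 1.8 paid vs. best competitor 1.0
```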
To motivate our framework, consider the problem of developing a computer-assisted diagnostic tool
to determine what treatment to apply to a patient in a subpopulation of patients. When a patient
arrives, the computer can order a number of tests that cost money, while other information (e.g., the
medical record of the patient) is available for free. Based on the available information, the system
chooses a treatment. Following-up the patient may or may not incur additional cost. In this example,
there is typically a delay in obtaining the information whether the treatment was effective. However,
for simplicity, in this work we have decided not to study the effect of this delay. Several works in
the literature show that delays usually increase the regret in a moderate fashion (Mesterharm, 2005;
Weinberger and Ordentlich, 2006; Agarwal and Duchi, 2011; Joulani et al., 2013).
As another example, consider the problem of product testing in a manufacturing process (e.g., the
production of electronic consumer devices). When the product arrives, it can be subjected to a
large number of diagnostic tests that differ in terms of their costs and effectiveness. The goal is to
predict whether the product is defect-free. Obtaining the ground truth can also be quite expensive,
especially for complex products. The challenge is that the effectiveness of the various tests is often
a priori unknown and that different tests may provide complementary information (meaning that
many tests may be required). Hence, it might be challenging to decide what form the most cost-effective diagnostic procedure may take. Yet another example is the problem of developing a cost-effective way of instrument calibration. In this problem, the goal is to predict one or more real-valued
parameters of some product. Again, various tests with different costs and reliability can be used as
the input to the predictor.
Finally, although we pose the task as an online learning problem, it is easy to show that the procedures we develop can also be used to attack the batch learning problem, when the goal is to learn a
predictor that will be cost-efficient on future data given a database of examples.
Obviously, when observing the loss is costly, the problem is related to active learning. However, to
our best knowledge, the case when observing the features is costly has not been studied before in
the online learning literature. Section 1.1 will discusses the relationship of our work to the existing
literature in more detail.
This paper analyzes two versions of the online problem. In the first version, free-label online probing, there is no cost to seeing the loss function, that is, c_{d+1} = 0. (The loss function often compares
the predicted value with some label in a known way, in which case learning the value of the label
for the round means that the whole loss function becomes known; hence the choice of the name.)
Thus, the learner naturally will choose to see the loss function after he provides his prediction; this
provides feedback that the learner can use, to improve the predictor he produces. In the second
version, non-free-label online probing, the cost of seeing the loss function is positive: c_{d+1} > 0.
In Section 2 we study the case of free-label online probing. We give an algorithm that enjoys a regret of O(√(2^d L T ln N_T(1/(T L)))) when the losses are L-equi-Lipschitz (Theorem 2.2), where N_T(ε) is the ε-covering number of F on sequences of length T. This leads to an Õ(√(2^d L T)) regret bound for typical function classes, such as the class of linear predictors with bounded weights and bounded inputs. We also show that, in the worst case, the exponential dependence on the dimension cannot be avoided in the bound. For the special case of linear prediction with quadratic loss, we give an algorithm whose regret scales only as Õ(√(dT)), a vast improvement in the dependence on d.
The case of non-free-label online probing is treated in Section 3. Here, in contrast to the free-label case, we prove that the minimax growth rate of the regret is of the order Θ̃(T^{2/3}). The increase of the regret rate stems from the fact that the "best competitor in hindsight" does not have to pay for the label. In contrast to the previous case, since the label is costly here, if the algorithm decides to see the label it does not even have to reason about which features to observe, as querying the label requires paying a cost that is a constant over the cost of the best predictor in hindsight, already resulting in the Θ̃(T^{2/3}) regret rate. However, in practice (for shorter horizons) it still makes sense to select the
features that provide the best balance between the feature-cost and the prediction loss. Although we
do not study this, we note that by combining the algorithmic ideas developed for the free-label case
with the ideas developed for the non-free-label case, it is possible to derive an algorithm that reasons
actively about the cost of observing the features, too.
In the part dealing with the free-label problem, we build heavily on the results of Mannor and
Shamir (2011), while in the part dealing with the non-free-label problem we build on the ideas of
Cesa-Bianchi et al. (2006). Due to space limitations, all of our proofs are relegated to the appendix.
1.1
Related Work
This paper analyzes online learning when features (and perhaps labels) have to be purchased. The
standard ?batch learning? framework has a pure explore phase, which gives the learner a set of
labeled, completely specified examples, followed by a pure exploit phase, where the learned predictor is asked to predict the label for novel instances. Notice the learner is not required (nor even
allowed) to decide which information to gather. By contrast, "active (batch) learning" requires
the learner to identify that information (Settles, 2009). Most such active learners begin with completely specified, but unlabeled instances; they then purchase labels for a subset of the instances.
Our model, however, requires the learner to purchase feature values as well. This is similar to the
"active feature-purchasing learning" framework (Lizotte et al., 2003). This is extended in Kapoor
and Greiner (2005) to a version that requires the eventual predictor (as well as the learner) to pay
to see feature values as well. However, these are still in the batch framework: after gathering the
information, the learner produces a predictor, which is not changed afterwards.
Our problem is an online problem over multiple rounds, where at each round the learner is required
to predict the label for the current example. Standard online learning algorithms typically assume
that each example is given with all the features. For example, Cesa-Bianchi et al. (2005) provided
upper and lower bounds on the regret where the learner is given all the features for each example,
but must pay for any labels he requests. In our problem, the learner must pay to see the values of
the features of each example as well as the cost to obtain its true label at each round. This cost
model means there is an advantage to finding a predictor that involves few features, as long as it
is sufficiently accurate. The challenge, of course, is finding these relevant features, which happens
during this online learning process.
Other works, in particular Rostamizadeh et al. (2011) and Dekel et al. (2010), assume the features
of different examples might be corrupted, missed, or partially observed due to various problems,
such as failure in sensors gathering these features. Having such missing features is realistic in many
applications. Rostamizadeh
et al. (2011) provided an algorithm for this task in the online settings,
p
with optimal O( T ) regret where T is the number of rounds. Our model differs from this model as
in our case the learner has the option to obtain the values of only the subset of the features that he
selects.
2
Free-Label Probing
In this section we consider the case when the cost of observing the loss function is zero. Thus,
we can assume without loss of generality that the learner receives the loss function at the end of
each round (i.e., st,d+1 = 1). We will first consider the general setting where the only restriction is
that the losses are equi-Lipschitz and the function set F has a finite empirical worst-case covering
number. Then we consider the special case where the set of competitors are the linear predictors and
the losses are quadratic.
2.1
The Case of Lipschitz losses
In this section we assume that the loss functions ℓ_t are Lipschitz with a known, common Lipschitz constant L over Y w.r.t. some semi-metric d_Y of Y: for all t ≥ 1,

    sup_{y,y′∈Y} |ℓ_t(y) − ℓ_t(y′)| ≤ L · d_Y(y, y′).    (1)
Clearly, the problem is an instance of prediction with expert advice under partial information feedback (Auer et al., 2002), where each expert corresponds to an element of $F$. Note that if the learner chooses to observe the values of some features, then he will also be able to evaluate the losses of all the predictors $f \in F$ that use only these selected features. This can be formalized as follows: by a slight abuse of notation, let $s_t \in \{0,1\}^d$ be the indicator showing the features selected by the learner at time $t$ (here we drop the last element of $s_t$, as $s_{t,d+1}$ is always 1); similarly, we will drop the last coordinate of the cost vector $c$ throughout this section. Then, the learner can compute the loss of any predictor $f \in F$ such that $s(f) \le s_t$, where $\le$ denotes component-wise comparison. However, for some loss functions, it may be possible to estimate the losses of other predictors, too. We will exploit this when we study some interesting special cases of the general problem. However, in general, it is not possible to infer the losses of functions for which $s_{t,i} < s(f)_i$ for some $i$ (cf. Theorem 2.3).
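The component-wise comparison determines exactly which losses the learner can evaluate from its selected features. A small sketch of this check (our illustration; the predictor names and feature sets are hypothetical):

```python
# Which predictors' losses are computable at round t: a predictor f with
# feature indicator s(f) is evaluable iff s(f) <= s_t component-wise,
# i.e., f uses only features the learner actually observed.

def observable(s_f, s_t):
    """True iff every feature used by f was selected at time t."""
    return all(selected or not used for used, selected in zip(s_f, s_t))

# Three hypothetical predictors over d = 3 features.
predictors = {"f1": (1, 0, 0), "f2": (1, 1, 0), "f3": (0, 0, 1)}
s_t = (1, 1, 0)  # the learner observed features 1 and 2 this round

evaluable = [name for name, s in predictors.items() if observable(s, s_t)]
# f1 and f2 use only observed features; f3 needs the unobserved feature 3.
```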
The idea is to study first the case when F is finite and then reduce the general case to the finite case
by considering appropriate finite coverings of the space F. The regret will then depend on how the
covering numbers of the space F behave.
Mannor and Shamir (2011) studied problems similar to this in a general framework, where in addition to the loss of the selected predictor (expert), the losses of some other predictors are also communicated to the learner in every round. The connection between the predictors is represented by a directed graph whose nodes are labeled as elements of $F$ (i.e., as the experts), and there is an edge from $f \in F$ to $g \in F$ if, when choosing $f$, the loss of $g$ is also revealed to the learner. It is assumed that the graph of any round $t$, $G_t = (F, E_t)$, becomes known to the learner at the beginning of the round. Further, it is also assumed that $(f, f) \in E_t$ for every $t \ge 1$ and $f \in F$. Mannor and Shamir (2011) gave an algorithm, called ELP (exponential weights with linear programming), to solve this problem; it calls the Exponential Weights algorithm, but modifies it to explore less, exploiting the information structure of the problem. The exploration distribution is found by solving a linear program, explaining the name of the algorithm. The regret of ELP is analyzed in the following theorem.
Theorem 2.1 (Mannor and Shamir 2011). Consider a prediction with expert advice problem over $F$ where in round $t$, $G_t = (F, E_t)$ is the directed graph that encodes which losses become available to the learner. Assume that for any $t \ge 1$, at most $\chi(G_t)$ cliques of $G_t$ can cover all vertices of $G_t$. Let $B$ be a bound on the non-negative losses $\ell_t$: $\max_{t \ge 1,\, f \in F} \ell_t(f(x_t)) \le B$. Then, there exists a constant $C_{ELP} > 0$ such that for any $T > 0$, the regret of Algorithm 2 (shown in the Appendix) when competing against the best predictor using ELP satisfies
$$\mathbb{E}[R_T] \le C_{ELP}\, B \sqrt{(\ln |F|) \sum_{t=1}^{T} \chi(G_t)}. \tag{2}$$
The algorithm's computational cost in any given round is poly($|F|$).
For a finite $F$, define $E_t \equiv E = \{(f, g) \mid s(g) \le s(f)\}$. Then clearly, $\chi(G_t) \le 2^d$. Further, $B = \|c_{1:d}\|_1 + \max_{t \ge 1,\, y \in \mathcal{Y}} \ell_t(y) = C_1 + \ell_{\max}$ (i.e., $C_1 = \|c_{1:d}\|_1$). Plugging these into (2) gives
$$\mathbb{E}[R_T] \le C_{ELP}\, (C_1 + \ell_{\max}) \sqrt{2^d\, T \ln |F|}. \tag{3}$$
To apply this algorithm in the case when $F$ is infinite, we have to approximate $F$ with a finite set $F' \subseteq \{f \mid f : \mathcal{X} \to \mathcal{Y}\}$. The worst-case maximum approximation error of $F$ using $F'$ over sequences of length $T$ can be defined as
$$A_T(F', F) = \max_{x \in \mathcal{X}^T} \sup_{f \in F} \inf_{f' \in F'} \frac{1}{T} \sum_{t=1}^{T} d_{\mathcal{Y}}(f(x_t), f'(x_t)) + \langle (s(f') - s(f))_+,\, c_{1:d} \rangle,$$
where $(s(f') - s(f))_+$ denotes the coordinate-wise positive part of $s(f') - s(f)$, that is, the indicator vector of the features used by $f'$ and not used by $f$. The average error can also be viewed as a (normalized) $d_{\mathcal{Y}}$-"distance" between the vectors $(f(x_t))_{1 \le t \le T}$ and $(f'(x_t))_{1 \le t \le T}$, penalized with the extra feature costs. For a given positive number $\alpha$, define the worst-case empirical covering number of $F$ at level $\alpha$ and horizon $T > 0$ by
$$N_T(F, \alpha) = \min\{\, |F'| \;:\; F' \subseteq \{f \mid f : \mathcal{X} \to \mathcal{Y}\},\ A_T(F', F) \le \alpha \,\}.$$
We are going to apply the ELP algorithm to $F'$ and apply (3) to obtain a regret bound. If $f'$ uses more features than $f$, then the cost-penalized distance between $f'$ and $f$ is bounded from below by the cost of observing the extra features. This means that unless the problem is very special, $F'$ has to contain, for all $s \in \{s(f) \mid f \in F\}$, some $f'$ with $s(f') = s$. Thus, if $F$ contains a function for all $s \in \{0,1\}^d$, $\chi(G_t) = 2^d$. Selecting a covering $F'$ that achieves accuracy $\alpha$, the approximation error becomes $T L \alpha$ (using equation (1)), giving the following bound:
Theorem 2.2. Assume that the losses $(\ell_t)_{t \ge 1}$ are $L$-Lipschitz (cf. (1)) and $\alpha > 0$. Then, there exists an algorithm such that for any $T > 0$, knowing $T$, the regret satisfies
$$\mathbb{E}[R_T] \le C_{ELP}\, (C_1 + \ell_{\max}) \sqrt{2^d\, T \ln N_T(F, \alpha)} + T L \alpha.$$
In particular, by choosing $\alpha = 1/(T L)$, we have
$$\mathbb{E}[R_T] \le C_{ELP}\, (C_1 + \ell_{\max}) \sqrt{2^d\, T \ln N_T(F, 1/(T L))} + 1.$$
We note in passing that the dependence of the algorithm on the time horizon $T$ can be alleviated, using, for example, the doubling trick.
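As a concrete sketch of the doubling trick (our illustration, not part of the paper's algorithms): restart a horizon-aware learner on epochs of length 1, 2, 4, and so on; if each epoch of length $T_k$ incurs regret at most $c\sqrt{T_k}$, the total over $T$ rounds is still $O(\sqrt{T})$ up to a constant factor.

```python
# Doubling trick: run a fixed-horizon learner on epochs of doubling
# length so that no a-priori knowledge of the total horizon T is needed.

import math

def doubling_run(total_rounds, run_epoch):
    """run_epoch(length) plays `length` rounds and returns its regret."""
    regret, t, k = 0.0, 0, 0
    while t < total_rounds:
        length = min(2 ** k, total_rounds - t)
        regret += run_epoch(length)
        t += length
        k += 1
    return regret

# Toy check: an epoch learner with regret exactly sqrt(length); the total
# stays within a constant factor of sqrt(1000) ~ 31.6.
total = doubling_run(1000, lambda n: math.sqrt(n))
```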
In order to turn the above bound into a concrete bound, one must investigate the behavior of the metric entropy, $\ln N_T(F, \alpha)$. In many cases, the metric entropy can be bounded independently of $T$. In fact, often $\ln N_T(F, \alpha) = D \ln(1 + c/\alpha)$ for some $c, D > 0$. When this holds, $D$ is often called the "dimension" of $F$ and we get that
$$\mathbb{E}[R_T] \le C_{ELP}\, (C_1 + \ell_{\max}) \sqrt{2^d\, T D \ln(1 + c T L)} + 1.$$
As a specific example, we will consider the case of real-valued linear functions over a ball in a Euclidean space, with weights belonging to some other ball. For a normed vector space $V$ with norm $\|\cdot\|$ and dual norm $\|\cdot\|_*$, $x \in V$, $r \ge 0$, let $B_{\|\cdot\|}(x, r) = \{v \in V \mid \|v - x\| \le r\}$ denote the ball in $V$ centered at $x$ that has radius $r$. For $\mathcal{X} \subseteq \mathbb{R}^d$, $\mathcal{W} \subseteq \mathbb{R}^d$, let
$$F \equiv \mathrm{Lin}(\mathcal{X}, \mathcal{W}) = \{g : \mathcal{X} \to \mathbb{R} \mid g(\cdot) = \langle w, \cdot \rangle,\ w \in \mathcal{W}\} \tag{4}$$
be the space of linear mappings from $\mathcal{X}$ to the reals with weights belonging to $\mathcal{W}$. We have the following lemma:
Lemma 2.1. Let $X, W > 0$, $d_{\mathcal{Y}}(y, y') = |y - y'|$, $\mathcal{X} \subseteq B_{\|\cdot\|}(0, X)$ and $\mathcal{W} \subseteq B_{\|\cdot\|_*}(0, W)$. Consider a set of real-valued linear predictors $F \equiv \mathrm{Lin}(\mathcal{X}, \mathcal{W})$. Then, for any $\alpha > 0$,
$$\ln N_T(F, \alpha) \le d \ln(1 + 2 W X / \alpha).$$
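A numeric illustration of Lemma 2.1 (our construction, for the concrete pairing of the sup norm on weights with the $\ell_1$ norm on inputs; the parameter values are hypothetical): a uniform grid with sup-norm spacing $\alpha/X$ changes every prediction on $\|x\|_1 \le X$ by at most $\alpha$, and its size stays within the lemma's bound.

```python
# Explicit grid cover of the weight box [-W, W]^d. By Hoelder's inequality
# |<w - w', x>| <= ||w - w'||_inf * ||x||_1, so centers within alpha/X in
# the sup norm give an alpha-accurate cover of the linear predictors.

import itertools, math

def grid_cover(d, W, X, alpha):
    step = 2.0 * alpha / X                    # centers within alpha/X of any w
    m = math.ceil(2.0 * W / step)             # points per coordinate
    axis = [-W + step / 2.0 + i * step for i in range(m)]
    return list(itertools.product(axis, repeat=d))

d, W, X, alpha = 2, 1.0, 1.0, 0.25
cover = grid_cover(d, W, X, alpha)
bound = (1.0 + 2.0 * W * X / alpha) ** d      # Lemma 2.1: exp(d ln(1+2WX/a))
```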
The previous lemma, together with Theorem 2.2, immediately gives the following result:
Corollary 2.1. Assume that $F \equiv \mathrm{Lin}(\mathcal{X}, \mathcal{W})$, $\mathcal{X} \subseteq B_{\|\cdot\|}(0, X)$, $\mathcal{W} \subseteq B_{\|\cdot\|_*}(0, W)$ for some $X, W > 0$. Further, assume that the losses $(\ell_t)_{t \ge 1}$ are $L$-Lipschitz. Then, there exists an algorithm such that for any $T > 0$, the regret of the algorithm satisfies
$$\mathbb{E}[R_T] \le C_{ELP}\, (C_1 + \ell_{\max}) \sqrt{d\, 2^d\, T \ln(1 + 2 T L W X)} + 1.$$
Note that if one is given an a priori bound $p$ on the maximum number of features that can be used in a single round (allowing the algorithm to use fewer than $p$, but not more, features), then $2^d$ in the above bound could be replaced by $\sum_{1 \le i \le p} \binom{d}{i} \le d^p$, where the approximation assumes that $p < d/2$. Such a bound on the number of features available per round may arise from strict budgetary considerations. When $d^p$ is small, this makes the bound non-vacuous even for small horizons $T$. In addition, in such cases the algorithm also becomes computationally feasible. It remains an interesting open question to study the computational complexity when there is no restriction on the number of features used. In the next theorem, however, we show that the worst-case exponential dependence of the regret on the number of features cannot be improved (while keeping the root-$T$ dependence on the horizon). The bound is based on the lower bound construction of Mannor and Shamir (2011), which reduces the problem to known lower bounds in the multi-armed bandit case.
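A quick numeric check of the budget remark above (the values of $d$ and $p$ are hypothetical): with at most $p$ features usable per round, the combinatorial factor $\sum_{i \le p} \binom{d}{i}$ is far below $2^d$.

```python
# Compare the restricted clique-cover factor sum_{i<=p} C(d, i) with the
# unrestricted 2^d, and with the coarser upper bound d^p (for p < d/2).

from math import comb

d, p = 20, 3
restricted = sum(comb(d, i) for i in range(p + 1))  # 1 + 20 + 190 + 1140
full = 2 ** d
```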
Theorem 2.3. There exists an instance of free-label online probing such that the minimax regret of any algorithm is
$$\Omega\left(\sqrt{\binom{d}{\lfloor d/2 \rfloor}\, T}\right).$$
2.2 Linear Prediction with Quadratic Losses
In this section, we study the problem under the assumption that the predictors have a linear form and the loss functions are quadratic. That is, $F \equiv \mathrm{Lin}(\mathcal{X}, \mathcal{W})$ where $\mathcal{W} = \{w \in \mathbb{R}^d \mid \|w\|_* \le w_{\mathrm{lim}}\}$ and $\mathcal{X} = \{x \in \mathbb{R}^d \mid \|x\| \le x_{\mathrm{lim}}\}$ for some given constants $w_{\mathrm{lim}}, x_{\mathrm{lim}} > 0$, while $\ell_t(y) = (y - y_t)^2$, where $|y_t| \le x_{\mathrm{lim}} w_{\mathrm{lim}}$. Thus, choosing a predictor is akin to selecting a weight vector $w_t \in \mathcal{W}$, as well as a binary vector $s_t \in \mathcal{G} \subseteq \{0,1\}^d$ that encodes the features to be used in round $t$. The prediction for round $t$ is then $\hat{y}_t = \langle w_t, s_t \odot x_t \rangle$, where $\odot$ denotes the coordinate-wise product, while the loss suffered is $(\hat{y}_t - y_t)^2$. The set $\mathcal{G}$ is an arbitrary non-empty, a priori specified subset of $\{0,1\}^d$ that allows the user of the algorithm to encode extra constraints on what subsets of features can be selected.
In this section we show that in this case a regret bound of size $\tilde{O}(\sqrt{\mathrm{poly}(d)\, T})$ is possible. The key idea that permits the improvement of the regret bound is that a randomized choice of a weight vector $W_t$ (and thus, of a subset) helps one construct unbiased estimates of the losses $\ell_t(\langle w, s \odot x_t \rangle)$ for all weight vectors $w$ and all subsets $s \in \mathcal{G}$, under some mild conditions on the distribution of $W_t$. That the construction of such unbiased estimates is possible, despite the fact that some feature values are unobserved, is due to the special algebraic structure of the prediction and loss functions. A similar construction has appeared in a different context, e.g., in the paper of Cesa-Bianchi et al. (2010).
The construction works as follows. Define the $d \times d$ matrix $X_t$ by $(X_t)_{i,j} = x_{t,i}\, x_{t,j}$ $(1 \le i, j \le d)$. Expanding the loss of the prediction $\hat{y}_t = \langle w, x_t \rangle$, we get that the loss of using $w \in \mathcal{W}$ is
$$\ell_t(w) \doteq \ell_t(\langle w, x_t \rangle) = w^\top X_t w - 2\, w^\top x_t\, y_t + y_t^2,$$
where with a slight abuse of notation we have introduced the loss function $\ell_t : \mathcal{W} \to \mathbb{R}$ (we will keep abusing the use of $\ell_t$ by overloading it based on the type of its argument). Clearly, it suffices to construct unbiased estimates of $\ell_t(w)$ for any $w \in \mathcal{W}$.
We will use a discretization approach. Therefore, assume that we are given a finite subset $\mathcal{W}'$ of $\mathcal{W}$, to be constructed later. In each step $t$, our algorithm will choose a random weight vector $W_t$ from a probability distribution supported on $\mathcal{W}'$. Let $p_t(w)$ be the probability of selecting the weight vector $w \in \mathcal{W}'$. For $1 \le i \le d$, let
$$q_t(i) = \sum_{w \in \mathcal{W}' :\, i \in s(w)} p_t(w)$$
be the probability that $s(W_t)$ will contain $i$, while for $1 \le i, j \le d$, let
$$q_t(i, j) = \sum_{w \in \mathcal{W}' :\, i, j \in s(w)} p_t(w)$$
be the probability that both $i, j \in s(W_t)$.² Assume that $p_t(\cdot)$ is constructed such that $q_t(i, j) > 0$ holds for any time $t$ and indices $1 \le i, j \le d$. This also implies that $q_t(i) > 0$ for all $1 \le i \le d$.
Define the vector $\tilde{x}_t \in \mathbb{R}^d$ and matrix $\tilde{X}_t \in \mathbb{R}^{d \times d}$ using the following equations:
$$\tilde{x}_{t,i} = \frac{\mathbb{I}\{i \in s(W_t)\}\, x_{t,i}}{q_t(i)}, \qquad (\tilde{X}_t)_{i,j} = \frac{\mathbb{I}\{i, j \in s(W_t)\}\, x_{t,i}\, x_{t,j}}{q_t(i, j)}. \tag{5}$$
It can be readily verified that $\mathbb{E}[\tilde{x}_t \mid p_t] = x_t$ and $\mathbb{E}[\tilde{X}_t \mid p_t] = X_t$. Further, notice that both $\tilde{x}_t$ and $\tilde{X}_t$ can be computed based on the information available at the end of round $t$, i.e., based on the feature values $(x_{t,i})_{i \in s(W_t)}$. Now, define the estimate of the prediction loss
$$\hat{\ell}_t(w) = w^\top \tilde{X}_t w - 2\, w^\top \tilde{x}_t\, y_t + y_t^2. \tag{6}$$
Note that $y_t$ can be readily computed from $\ell_t(\cdot)$, which is available to the algorithm (equivalently, we may assume that the algorithm observed $y_t$). Due to the linearity of expectation, we have $\mathbb{E}[\hat{\ell}_t(w) \mid p_t] = \ell_t(w)$. That is, $\hat{\ell}_t(w)$ provides an unbiased estimate of the loss $\ell_t(w)$ for any $w \in \mathcal{W}$. Hence, by adding a feature cost term, we get $\hat{\ell}_t(w) + \langle s(w), c \rangle$ as an estimate of the loss that the learner would have suffered at round $t$ had he chosen the weight vector $w$.
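The unbiasedness of the estimates (5) and (6) can be checked numerically by averaging over the sampling distribution; in the sketch below, the support of weight vectors and its probabilities are hypothetical, chosen only so that every $q_t(i,j)$ is positive.

```python
# Sanity check of the importance-weighted estimator: summing the
# estimates over the sampling distribution p_t recovers x_t and X_t
# exactly, hence E[hat-ell_t(w) | p_t] = ell_t(w) for every w.

import numpy as np

d = 3
x_t = np.array([1.0, -2.0, 0.5])
support = [  # (probability, feature indicator s(w)); every pair covered
    (0.5, np.array([1.0, 1.0, 0.0])),
    (0.3, np.array([1.0, 0.0, 1.0])),
    (0.2, np.array([0.0, 1.0, 1.0])),
]
q1 = sum(prob * s for prob, s in support)             # q_t(i)
q2 = sum(prob * np.outer(s, s) for prob, s in support)  # q_t(i, j)

X_t = np.outer(x_t, x_t)
Ex = sum(prob * (s * x_t) / q1 for prob, s in support)
EX = sum(prob * (np.outer(s, s) * X_t) / q2 for prob, s in support)
# Ex equals x_t and EX equals X_t up to floating-point error.
```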
² Note that, following our earlier suggestion, we view the $d$-dimensional binary vectors as subsets of $\{1, \ldots, d\}$.
Algorithm 1 The LQDExp3 Algorithm
Parameters: real numbers $\eta \ge 0$, $0 < \gamma \le 1$, a finite set $\mathcal{W}' \subseteq \mathcal{W}$, a distribution $\mu$ over $\mathcal{W}'$, horizon $T > 0$.
Initialization: $u_1(w) = 1$ $(w \in \mathcal{W}')$.
for $t = 1$ to $T$ do
    Draw $W_t \in \mathcal{W}'$ from the probability mass function
    $$p_t(w) = (1 - \gamma)\, \frac{u_t(w)}{U_t} + \gamma\, \mu(w), \qquad w \in \mathcal{W}',$$
    where $U_t = \sum_{w \in \mathcal{W}'} u_t(w)$.
    Obtain the feature values $(x_{t,i})_{i \in s(W_t)}$.
    Predict $\hat{y}_t = \sum_{i \in s(W_t)} W_{t,i}\, x_{t,i}$.
    for $w \in \mathcal{W}'$ do
        Update the weights, using (6) for the definition of $\hat{\ell}_t(w)$:
        $$u_{t+1}(w) = u_t(w)\, e^{-\eta\, (\hat{\ell}_t(w) + \langle c,\, s(w) \rangle)}, \qquad w \in \mathcal{W}'.$$
    end for
end for

2.2.1 LQDExp3 - A Discretization-based Algorithm
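A minimal runnable sketch of Algorithm 1 follows. It is our simplification, not the paper's tuned construction: a tiny hand-picked $\mathcal{W}'$, uniform exploration $\mu$ (which keeps every $q_t(i,j)$ positive), and ad-hoc $\eta$, $\gamma$ rather than the constants from the proof of Theorem 2.4.

```python
# LQDExp3 sketch: exponential weights over a small discretized weight
# set, with the importance-weighted loss estimates (5)-(6) plus feature
# costs. The synthetic target depends on feature 1 only.

import numpy as np

rng = np.random.default_rng(0)
d, T, eta, gamma = 2, 200, 0.1, 0.2
c = np.array([0.05, 0.05])                       # per-feature costs
W_prime = [np.array(w) for w in
           [(0.0, 0.0), (0.5, 0.0), (0.0, 0.5), (0.5, 0.5)]]
S = [(w != 0).astype(float) for w in W_prime]    # feature indicators s(w)
K = len(W_prime)
u = np.ones(K)
mu = np.full(K, 1.0 / K)                         # uniform exploration

total_loss = 0.0
for t in range(T):
    p = (1 - gamma) * u / u.sum() + gamma * mu
    k = rng.choice(K, p=p)
    x_t = rng.uniform(-1.0, 1.0, size=d)
    y_t = 0.5 * x_t[0]
    s_t = S[k]
    y_hat = float(W_prime[k] @ (s_t * x_t))      # predict from observed features
    total_loss += (y_hat - y_t) ** 2 + float(s_t @ c)
    # Importance-weighted estimates (5); exploration keeps q1, q2 > 0.
    q1 = sum(p[m] * S[m] for m in range(K))
    q2 = sum(p[m] * np.outer(S[m], S[m]) for m in range(K))
    x_tld = s_t * x_t / q1
    X_tld = np.outer(s_t, s_t) * np.outer(x_t, x_t) / q2
    # Exponential-weights update with the loss estimate (6) plus costs.
    for m, w in enumerate(W_prime):
        ell_hat = w @ X_tld @ w - 2.0 * (w @ x_tld) * y_t + y_t ** 2
        u[m] *= np.exp(-eta * (ell_hat + float(S[m] @ c)))
    u /= u.max()                                 # rescale for stability
```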
Next we show that the standard Exp3 algorithm, applied to a discretization of the weight space $\mathcal{W}$, achieves $O(\sqrt{dT})$ regret. The algorithm, called LQDExp3, is given as Algorithm 1. In the name of the algorithm, LQ stands for linear prediction with quadratic losses and D denotes discretization. Note that if the exploration distribution $\mu$ in the algorithm is such that $\sum_{w \in \mathcal{W}' :\, i, j \in s(w)} \mu(w) > 0$ for any $1 \le i, j \le d$, then $q_t(i, j) > 0$ will be guaranteed for all time steps. Using the notation $y_{\mathrm{lim}} = w_{\mathrm{lim}} x_{\mathrm{lim}}$ and $E_G = \max_{s \in \mathcal{G}} \sup_{w \in \mathcal{W} :\, \|w\|_* = 1} \|w \odot s\|_*$, we can state the following regret bound on the algorithm.
Theorem 2.4. Let $w_{\mathrm{lim}}, x_{\mathrm{lim}} > 0$, $c \in [0, \infty)^d$ be given, $\mathcal{W} \subseteq B_{\|\cdot\|_*}(0, w_{\mathrm{lim}})$ convex, $\mathcal{X} \subseteq B_{\|\cdot\|}(0, x_{\mathrm{lim}})$, and fix $T \ge 1$. Then, there exists a parameter setting for LQDExp3 such that the following holds: let $R_T$ denote the regret of LQDExp3 against the best linear predictor from $\mathrm{Lin}(\mathcal{W}, \mathcal{X})$ when LQDExp3 is used in an online free-label probing problem defined with the sequence $((x_t, y_t))_{1 \le t \le T}$ ($\|x_t\| \le x_{\mathrm{lim}}$, $|y_t| \le y_{\mathrm{lim}}$, $1 \le t \le T$), quadratic losses $\ell_t(y) = (y - y_t)^2$, and feature costs given by the vector $c$. Then,
$$\mathbb{E}[R_T] \le C \sqrt{T d\, (4 y_{\mathrm{lim}}^2 + \|c\|_1)(w_{\mathrm{lim}}^2 x_{\mathrm{lim}}^2 + 2 y_{\mathrm{lim}} w_{\mathrm{lim}} x_{\mathrm{lim}} + 4 y_{\mathrm{lim}}^2 + \|c\|_1) \ln(E_G\, T)},$$
where $C > 0$ is a universal constant (i.e., the value of $C$ does not depend on the problem parameters).
The actual parameter setting to be used with the algorithm is constructed in the proof. The computational complexity of LQDExp3 is exponential in the dimension $d$ due to the discretization step, hence it quickly becomes impractical when the number of features is large. On the other hand, one can easily modify the algorithm to run without discretization by replacing Exp3 with its continuous version. The resulting algorithm enjoys essentially the same regret bound, and can be implemented efficiently whenever efficient sampling is possible from the resulting distribution. This approach seems appealing since, at first look, it appears to involve sampling from truncated Gaussian distributions, which can be done efficiently. However, it is easy to see that when the sampling probabilities of some feature are small, the estimated loss will not be convex, as $\tilde{X}_t$ may fail to be positive semi-definite, and therefore the resulting distributions will not always be truncated Gaussians. Finding an efficient sampling procedure for such situations is an interesting open problem.
The optimality of LQDExp3 can be seen from the following lower bound on the regret:
Theorem 2.5. Let $d > 0$, and consider the online free-label probing problem with linear predictors, where $\mathcal{W} = \{w \in \mathbb{R}^d \mid \|w\|_1 \le w_{\mathrm{lim}}\}$ and $\mathcal{X} = \{x \in \mathbb{R}^d \mid \|x\|_\infty \le 1\}$. Assume, for all $t \ge 1$, that the loss functions are of the form $\ell_t(w) = (w^\top x_t - y_t)^2 + \langle s(w), c \rangle$, where $|y_t| \le 1$ and $c = 1/2 \cdot \mathbf{1} \in \mathbb{R}^d$. Then, for any prediction algorithm and for any $T \ge \frac{4d}{8 \ln(4/3)}$, there exists a sequence $((x_t, y_t))_{1 \le t \le T} \in (\mathcal{X} \times [-1, 1])^T$ such that the regret of the algorithm can be bounded from below as
$$\mathbb{E}[R_T] \ge \frac{\sqrt{2} - 1}{\sqrt{32 \ln(4/3)}}\, \sqrt{T d}.$$
3 Non-Free-Label Probing
If $c_{d+1} > 0$, the learner has to pay for observing the true label. This scenario is very similar to the well-known label-efficient prediction case in online learning (Cesa-Bianchi et al., 2006). In fact, the latter problem is a special case of this problem, immediately giving us that the regret of any algorithm is at least of order $T^{2/3}$. It turns out that if one observes the (costly) label in a given round, then observing all the features at the same time does not affect the regret rate. The resulting "revealing action algorithm", given in Algorithm 3 in the Appendix, achieves the following regret bound for finite expert classes:
Lemma 3.1. Given any non-free-label online probing problem with finitely many experts, Algorithm 3 with appropriately set parameters achieves
$$\mathbb{E}[R_T] \le C \max\left( T^{2/3} \left( \ell_{\max}^2 \|c\|_1 \ln |F| \right)^{1/3},\ \ell_{\max} \sqrt{T \ln |F|} \right)$$
for some constant $C > 0$.
Using the fact that, in the linear prediction case, approximately $(2 T L W X + 1)^d$ experts are needed to approximate each expert in $\mathcal{W}$ with precision $\alpha = \frac{1}{LT}$ in the worst-case empirical covering sense, we obtain the following theorem (note, however, that the complexity of the algorithm is again exponential in the dimension $d$, as we need to keep a weight for each expert):
Theorem 3.1. Given any non-free-label online probing problem with linear predictor experts and a Lipschitz prediction loss function with constant $L$, Algorithm 3 with appropriately set parameters, running on a sufficiently discretized predictor set, achieves
$$\mathbb{E}[R_T] \le C \max\left( T^{2/3} \left( \ell_{\max}^2 \|c\|_1\, d \ln(T L W X) \right)^{1/3},\ \ell_{\max} \sqrt{T d \ln(T L W X)} \right)$$
for some universal constant $C > 0$.
That Algorithm 3 is essentially optimal for linear predictors and quadratic losses is a consequence of the following almost matching lower bound:
Theorem 3.2. There exists a constant $C$ such that, for any non-free-label probing problem with linear predictors, quadratic loss, and $c_j > \frac{1}{d} \sum_{i=1}^{d} c_i - \frac{1}{2d}$ for every $j = 1, \ldots, d$, the expected regret of any algorithm can be lower bounded by
$$\mathbb{E}[R_T] \ge C\, (c_{d+1}\, d)^{1/3}\, T^{2/3}.$$
4 Conclusions
We introduced a new problem called online probing. In this problem, the learner has the option of choosing the subset of features he wants to observe, as well as the option of observing the true label, but has to pay for this information. This setup produced new challenges in solving the online problem. We showed that when the labels are free, it is possible to devise algorithms with the optimal regret rate $\Theta(\sqrt{T})$ (up to logarithmic factors), while in the non-free-label case we showed that only $\Theta(T^{2/3})$ is achievable. We gave algorithms that achieve the optimal regret rate (up to logarithmic factors) when the number of experts is finite or in the case of linear prediction. Unfortunately, either our bounds or the computational complexity of the corresponding algorithms are exponential in the problem dimension, and it is an open problem whether these disadvantages can be eliminated simultaneously.
Acknowledgements
The authors thank Yevgeny Seldin for finding a bug in an earlier version of the paper. This work was
supported in part by DARPA grant MSEE FA8650-11-1-7156, the Alberta Innovates Technology
Futures, AICML, and the Natural Sciences and Engineering Research Council (NSERC) of Canada.
References
Agarwal, A. and Duchi, J. C. (2011). Distributed delayed stochastic optimization. In Shawe-Taylor, J., Zemel, R. S., Bartlett, P. L., Pereira, F. C. N., and Weinberger, K. Q., editors, NIPS, pages 873-881.
Auer, P., Cesa-Bianchi, N., Freund, Y., and Schapire, R. E. (2002). The nonstochastic multiarmed bandit problem. SIAM J. Comput., 32(1):48-77.
Bartók, G. (2012). The role of information in online learning. PhD thesis, Department of Computing Science, University of Alberta.
Cesa-Bianchi, N. and Lugosi, G. (2006). Prediction, Learning, and Games. Cambridge University Press.
Cesa-Bianchi, N., Lugosi, G., and Stoltz, G. (2005). Minimizing regret with label efficient prediction. IEEE Transactions on Information Theory, 51(6):2152-2162.
Cesa-Bianchi, N., Lugosi, G., and Stoltz, G. (2006). Regret minimization under partial monitoring. Math. Oper. Res., 31(3):562-580.
Cesa-Bianchi, N., Shalev-Shwartz, S., and Shamir, O. (2010). Efficient learning with partially observed attributes. CoRR, abs/1004.4421.
Dekel, O., Shamir, O., and Xiao, L. (2010). Learning to classify with missing and corrupted features. Machine Learning, 81(2):149-178.
Joulani, P., György, A., and Szepesvári, C. (2013). Online learning under delayed feedback. In 30th International Conference on Machine Learning, Atlanta, GA, USA.
Kapoor, A. and Greiner, R. (2005). Learning and classifying under hard budgets. In European Conference on Machine Learning (ECML), pages 166-173.
Lizotte, D., Madani, O., and Greiner, R. (2003). Budgeted learning of naive-Bayes classifiers. In Conference on Uncertainty in Artificial Intelligence (UAI).
Mannor, S. and Shamir, O. (2011). From bandits to experts: On the value of side-observations. CoRR, abs/1106.2436.
Mesterharm, C. (2005). On-line learning with delayed label feedback. In Proceedings of the 16th International Conference on Algorithmic Learning Theory, ALT'05, pages 399-413, Berlin, Heidelberg. Springer-Verlag.
Rostamizadeh, A., Agarwal, A., and Bartlett, P. L. (2011). Learning with missing features. In UAI, pages 635-642.
Settles, B. (2009). Active learning literature survey. Technical report.
Weinberger, M. J. and Ordentlich, E. (2006). On delayed prediction of individual sequences. IEEE Trans. Inf. Theor., 48(7):1959-1976.
Linear Operator for Object Recognition
Ronen Basri
Shimon Ullman*
M.I.T. Artificial Intelligence Laboratory
and Department of Brain and Cognitive Science
545 Technology Square
Cambridge, MA 02139
Abstract
Visual object recognition involves the identification of images of 3-D objects seen from arbitrary viewpoints. We suggest an approach to object
recognition in which a view is represented as a collection of points given
by their location in the image. An object is modeled by a set of 2-D views
together with the correspondence between the views. We show that any
novel view of the object can be expressed as a linear combination of the
stored views. Consequently, we build a linear operator that distinguishes
between views of a specific object and views of other objects. This operator can be implemented using neural network architectures with relatively
simple structures.
1
Introduction
Visual object recognition involves the identification of images of 3-D objects seen
from arbitrary viewpoints. In particular, objects often appear in images from previously unseen viewpoints. In this paper we suggest an approach to object recognition
in which rigid objects are recognized from arbitrary viewpoint. The method can be
implemented using neural network architectures with relatively simple structures.
In our approach a view is represented as a collection of points given by their location in the image, An object is modeled by a small set of views together with the
correspondence between these views. We show that any novel view of the object
* Also, Weizmann Inst. of Science, Dept. of Applied Math., Rehovot 76100, Israel
can be expressed as a linear combination of the stored views. Consequently, we
build a linear operator that distinguishes views of a specific object from views of
other objects. This operator can be implemented by a neural network.
The method has several advantages. First, it handles correctly rigid objects, but is
not restricted to such objects. Second, there is no need in this scheme to explicitly
recover and represent the 3-D structure of objects. Third, the computations involved
are often simpler than in previous schemes.
2
Previous Approaches
Object recognition involves a comparison of a viewed image against object models
stored in memory. Many existing schemes to object recognition accomplish this task
by performing a template comparison between the image and each of the models,
often after compensating for certain variations due to the different positions and
orientations in which the object is observed. Such an approach is called alignment
(Ullman, 1989), and a similar approach is used in (Fischler & Bolles 1981, Lowe 1985, Faugeras & Hebert 1986, Chien & Aggarwal 1987, Huttenlocher & Ullman 1987, Thompson & Mundy 1987).
The majority of alignment schemes use object-centered representations to model the
objects. In these models the 3-D structure of the objects is explicitly represented.
The acquisition of models in these schemes therefore requires a separate process to
recover the 3-D structure of the objects.
A number of recent studies use 2-D viewer-centered representations for object recognition. Abu-Mostafa & Psaltis (1987), for instance, developed a neural network that
continuously collects and stores the observed views of objects. When a new view
is observed it is recognized if it is sufficiently similar to one of the previously seen
views. The system is very limited in its ability to recognize objects from novel
views. It does not use information available from a collection of object views to
extend the range of recognizable views beyond the range determined by each of the
stored views separately.
In the scheme below we suggest a different kind of viewer-centered representations
to model the objects. An object is modeled by a set of its observed images with the
correspondence between points in the images. We show that only a small number
of images is required to predict the appearance of the object from all possible
viewpoints. These predictions are exact for rigid objects, but are not confined to
such objects. We also suggest a neural network to implement the scheme.
A similar representation was recently used by Poggio & Edelman (1990) to develop a
network that recognizes objects using radial basis functions (RBFs). The approach
presented here has several advantages over this approach. First, by using the linear
combinations of the stored views rather than applying radial basis functions to
them we obtain exact predictions for the novel appearances of objects rather than
an approximation. Moreover, a smaller number of views is required in our scheme to
predict the appearance of objects from all possible views. For example, when a rigid
object that does not introduce self occlusion (such as a wired object) is considered,
predicting its appearance from all possible views requires only three views under
the LC Scheme and about sixty views under the RBFs Scheme.
Basri and Ullman
3
The Linear Combinations (LC) Scheme
In this section we introduce the Linear Combinations (LC) Scheme. Additional
details about the scheme can be found in (Ullman & Basri, 1991). Our approach is
based on the following observation. For many continuous transformations of interest
in recognition, such as 3-D rotation, translation, and scaling, every possible view
of a transforming object can be expressed as a linear combination of other views of
the object. In other words, the set of possible images of an object undergoing rigid
3-D transformations and scaling is embedded in a linear space, spanned by a small
number of 2-D images.
We start by showing that any image of an object undergoing rigid transformations
followed by an orthographic projection can be expressed as a linear combination of
a small number of views. The coefficients of this combination may differ for the x- and y-coordinates. That is, the intermediate view of the object may be given by two
linear combinations, one for the x-coordinates and the other for the y-coordinates.
In addition, certain functional restrictions may hold among the different coefficients.
We represent an image by two coordinate vectors: one contains the x-values of the object's points, and the other contains their y-values. In other words, an image P is described by x = (x1, . . . , xn) and y = (y1, . . . , yn), where every (xi, yi), 1 ≤ i ≤ n, is an image point. The order of the points in these vectors is preserved in all the different views of the same object; namely, if P and P′ are two views of the same object, then (xi, yi) ∈ P and (x′i, y′i) ∈ P′ are in correspondence (or, in other words, they are the projections of the same object point).
Claim:
The set of coordinate vectors of an object obtained from all different
viewpoints is embedded in a 4-D linear space.
(A proof is given in Appendix A.)
Following this claim we can represent the entire space of views of an object by a
basis that consists of any four linearly independent vectors taken from the space.
In particular, we can construct a basis using familiar views of the object. Two
images supply four such vectors and therefore are often sufficient to span the space.
By considering the linear combinations of the model vectors we can reproduce any
possible view of the object.
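As a quick numerical check of this claim (my own sketch, not part of the paper; NumPy is assumed), one can generate a random rigid point set, render two model views under random rotations, scales and translations, and verify by least squares that a third, novel view is reproduced exactly by a linear combination of the four model coordinate vectors:

```python
import numpy as np

rng = np.random.default_rng(0)

def random_view(points, rng):
    """x- and y-coordinate vectors of one orthographic view after a random
    rotation, scale, and translation (a rigid transformation plus scaling)."""
    rot, _ = np.linalg.qr(rng.standard_normal((3, 3)))  # random orthogonal matrix
    s = rng.uniform(0.5, 2.0)
    tx, ty = rng.uniform(-1.0, 1.0, size=2)
    p = s * (points @ rot.T)
    return p[:, 0] + tx, p[:, 1] + ty

points = rng.standard_normal((20, 3))      # a rigid "object" of 20 points
x1, y1 = random_view(points, rng)          # model view 1
x2, y2 = random_view(points, rng)          # model view 2
xn, yn = random_view(points, rng)          # a novel view to be reproduced

# The four coordinate vectors of two model views generically span the 4-D
# space span{x, y, z, 1}, so the novel view's vectors lie in their span.
basis = np.column_stack([x1, y1, x2, y2])
ax, *_ = np.linalg.lstsq(basis, xn, rcond=None)
ay, *_ = np.linalg.lstsq(basis, yn, rcond=None)
err = max(np.linalg.norm(basis @ ax - xn), np.linalg.norm(basis @ ay - yn))
print(err)   # ~ 0: the novel view is an exact linear combination
```

The residual vanishes (up to rounding) for any rigid view, which is exactly the content of the claim; for a non-rigid deformation of the points it would not.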
It is important to note that the set of views of a rigid object does not occupy the
entire linear 4-D space. Rather, the coefficients of the linear combinations reproducing valid images satisfy, in addition, two quadratic constraints. (See Appendix
A.) In order to verify that an object undergoes a rigid transformation (as opposed
to a general 3-D affine transformation) the model must consist of at least three
snapshots of the object.
Many 3-D rigid objects are bounded with smooth curved surfaces. The contours of
such objects change their position on the object whenever the viewing position is
changed. The linear combinations scheme can be extended to handle these objects
as well. In these cases the scheme gives accurate approximations to the appearance
of these objects (Ullman & Basri, 1991).
The linear combination scheme assumes that the same object points are visible in
the different views. When the views are sufficiently different, this will no longer hold,
Linear Operator for Object Recognition
due to self-occlusion. To represent an object from all possible viewing directions
(e.g., both "front" and "back"), a number of different models of this type will be
required. This notion is similar to the use of different object aspects suggested by
Koenderink & Van Doorn (1979). (Other aspects of occlusion are discussed in the
next section.)
4
Recognizing an Object Using the LC Scheme
In the previous section we have shown that the set of views of a rigid object is
embedded in a linear space of a small dimension. In this section we define a linear
operator that uses this property to recognize objects. We then show how this
operator can be used in the recognition process.
Let P1, . . . , Pk be the model views, and P be a novel view of the same object.
According to the previous section there exist coefficients a1, . . . , ak such that
P = a1·P1 + · · · + ak·Pk. Suppose L is a linear operator such that L·Pi = q for every
1 ≤ i ≤ k and some constant vector q; then L transforms P to q (up to a scale
factor), L·P = (a1 + · · · + ak)·q. If in addition L transforms vectors outside the space
spanned by the model to vectors other than q, then L distinguishes views of the
object from views of other objects. The vector q then serves as a "name" for the
object. It can either be the zero vector, in which case L transforms every novel view
of the object to zero, or it can be a familiar view of the object, in which case L has
an associative property, namely, it takes a novel view of an object and transforms
it to a familiar view. A constructive definition of L is given in Appendix B.
The core of the recognition process we propose includes a neural network that implements the linear operator defined above. The input to this network is a coordinate
vector created from the image, and the output is an indication whether the image
is in fact an instance of the modeled object. The operator can be implemented
by a simple, one layer, neural network with only feedforward connections, the type
presented by Kohonen, Oja, & Lehtio (1981). It is interesting to note that this
operator can be modified to recognize several models in parallel.
To apply this network to the image the image should first be represented by its
coordinate vectors. The construction of the coordinate vectors from the image can
be implemented using cells with linear response properties, the type of cells encoding
eye positions found by Zipser & Andersen (1988). The positions obtained should
be ordered according to the correspondence of the image points with the model
points. Establishing the correspondence is a difficult task and an obstacle to most
existing recognition schemes. The phenomenon of apparent motion (Marr & Ullman
1981) suggests, however, that the human visual system is capable of handling this
problem.
In many cases objects seen in the image are partially occluded. Sometimes also some
of the points cannot be located reliably. To handle these cases the linear operator
should be modified to exclude the missing points. The computation of the updated
operator from the original one involves computing a pseudo-inverse. A method to
compute the pseudo-inverse of a matrix in real time using neural networks has been
suggested by Yeates (1991).
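The linear-algebra core of this idea can be sketched as follows (an illustrative example of mine, not Yeates's neural network): recover the combination coefficients from the visible points alone with a pseudo-inverse of the reduced model matrix, then predict the positions of the occluded points:

```python
import numpy as np

rng = np.random.default_rng(1)

def coord_vectors(points, rng):
    """x- and y-coordinate vectors of one random orthographic view."""
    rot, _ = np.linalg.qr(rng.standard_normal((3, 3)))
    p = rng.uniform(0.5, 2.0) * (points @ rot.T)
    return p[:, 0] + rng.uniform(-1, 1), p[:, 1] + rng.uniform(-1, 1)

pts = rng.standard_normal((25, 3))
x1, y1 = coord_vectors(pts, rng)
x2, y2 = coord_vectors(pts, rng)
model = np.column_stack([x1, y1, x2, y2])   # generically spans span{x, y, z, 1}
xn, _ = coord_vectors(pts, rng)             # x-coordinates of a novel view

visible = np.ones(25, dtype=bool)
visible[rng.choice(25, size=8, replace=False)] = False  # 8 occluded points

# Recover the combination coefficients from the visible points alone via the
# pseudo-inverse of the reduced model matrix, then predict all 25 positions.
coef = np.linalg.pinv(model[visible]) @ xn[visible]
err = np.abs(model @ coef - xn).max()       # occluded points predicted too
print(err)   # ~ 0
```

As long as enough points remain visible for the reduced matrix to keep full column rank, the coefficients, and hence the occluded positions, are recovered exactly.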
5
Summary
We have presented a method for recognizing 3-D objects from 2-D images. In this
method, an object-model is represented by the linear combinations of several 2-D
views of the object. It has been shown that for objects undergoing rigid transformations the set of possible images of a given object is embedded in a linear
space spanned by a small number of views. Rigid transformations can be distinguished from more general linear transformations of the object by testing certain
constraints placed upon the coefficients of the linear combinations. The method
applies to objects with sharp as well as smooth boundaries.
We have proposed a linear operator to map the different views of the same object
into a common representation, and we have presented a simple neural network
that implements this operator. In addition, we have suggested a scheme to handle
occlusions and unreliable measurements. One difficulty in this scheme is that it
requires finding the correspondence between the image and the model views. This
problem is left for future research.
The linear combination scheme described above was implemented and applied to a
number of objects. Figures 1 and 2 show the application of the linear combinations
method to artificially created and real life objects. The figures show a number of
object models, their linear combinations, and the agreement between these linear
combinations and actual images of the objects. Figure 3 shows the results of applying a linear operator with associative properties to artificial objects. It can be
seen that whenever the operator is fed with a novel view of the object for which it
was designed it returns a familiar view of the object.
Figure 1: Top: three model pictures of a pyramid. Bottom: two of their linear combinations.
Appendix A
In this appendix we prove that the coordinate vectors of images of a rigid object lie
in a 4-D linear space. We also show that the coefficients of the linear combinations
that produce valid images of the object satisfy, in addition, two quadratic constraints.
Let O be a set of object points, and let x = (x1, . . . , xn), y = (y1, . . . , yn), and
Figure 2: Top: three model pictures of a VW car. Bottom: a linear combination of the
three images (left), an actual edge image (middle), and the two images overlayed (right).
Figure 3: Top: applying an associative pyramidal operator to a pyramid (left) returns a
model view of the pyramid (right, compare with Figure 1 top left). Bottom: applying the
same operator to a cube (left) returns an unfamiliar image (right).
z = (z1, . . . , zn) such that (xi, yi, zi) ∈ O for every 1 ≤ i ≤ n. Let P be a view of the
object, and let x̄ = (x̄1, . . . , x̄n) and ȳ = (ȳ1, . . . , ȳn) such that (x̄i, ȳi) is the position
of (xi, yi, zi) in P. We call x, y, and z the coordinate vectors of O, and x̄ and ȳ the
corresponding coordinate vectors in P. Assume P is obtained from O by applying
a rotation matrix R, a scale factor s, and a translation vector (tx, ty) followed by
an orthographic projection.
Claim: There exist coefficients a1, a2, a3, a4 and b1, b2, b3, b4 such that:
x̄ = a1·x + a2·y + a3·z + a4·1
ȳ = b1·x + b2·y + b3·z + b4·1
where 1 = (1, . . . , 1) ∈ R^n.
Proof:
Simply by assigning:
a1 = s·r11    b1 = s·r21
a2 = s·r12    b2 = s·r22
a3 = s·r13    b3 = s·r23
a4 = tx       b4 = ty
Therefore, x̄, ȳ ∈ span{x, y, z, 1} regardless of the viewpoint from which x̄ and
ȳ are taken. Notice that the set of views of a rigid object does not occupy the
entire linear 4-D space. Rather, the coefficients satisfy, in addition, two quadratic
constraints:
a1² + a2² + a3² = b1² + b2² + b3²
a1·b1 + a2·b2 + a3·b3 = 0
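These two constraints are easy to check numerically. The following sketch (mine, not from the paper) builds the coefficients exactly as in the proof above and verifies both identities; any orthogonal R works, since only the orthonormality of its rows is used:

```python
import numpy as np

rng = np.random.default_rng(2)

# A random orthogonal matrix; only the orthonormality of its first two rows
# enters the constraints, so we do not need to force det(R) = +1 here.
R, _ = np.linalg.qr(rng.standard_normal((3, 3)))
s = rng.uniform(0.5, 2.0)                      # scale
tx, ty = rng.uniform(-1.0, 1.0, size=2)        # translation

# Coefficients exactly as assigned in the proof of the claim.
a = np.array([s * R[0, 0], s * R[0, 1], s * R[0, 2], tx])
b = np.array([s * R[1, 0], s * R[1, 1], s * R[1, 2], ty])

c1 = (a[0]**2 + a[1]**2 + a[2]**2) - (b[0]**2 + b[1]**2 + b[2]**2)
c2 = a[0] * b[0] + a[1] * b[1] + a[2] * b[2]
print(c1, c2)   # both vanish (up to rounding) for any rigid transformation
```

For a general affine transformation of the object the same coefficients exist, but these two identities generically fail, which is how rigidity can be tested.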
Appendix B
A "recognition matrix" is defined as follows. Let {p1, . . . , pk} be a set of k linearly
independent vectors representing the model pictures. Let {pk+1, . . . , pn} be a set of
vectors such that {p1, . . . , pn} are all linearly independent. We define the following
matrices:
P = (p1, . . . , pk, pk+1, . . . , pn)
Q = (q, . . . , q, pk+1, . . . , pn)
We require that:
LP = Q
Therefore:
L = QP⁻¹
Note that since P is composed of n linearly independent vectors, the inverse matrix
P⁻¹ exists; therefore L can always be constructed.
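A minimal numerical sketch of this construction (illustrative; the dimensions n = 12, k = 4 and the random vectors are arbitrary choices of mine): build P and Q, form L = QP⁻¹, and check that L maps a novel combination of the model pictures onto (a multiple of) the name vector q:

```python
import numpy as np

rng = np.random.default_rng(3)
n, k = 12, 4                                   # illustrative sizes

models = rng.standard_normal((n, k))           # model picture vectors p1..pk
rest = rng.standard_normal((n, n - k))         # completion to a full basis
P = np.column_stack([models, rest])            # generically invertible
q = rng.standard_normal(n)                     # the object's "name" vector
Q = np.column_stack([np.repeat(q[:, None], k, axis=1), rest])

L = Q @ np.linalg.inv(P)                       # recognition matrix: L P = Q

# A novel view is a combination of the model pictures; positive coefficients
# keep the scale factor (their sum) away from zero in this sketch.
coef = rng.uniform(0.5, 1.5, size=k)
novel = models @ coef
out = L @ novel                                # = (sum of coefficients) * q
cosine = out @ q / (np.linalg.norm(out) * np.linalg.norm(q))
print(cosine)   # 1.0 up to rounding: L maps the novel view onto q
```

A vector outside span{p1, . . . , pk} is generically mapped to something far from the direction of q, which is what lets L discriminate views of the modeled object from other inputs.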
Acknowledgments
We wish to thank Yael Moses for commenting on the final version of this paper.
This report describes research done at the Massachusetts Institute of Technology
within the Artificial Intelligence Laboratory. Support for the laboratory's artificial
intelligence research is provided in part by the Advanced Research Projects Agency
of the Department of Defense under Office of Naval Research contract N0001485-K-0124. Ronen Basri is supported by the McDonnell-Pew and the Rothchild
postdoctoral fellowships.
References
Abu-Mostafa, Y.S. & Psaltis, D., 1987. Optical neural computing. Scientific American, 256, 66-73.
Chien, C.H. & Aggarwal, J.K., 1987. Shape recognition from single silhouette.
Proc. of ICCV Conf. (London) 481-490.
Faugeras, O.D. & Hebert, M., 1986. The representation, recognition and location
of 3-D objects. Int. J. Robotics Research, 5(3), 27-52.
Fischler, M.A. & Bolles, R.C., 1981. Random sample consensus: a paradigm for
model fitting with application to image analysis and automated cartography.
Communications of the ACM, 24(6), 381-395.
Huttenlocher, D.P. & Ullman, S., 1987. Object recognition using alignment. Proc.
of ICCV Conf. (London), 102-111.
Koenderink, J.J. & Van Doorn, A.J., 1979. The internal representation of solid
shape with respect to vision. Biol. Cybernetics 32, 211-216.
Kohonen, T., Oja, E., & Lehtio, P., 1981. Storage and processing of information in distributed associative memory systems. In Hinton, G.E. & Anderson, J.A. (eds.), Parallel Models of Associative Memory. Hillsdale, NJ: Lawrence Erlbaum Associates, 105-143.
Lowe, D.G., 1985. Perceptual Organization and Visual Recognition. Boston:
Kluwer Academic Publishing.
Marr, D. & Ullman, S., 1981. Directional selectivity and its use in early visual
processing. Proc. R. Soc. Lond. B 211, 151-180.
Poggio, T. & Edelman, S., 1990. A network that learns to recognize three-dimensional objects. Nature, Vol. 343, 263-266.
Thompson, D.W. & Mundy, J.L., 1987. Three dimensional model matching from an unconstrained viewpoint. Proc. IEEE Int. Conf. on Robotics and Automation, Raleigh, N.C., 208-220.
Ullman, S. & Basri, R., 1991. Recognition by Linear Combinations of Models.
IEEE Trans. on Pattern Analysis and Machine Intelligence, Vol. 13, No. 10,
pp. 992-1006
Ullman, S., 1989. Aligning pictorial descriptions: An approach to object recognition. Cognition, 32(3), 193-254. Also: 1986, A.I. Memo 931, The Artificial Intelligence Lab., M.I.T.
Yeates, M.C., 1991. A neural network for computing the pseudo-inverse of a
matrix and application to Kalman filtering. Tech. Report, California Institute
of Technology.
Zipser, D. & Andersen, R.A., 1988. A back-propagation programmed network that
simulates response properties of a subset of posterior parietal neurons. Nature,
331, 679-684.
The Pareto Regret Frontier
Wouter M. Koolen
Queensland University of Technology
[email protected]
Abstract
Performance guarantees for online learning algorithms typically take the form of
regret bounds, which express that the cumulative loss overhead compared to the
best expert in hindsight is small. In the common case of large but structured expert
sets we typically wish to keep the regret especially small compared to simple
experts, at the cost of modest additional overhead compared to more complex
others. We study which such regret trade-offs can be achieved, and how.
We analyse regret w.r.t. each individual expert as a multi-objective criterion in
the simple but fundamental case of absolute loss. We characterise the achievable
and Pareto optimal trade-offs, and the corresponding optimal strategies for each
sample size both exactly for each finite horizon and asymptotically.
1
Introduction
One of the central problems studied in online learning is prediction with expert advice. In this task
a learner is given access to K strategies, customarily referred to as experts. He needs to make a
sequence of T decisions with the objective of performing as well as the best expert in hindsight.
This goal can be achieved
p with modest overhead, called regret. Typical algorithms, e.g. Hedge [1]
with learning rate ? = 8/T ln K, guarantee
p
LT ? LkT ?
T /2 ln K
for each expert k.
(1)
where LT and LkT are the cumulative losses of the learner and expert k after all T rounds.
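For illustration (my sketch, not from the paper), guarantee (1) can be checked numerically with the standard exponentially-weighted-average form of Hedge; here random losses in [0, 1] stand in for an arbitrary loss sequence, and the bound must hold for every sequence:

```python
import math
import random

random.seed(0)
T, K = 10_000, 10
eta = math.sqrt(8 * math.log(K) / T)   # the tuning quoted above

cum = [0.0] * K          # cumulative expert losses L_t^k
learner = 0.0            # cumulative learner loss L_t

for t in range(T):
    # Weights w_k proportional to exp(-eta * L_t^k); subtract the minimum
    # cumulative loss before exponentiating for numerical stability.
    m = min(cum)
    w = [math.exp(-eta * (c - m)) for c in cum]
    z = sum(w)
    losses = [random.random() for _ in range(K)]   # adversary's losses in [0, 1]
    learner += sum(wk * lk for wk, lk in zip(w, losses)) / z
    cum = [c + l for c, l in zip(cum, losses)]

regret = learner - min(cum)
bound = math.sqrt(T / 2 * math.log(K))
print(regret, bound)   # regret never exceeds the bound
```

Note the uniformity: the same budget √((T/2) ln K) is promised against every expert, which is exactly the behaviour the next paragraphs set out to relax.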
Here we take a closer look at that right-hand side. For it is not always desirable to have a uniform
regret bound w.r.t. all experts. Instead, we may want to single out a few special experts and demand
to be really close to them, at the cost of increased overhead compared to the rest. When the number
of experts K is large or infinite, such favouritism even seems unavoidable for non-trivial regret
bounds. The typical proof of the regret bound (1) suggests that the following can be guaranteed as
well. For each choice of probability distribution q on experts, there is an algorithm that guarantees
LT − LkT ≤ √((T/2)(− ln q(k)))    for each expert k.    (2)
However, it is not immediately obvious how this can be achieved. For example, the Hedge learning
rate η would need to be tuned differently for different experts. We are only aware of a single
(complex) algorithm that achieves something along these lines [2]. On the flip side, it is also not
obvious that this trade-off profile is optimal.
In this paper we study the Pareto (achievable and non-dominated) regret trade-offs. Let us say that a
candidate trade-off ⟨r1, . . . , rK⟩ ∈ R^K is T-realisable if there is an algorithm that guarantees

LT − LkT ≤ rk    for each expert k.
Which trade-offs are realisable? Among them, which are optimal? And what is the strategy that
witnesses these realisable trade-offs?
1.1
This paper
We resolve the preceding questions for the simplest case of absolute loss, where K = 2. We
first obtain an exact characterisation of the set of realisable trade-offs. We then construct for each
realisable profile a witnessing strategy. We also give a randomised procedure for optimal play that
extends the randomised procedures for balanced regret profiles from [3] and later [4, 5].
We then focus on the relation between priors and regret bounds, to see if the particular form (2)
is achievable, and if so, whether it is optimal. To this end, we characterise the asymptotic Pareto
frontier as T → ∞. We find that the form (2) is indeed achievable but fundamentally sub-optimal.
This is of philosophical interest as it hints that approaching absolute loss by essentially reducing
it to information theory (including Bayesian and Minimum Description Length methods, relative
entropy based optimisation (instance of Mirror Descent), Defensive Forecasting etc.) is lossy.
Finally, we show that our solution for absolute loss equals that of K = 2 experts with bounded linear
loss. We then show how to obtain the bound (1) for K ? 2 experts using a recursive combination
of two-expert predictors. Counter-intuitively, this cannot be achieved with a balanced binary tree of
predictors, but requires the most unbalanced tree possible. Recursive combination with non-uniform
prior weights allows us to obtain (2) (with higher constant) for any prior q.
1.2
Related work
Our work lies in the intersection of two lines of work, and uses ideas from both. On the one hand
there are the game-theoretic (minimax) approaches to prediction with expert advice. In [6] Cesa-Bianchi, Freund, Haussler, Helmbold, Schapire and Warmuth analysed the minimax strategy for
absolute loss with a known time horizon T . In [5] Cesa-Bianchi and Shamir used random walks to
implement it efficiently for K = 2 experts or K ? 2 static experts. A similar analysis was given
by Koolen in [4] with an application to tracking. In [7] Abernethy, Langford and Warmuth obtained
the optimal strategy for absolute loss with experts that issue binary predictions, now controlling
the game complexity by imposing a bound on the loss of the best expert. Then in [3] Abernethy,
Warmuth and Yellin obtained the worst case optimal algorithm for K ? 2 arbitrary experts. More
general budgets were subsequently analysed by Abernethy and Warmuth in [8]. Connections between minimax values and algorithms were studied by Rakhlin, Shamir and Sridharan in [9].
On the other hand there are the approaches that do not treat all experts equally. Freund and Schapire
obtain a non-uniform bound for Hedge in [1] using priors, although they leave the tuning problem
open. The tuning problem was addressed by Hutter and Poland in [2] using two-stages of Follow
the Perturbed Leader. Even-Dar, Kearns, Mansour and Wortman characterise the achievable trade-offs when we desire especially small regret compared to a fixed average of the experts' losses in
[10]. Their bounds were subsequently tightened by Kapralov and Panigrahy in [11]. An at least
tangentially related problem is to ensure smaller regret when there are several good experts. This
was achieved by Chaudhuri, Freund and Hsu in [12], and later refined by Chernov and Vovk in [13].
2
Setup
The absolute loss game is one of the core decision problems studied in online learning [14]. In it,
the learner sequentially predicts T binary outcomes. Each round t ∈ {1, . . . , T} the learner assigns
a probability pt ∈ [0, 1] to the next outcome being a 1, after which the actual outcome xt ∈ {0, 1} is
revealed, and the learner suffers absolute loss |pt − xt|. Note that absolute loss equals expected 0/1
loss, that is, the probability of a mistake if a "hard" prediction in {0, 1} is sampled with bias p on 1.
Realising that the learner cannot avoid high cumulative loss without assumptions on the origin of
the outcomes, the learner's objective is defined to ensure low cumulative loss compared to a fixed
set of baseline strategies. Meeting this goal ensures that the easier the outcome sequence (i.e. for
which some reference strategy has low loss), the lower the cumulative loss incurred by the learner.
[Figure 1 plots: horizontal axis r0 (regret w.r.t. 0), vertical axis r1 (regret w.r.t. 1); panel (a) shows the frontier curves for T = 1, . . . , 10.]
(a) The Pareto trade-off profiles for small T . The
sets GT consist of the points to the north-east of
each curve.
(b) Realisable trade-off profiles for T = 0, 1, 2, 3.
The vertices on the profile for each horizon T are
numbered 0, . . . , T from left to right.
Figure 1: Exact regret trade-off profile
The regret w.r.t. the strategy k ∈ {0, 1} that always predicts k is given by¹

RTk := Σ_{t=1}^{T} (|pt − xt| − |k − xt|).
Minimising regret, defined in this way, is a multi-objective optimisation problem. The classical
approach is to "scalarise" it into the single objective RT := maxk RTk, that is, to ensure small regret
compared to the best expert in hindsight. In this paper we study the full Pareto trade-off curve.
Definition 1. A candidate trade-off ⟨r0, r1⟩ ∈ R^2 is called T-realisable for the T-round absolute
loss game if there is a strategy that keeps the regret w.r.t. each k ∈ {0, 1} below rk, i.e. if

∃p1 ∀x1 · · · ∃pT ∀xT : RT0 ≤ r0 and RT1 ≤ r1

where pt ∈ [0, 1] and xt ∈ {0, 1} in each round t. We denote the set of all T-realisable pairs by GT.
This definition extends easily to other losses, many experts, fancy reference combinations of experts (e.g. shifts, drift, mixtures), protocols with side information etc. We consider some of these
extensions in Section 5, but for now our goal is to keep it as simple as possible.
3
The exact regret trade-off profile
In this section we characterise the set GT ⊆ R^2 of T-realisable trade-offs. We show that it is a
convex polygon, that we subsequently characterise by its vertices and edges. We also exhibit the
optimal strategy witnessing each Pareto optimal trade-off and discuss the connection with random
walks. We first present some useful observations about GT .
The linearity of the loss as a function of the prediction already renders GT highly regular.
Lemma 2. The set GT of T -realisable trade-offs is convex for each T .
Proof. Take rA and rB in GT. We need to show that λrA + (1 − λ)rB ∈ GT for all λ ∈ [0, 1]. Let
A and B be strategies witnessing the T-realisability of these points. Now consider the strategy that
in each round t plays the mixture λpAt + (1 − λ)pBt. As the absolute loss is linear in the prediction,
this strategy guarantees LT = λLAT + (1 − λ)LBT ≤ LkT + λrkA + (1 − λ)rkB for each k ∈ {0, 1}.
Guarantees violated early cannot be restored later.
Lemma 3. A strategy that guarantees RTk ≤ rk must maintain Rtk ≤ rk for all 0 ≤ t ≤ T.
¹ One could define the regret RTk for all static reference probabilities k ∈ [0, 1], but as the loss is minimised
by either k = 0 or k = 1, we immediately restrict to only comparing against these two.
Proof. Suppose toward contradiction that Rtk > rk at some t < T. An adversary may set all
xt+1 . . . xT to k to fix LkT = Lkt. As LT ≥ Lt, we have RTk = LT − LkT ≥ Lt − Lkt = Rtk > rk.
The two extreme trade-offs ⟨0, T⟩ and ⟨T, 0⟩ are Pareto optimal.
Lemma 4. Fix horizon T and r1 ∈ R. The candidate profile ⟨0, r1⟩ is T-realisable iff r1 ≥ T.
Proof. The static strategy pt = 0 witnesses ⟨0, T⟩ ∈ GT for every horizon T. To ensure RT1 < T,
any strategy will have to play pt > 0 at some time t ≤ T. But then it cannot maintain Rt0 = 0.
It is also intuitive that maintaining low regret becomes progressively harder with T .
Lemma 5. G0 ⊋ G1 ⊋ · · ·
Proof. Lemma 3 establishes ⊇, whereas Lemma 4 establishes ≠.
We now come to our first main result, the characterisation of GT . We will directly characterise its
south-west frontier, that is, the set of Pareto optimal trade-offs. These frontiers are graphed up to
T = 10 in Figure 1a. The vertex numbering we introduce below is illustrated by Figure 1b.
Theorem 6. The Pareto frontier of GT is the piece-wise linear curve through the T + 1 vertices

⟨fT(i), fT(T − i)⟩  for i ∈ {0, . . . , T},  where  fT(i) := Σ_{j=0}^{i} j · 2^(j−T) · C(T−j−1, T−i−1),

and C(n, m) denotes the binomial coefficient. Moreover, for T > 0 the optimal strategy at vertex i
assigns to the outcome x = 1 the probability

pT(0) := 0,  pT(T) := 1,  and  pT(i) := (fT−1(i) − fT−1(i − 1))/2  for 0 < i < T,

and the optimal probability interpolates linearly in between consecutive vertices.
Proof. By induction on T. We first consider the base case T = 0. By Definition 1

G0 = {⟨r0, r1⟩ | r0 ≥ 0 and r1 ≥ 0}

is the positive orthant, which has the origin as its single Pareto optimal vertex, and indeed
⟨f0(0), f0(0)⟩ = ⟨0, 0⟩. We now turn to T ≥ 1. Again by Definition 1, ⟨r0, r1⟩ ∈ GT if

∃p ∈ [0, 1] ∀x ∈ {0, 1} : ⟨r0 − |p − x| + |0 − x|, r1 − |p − x| + |1 − x|⟩ ∈ GT−1,

that is if

∃p ∈ [0, 1] : ⟨r0 − p, r1 − p + 1⟩ ∈ GT−1 and ⟨r0 + p, r1 + p − 1⟩ ∈ GT−1.
By the induction hypothesis we know that the south-west frontier curve for GT−1 is piecewise linear.
We will characterise GT via its frontier as well. For each r0, let r1(r0) and p(r0) denote the value
and minimiser of the optimisation problem

min_{p ∈ [0,1]} r1  such that both  ⟨r0, r1⟩ ± ⟨p, p − 1⟩ ∈ GT−1.

We also refer to ⟨r0, r1(r0)⟩ ∓ ⟨p(r0), p(r0) − 1⟩ as the rear (−) and front (+) contact points. For
r0 = 0 we find r1(0) = T, with witness p(0) = 0 and rear/front contact points ⟨0, T + 1⟩ and
⟨0, T − 1⟩, and for r0 = T we find r1(T) = 0 with witness p(T) = 1 and rear/front contact
points ⟨T − 1, 0⟩ and ⟨T + 1, 0⟩. It remains to consider the intermediate trajectory of r1(r0) as
r0 runs from 0 to T. Initially at r0 = 0 the rear contact point lies on the edge of GT−1 entering
vertex i = 0 of GT−1, while the front contact point lies on the edge emanating from that same
vertex. So if we increase r0 slightly, the contact points will slide along their respective lines. By
Lemma 11 (supplementary material), r1(r0) will trace along a straight line as a result. Once we
increase r0 enough, both the rear and front contact point will hit the vertex at the end of their
edges simultaneously (a fortunate fact that greatly simplifies our analysis), as shown in Lemma 12
(supplementary material). The contact points then transition to tracing the next pair of edges of
GT−1. At this point r0 the slope of r1(r0) changes, and we have discovered a vertex of GT.
Given that at each such transition hr0 , r1 (r0 )i is the midpoint between both contact points, this
implies that all midpoints between successive vertices of GT ?1 are vertices of GT . And in addition,
there are the two boundary vertices h0, T i and hT, 0i.
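The midpoint recursion established in this proof translates directly into a procedure for enumerating the Pareto vertices of G_T. The sketch below is our own illustration (the function name and structure are ours, not from the paper): it builds the vertex list of G_T from that of G_{T−1}.

```python
# Enumerate the Pareto-optimal vertices of G_T using the recursion from the
# proof: the vertices of G_T are the two boundary points (0, T) and (T, 0)
# plus all midpoints of successive vertices of G_{T-1}.

def pareto_vertices(T):
    verts = [(0.0, 0.0)]  # G_0 has the origin as its single Pareto vertex
    for t in range(1, T + 1):
        mids = [((a0 + b0) / 2, (a1 + b1) / 2)
                for (a0, a1), (b0, b1) in zip(verts, verts[1:])]
        verts = [(0.0, float(t))] + mids + [(float(t), 0.0)]
    return verts

vertices = pareto_vertices(4)
# G_T has T + 1 vertices, symmetric under swapping the two coordinates.
```

At T = 2 the recursion yields (0, 2), (0.5, 0.5), (2, 0): the symmetric trade-off ⟨0.5, 0.5⟩ is realisable in two rounds.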
[Figure 2: two panels comparing the Pareto frontier of G with the normalised trade-off profile and with the sqrt-min-log-prior curve; the axes show normalised regret w.r.t. 0 and normalised regret w.r.t. 1. (a) Normal scale. (b) Log-log scale to highlight the tail behaviour.]

Figure 2: Pareto frontier of G, the asymptotically realisable trade-off rates. There is no noticeable difference with the normalised regret trade-off profile G_T/√T for T = 10000. We also graph the curve ⟨√(−ln q), √(−ln(1 − q))⟩ for all q ∈ [0, 1].
3.1 The optimal strategy and random walks
In this section we describe how to follow the optimal strategy. First suppose we desire to witness a T-realisable trade-off that happens to be a vertex of G_T, say vertex i at ⟨f_T(i), f_T(T − i)⟩. With T rounds remaining and in state i, the strategy predicts with p_T(i). Then the outcome x ∈ {0, 1} is revealed. If x = 0, we need to witness in the remaining T − 1 rounds the trade-off ⟨f_T(i), f_T(T − i)⟩ − ⟨p_T(i), p_T(i) − 1⟩ = ⟨f_{T−1}(i − 1), f_{T−1}(T − i)⟩, which is vertex i − 1 of G_{T−1}. So the strategy transitions to state i − 1. Similarly, upon x = 1 we update our internal state to i. If the state ever either exceeds the number of rounds remaining or goes negative, we simply clamp it.

Second, if we desire to witness a T-realisable trade-off that is a convex combination of successive vertices, we simply follow the mixture strategy as constructed in Lemma 2. Third, if we desire to witness a sub-optimal element of G_T, we may follow any strategy that witnesses a Pareto optimal dominating trade-off.

The probability p issued by the algorithm is sometimes used to randomly sample a "hard prediction" from {0, 1}. The expression |p − x| then denotes the expected loss, which equals the probability of making a mistake. We present, following [3], a random-walk based method to sample a 1 with probability p_T(i). Our random walk starts in state ⟨T, i⟩. In each round it transitions from state ⟨t, i⟩ to either state ⟨t − 1, i⟩ or state ⟨t − 1, i − 1⟩ with equal probability. It is stopped when the state ⟨t, i⟩ becomes extreme in the sense that i ∈ {0, t}. Note that this process always terminates. Then the probability that this process is stopped with i = t equals p_T(i). In our case of absolute loss, evaluating p_T(i) and performing the random walk both take T units of time. The random walks considered in [3] for K ≥ 2 experts still take T steps, whereas direct evaluation of the optimal strategy scales rather badly with K.
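The random-walk sampler and the state recursion above can be made concrete as follows. This is our own sketch: the boundary conventions (stopping as soon as i hits 0 or the remaining horizon t) are our reading of the construction, and the dynamic-programming evaluation shown here is quadratic rather than the linear-time evaluation mentioned in the text.

```python
import random
from functools import lru_cache

@lru_cache(maxsize=None)
def p_opt(t, i):
    """Exact probability that the walk started in state (t, i) stops with i = t.
    From (t, i) the walk moves to (t-1, i) or (t-1, i-1) with equal probability."""
    if i <= 0:
        return 0.0
    if i >= t:
        return 1.0
    return 0.5 * p_opt(t - 1, i) + 0.5 * p_opt(t - 1, i - 1)

def sample_prediction(T, i, rng):
    """Sample a hard prediction in {0, 1} that equals 1 with probability p_opt(T, i)."""
    t, j = T, i
    while 0 < j < t:
        t -= 1
        j -= rng.randrange(2)  # step to (t-1, j) or (t-1, j-1) uniformly
    return 1 if j == t else 0
```

By the mirror symmetry of the walk, p_opt(T, i) + p_opt(T, T − i) = 1, so the symmetric state predicts with probability exactly 1/2.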
4 The asymptotic regret rate trade-off profile

In the previous section we obtained for each time horizon T a combinatorial characterisation of the set G_T of T-realisable trade-offs. In this section we show that properly normalised Pareto frontiers for increasing T are better and better approximations of a certain intrinsic smooth limit curve. We obtain a formula for this curve, and use it to study the question of realisability for large T.

Definition 7. Let us define the set G of asymptotically realisable regret rate trade-offs by

  G := lim_{T→∞} G_T/√T.

Despite the disappearance of the horizon T from the notation, the set G still captures the trade-offs that can be achieved with prior knowledge of T. Each achievable regret rate trade-off ⟨ρ0, ρ1⟩ ∈ G may be witnessed by a different strategy for each T. This is fine for our intended interpretation of √T·G as a proxy for G_T. We briefly mention horizon-free algorithms at the end of this section.
The literature [2] suggests that, for some constant c, ⟨√(−c ln q), √(−c ln(1 − q))⟩ should be asymptotically realisable for each q ∈ [0, 1]. We indeed confirm this below, and determine the optimal constant to be c = 1. We then discuss the philosophical implications of the quality of this bound.

We now come to our second main result, the characterisation of the asymptotically realisable trade-off rates. The Pareto frontier is graphed in Figure 2, both on normal axes for comparison to Figure 1a, and on a log-log scale to show its tails. Note the remarkable quality of approximation to G_T/√T.
Theorem 8. The Pareto frontier of the set G of asymptotically realisable trade-offs is the curve

  ⟨f(u), f(−u)⟩ for u ∈ ℝ,  where  f(u) := u·erf(√2·u) + e^{−2u²}/√(2π) + u

and erf(u) = (2/√π)·∫₀ᵘ e^{−v²} dv is the error function. Moreover, the optimal strategy converges to

  p(u) = (1 − erf(√2·u))/2.
Proof. We calculate the limit of the normalised Pareto frontier at vertex i = T/2 + u√T, starting from the binomial sum expression for the vertex value f_T(T/2 + u√T). In the first step we replace the sum by an integral; we can do this as the summand is continuous in j, and the approximation error, multiplied by the 1/√T normalisation, goes to 0 with T. In the second step we perform the variable substitution v = u − j/√T. We then exchange limit and integral, subsequently evaluate the limit of the summand (the binomial terms converge to a Gaussian), and in the final step we evaluate the resulting integral:

  lim_{T→∞} f_T(T/2 + u√T)/√T = ∫_{−∞}^{u} (u − v) · 2√(2/π) · e^{−2v²} dv = u·erf(√2·u) + e^{−2u²}/√(2π) + u = f(u).
To obtain the optimal strategy, we observe the following relation between the slope of the Pareto curve and the optimal strategy for each horizon T. Let g and h denote the Pareto curves at times T and T + 1 as a function of r0. The optimal strategy p for T + 1 at r0 satisfies the system of equations

  h(r0) + p − 1 = g(r0 + p)
  h(r0) − p + 1 = g(r0 − p),

to which the solution satisfies

  1 − 1/p = (g(r0 + p) − g(r0 − p))/(2p) ≈ dg(r0)/dr0,  so that  p ≈ 1/(1 − dg(r0)/dr0).

Since slope is invariant under normalisation, this relation between slope and optimal strategy becomes exact as T tends to infinity, and we find

  p(u) = 1/(1 − dg(r0(u))/dr0(u)) = 1/(1 + f′(u)/f′(−u)) = (1 − erf(√2·u))/2.

We believe this last argument is more insightful than a direct evaluation of the limit.
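As a sanity check on Theorem 8 and on the slope relation just derived, one can verify numerically that f(0) = 1/√(2π) ≈ 0.399 and that 1/(1 + f′(u)/f′(−u)) agrees with p(u). The following is our own check (not from the paper), using a central finite difference for f′.

```python
import math

def f(u):
    # Pareto frontier parametrisation from Theorem 8.
    return (u * math.erf(math.sqrt(2) * u)
            + math.exp(-2 * u * u) / math.sqrt(2 * math.pi) + u)

def p(u):
    # Limiting optimal strategy from Theorem 8.
    return (1 - math.erf(math.sqrt(2) * u)) / 2

def fprime(u, h=1e-6):
    # Central finite-difference approximation of f'(u).
    return (f(u + h) - f(u - h)) / (2 * h)
```

A convenient byproduct of the check is the identity f′(u) = 1 + erf(√2·u), from which f′(u) + f′(−u) = 2 and the closed form of p(u) follow.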
4.1 Square root of min log prior

Results for Hedge suggest (modulo a daunting tuning problem) that a trade-off featuring a square root of the negative log prior, akin to (2), should be realisable. We first show that this is indeed the case, we then determine the optimal leading constant, and we finally discuss its sub-optimality.
Theorem 9. The parametric curve ⟨√(−c ln q), √(−c ln(1 − q))⟩ for q ∈ [0, 1] is contained in G (i.e. asymptotically realisable) iff c ≥ 1.
Proof. By Theorem 8, the frontier of G is of the form ⟨f(u), f(−u)⟩. Our argument revolves around the tails (extreme u) of G. For large u ≫ 0, we find that f(u) ≈ 2u. For small u ≪ 0, we find that f(u) ≈ e^{−2u²}/(4√(2π)·u²). This is obtained by a 3rd order Taylor series expansion around u = −∞. We need to go to 3rd order since all prior orders evaluate to 0. The additive approximation error is of order e^{−2u²}·u^{−4}, which is negligible. So for large r0 ≫ 0, the least realisable r1 is approximately

  r1 ≈ e^{−r0²/2 − 2 ln r0}/√(2π).   (3)

With the candidate relation r0 = √(−c ln q) and r1 = √(−c ln(1 − q)), still for large r0 ≫ 0 so that q is small and −ln(1 − q) ≈ q, we would instead find the least realisable r1 approximately equal to

  r1 ≈ √c · e^{−r0²/(2c)}.   (4)

The candidate tail (4) must be at least the actual tail (3) for all large r0. The minimal c for which this holds is c = 1. The graphs of Figure 2 illustrate this tail behaviour for c = 1, and at the same time verify that there are no violations for moderate u.
Even though the sqrt-min-log-prior trade-off is realisable, we see that its tail (4) exceeds the actual tail (3) by the factor r0²·√(2π), which gets progressively worse with the extremity of the tail r0. Figure 2a shows that its behaviour for moderate ⟨r0, r1⟩ is also not brilliant. For example it gives us a symmetric bound of √(ln 2) ≈ 0.833, whereas f(0) = 1/√(2π) ≈ 0.399 is optimal.

For certain log loss games, each Pareto regret trade-off is witnessed uniquely by the Bayesian mixture of expert predictions w.r.t. a certain non-uniform prior, and vice versa (not shown). In this sense the Bayesian method is the ideal answer to data compression/investment/gambling. Be that as it may, we conclude that the world of absolute loss is not information theory: simply putting a prior is not the definitive answer to non-uniform guarantees. It is a useful intuition that leads to the convenient sqrt-min-log-prior bounds. We hope that our results contribute to obtaining tighter bounds that remain manageable.
4.2 The asymptotic algorithm

The previous theorem immediately suggests an approximate algorithm for finite horizon T. To approximately witness ⟨r0, r1⟩, find the value of u for which √T·⟨f(u), f(−u)⟩ is closest to it. Then play p(u). This will not guarantee ⟨r0, r1⟩ exactly, but intuitively it will be close. We leave analysing this idea to the journal version. Conversely, by taking the limit of the game protocol, which involves the absolute loss function, we might obtain an interesting protocol and "asymptotic" loss function², for which u is the natural state, p(u) is the optimal strategy, and u is updated in a certain way. Investigating such questions will probably lead to interesting insights, for example horizon-free strategies that maintain R_T^k ≤ ρ_k·√T for all T simultaneously. Again this will be pursued for the journal version.

² We have seen an instance of this before. When the Hedge algorithm with learning rate η plays weights w and faces loss vector ℓ, its dot loss is given by wᵀℓ. Now consider the same loss vector handed out in identical pieces ℓ/n over the course of n trials, during which the weights w update as usual. In the limit of n → ∞, the resulting loss becomes the mix loss −η⁻¹·ln Σ_k w(k)·e^{−ηℓ_k}.
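The limit claimed in the footnote is easy to check numerically: handing out ℓ/n over n rounds while Hedge updates its weights drives the accumulated dot loss to the mix loss. The code below is our own illustration.

```python
import math

def hedge_split_loss(w, losses, eta, n):
    """Total dot loss of Hedge with learning rate eta when the loss vector
    `losses` is handed out in n identical pieces, weights updating as usual."""
    total = 0.0
    for _ in range(n):
        total += sum(w_k * l_k / n for w_k, l_k in zip(w, losses))
        w = [w_k * math.exp(-eta * l_k / n) for w_k, l_k in zip(w, losses)]
        s = sum(w)
        w = [w_k / s for w_k in w]
    return total

def mix_loss(w, losses, eta):
    """The mix loss -(1/eta) * ln sum_k w_k * exp(-eta * l_k)."""
    return -math.log(sum(w_k * math.exp(-eta * l_k)
                         for w_k, l_k in zip(w, losses))) / eta
```

With n = 1 the routine returns the plain dot loss wᵀℓ; as n grows it decreases toward the mix loss, matching the footnote's limit.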
5 Extension

5.1 Beyond absolute loss
In this section we consider the general setting with K = 2 experts, which we still refer to as 0 and 1. Here the learner plays p ∈ [0, 1], which is now interpreted as the weight allocated to expert 1; the adversary chooses a loss vector ℓ = ⟨ℓ⁰, ℓ¹⟩ ∈ [0, 1]², and the learner incurs dot loss given by (1 − p)ℓ⁰ + pℓ¹. The regrets are now redefined as follows:

  R_T^k := Σ_{t=1}^T (p_t·ℓ_t¹ + (1 − p_t)·ℓ_t⁰) − Σ_{t=1}^T ℓ_t^k  for each expert k ∈ {0, 1}.
Theorem 10. The T-realisable trade-offs for absolute loss and K = 2 expert dot loss coincide.

Proof. By induction on T. The loss is irrelevant in the base case T = 0. For T > 0, a trade-off ⟨r0, r1⟩ is T-realisable for dot loss if

  ∃p ∈ [0, 1] ∀ℓ ∈ [0, 1]² : ⟨r0 − (pℓ¹ + (1 − p)ℓ⁰ − ℓ⁰), r1 − (pℓ¹ + (1 − p)ℓ⁰ − ℓ¹)⟩ ∈ G_{T−1},

that is, writing δ := ℓ¹ − ℓ⁰, if

  ∃p ∈ [0, 1] ∀δ ∈ [−1, 1] : ⟨r0 − pδ, r1 + (1 − p)δ⟩ ∈ G_{T−1}.

We recover the absolute loss case by restricting δ to {−1, 1}. These requirements are equivalent since G_T is convex by Lemma 2.
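The restriction of δ to {−1, 1} corresponds to the loss vectors ℓ = ⟨x, 1 − x⟩ for x ∈ {0, 1}, under which dot loss reduces exactly to absolute loss. A small check of this correspondence (our own illustration):

```python
def absolute_loss(p, x):
    """Absolute loss |p - x| of prediction p against outcome x."""
    return abs(p - x)

def dot_loss(p, loss_vec):
    """Dot loss (1 - p) * l0 + p * l1 with weight p on expert 1."""
    l0, l1 = loss_vec
    return (1 - p) * l0 + p * l1

# Outcome x corresponds to the loss vector (|0 - x|, |1 - x|) = (x, 1 - x):
# expert 0 pays x and expert 1 pays 1 - x, so the dot loss equals |p - x|.
```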
5.2 More than 2 experts

In the general experts problem we compete with K instead of 2 experts. We now argue that an algorithm guaranteeing R_T^k ≤ √(cT·ln K) w.r.t. each expert k can be obtained. The intuitive approach, combining the K experts in a balanced binary tree of two-expert predictors, does not achieve this goal: each internal node contributes the optimal symmetric regret of √(T/(2π)). This accumulates to R_T^k ≈ ln K·√(cT), where the log sits outside the square root.

Counter-intuitively, the maximally unbalanced binary tree does result in a √(ln K) factor when the internal nodes are properly skewed. At each level we combine K experts one-vs-all, permitting large regret w.r.t. the first expert but tiny regret w.r.t. the recursive combination of the remaining K − 1 experts. The argument can be found in Appendix A.1. The same argument shows that, for any prior q on k = 1, 2, . . ., combining the expert with the smallest prior with the recursive combination of the rest guarantees regret √(−cT·ln q(k)) w.r.t. each expert k.
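The arithmetic behind the balanced-tree objection can be made explicit. The sketch below is our own back-of-envelope rendering (it treats the per-level regrets as simply adding up, which is the rough accounting used in the text): log₂K levels of √(T/(2π)) against the target √(cT·ln K) with c = 1.

```python
import math

def balanced_tree_regret(K, T):
    # log2(K) tree levels, each contributing the optimal symmetric
    # two-expert regret sqrt(T / (2*pi)); the log ends up outside the root.
    return math.log2(K) * math.sqrt(T / (2 * math.pi))

def target_regret(K, T, c=1.0):
    # The desired sqrt(c * T * ln K) form, with the log inside the root.
    return math.sqrt(c * T * math.log(K))
```

For large K the balanced tree loses: its bound grows like ln K·√T rather than √(T·ln K), e.g. already at K = 1024 it exceeds the target.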
6 Conclusion

We studied asymmetric regret guarantees for the fundamental online learning setting of the absolute loss game. We obtained exactly the achievable skewed regret guarantees, and the corresponding optimal algorithm. We then studied the profile in the limit of large T. We conclude that the expected √T·⟨√(−ln q), √(−ln(1 − q))⟩ trade-off is achievable for any prior probability q ∈ [0, 1], but that it is not tight. We then showed how our results transfer from absolute loss to general linear losses, and to more than two experts.

Major next steps are to determine the optimal trade-offs for K > 2 experts, to replace our traditional √T budget by modern variants such as √(L_T^k) [15], √(L_T^k·(T − L_T^k)/T) [16], Varmax_T [17], D_T [18], Δ_T [19] etc., and to find the Pareto frontier for horizon-free strategies maintaining R_T^k ≤ ρ_k·√T at any T.

Acknowledgements

This work benefited substantially from discussions with Peter Grünwald.
References

[1] Yoav Freund and Robert E. Schapire. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of Computer and System Sciences, 55:119–139, 1997.
[2] Marcus Hutter and Jan Poland. Adaptive online prediction by following the perturbed leader. Journal of Machine Learning Research, 6:639–660, 2005.
[3] Jacob Abernethy, Manfred K. Warmuth, and Joel Yellin. When random play is optimal against an adversary. In Rocco A. Servedio and Tong Zhang, editors, COLT, pages 437–446. Omnipress, 2008.
[4] Wouter M. Koolen. Combining Strategies Efficiently: High-quality Decisions from Conflicting Advice. PhD thesis, Institute of Logic, Language and Computation (ILLC), University of Amsterdam, January 2011.
[5] Nicolò Cesa-Bianchi and Ohad Shamir. Efficient online learning via randomized rounding. In J. Shawe-Taylor, R.S. Zemel, P. Bartlett, F.C.N. Pereira, and K.Q. Weinberger, editors, Advances in Neural Information Processing Systems 24, pages 343–351, 2011.
[6] Nicolò Cesa-Bianchi, Yoav Freund, David Haussler, David P. Helmbold, Robert E. Schapire, and Manfred K. Warmuth. How to use expert advice. Journal of the ACM, 44(3):427–485, 1997.
[7] Jacob Abernethy, John Langford, and Manfred K. Warmuth. Continuous experts and the Binning algorithm. In Learning Theory, pages 544–558. Springer, 2006.
[8] Jacob Abernethy and Manfred K. Warmuth. Repeated games against budgeted adversaries. In J. Lafferty, C.K.I. Williams, J. Shawe-Taylor, R.S. Zemel, and A. Culotta, editors, Advances in Neural Information Processing Systems 23, pages 1–9, 2010.
[9] Sasha Rakhlin, Ohad Shamir, and Karthik Sridharan. Relax and randomize: From value to algorithms. In P. Bartlett, F.C.N. Pereira, C.J.C. Burges, L. Bottou, and K.Q. Weinberger, editors, Advances in Neural Information Processing Systems 25, pages 2150–2158, 2012.
[10] Eyal Even-Dar, Michael Kearns, Yishay Mansour, and Jennifer Wortman. Regret to the best vs. regret to the average. Machine Learning, 72(1-2):21–37, 2008.
[11] Michael Kapralov and Rina Panigrahy. Prediction strategies without loss. In J. Shawe-Taylor, R.S. Zemel, P. Bartlett, F.C.N. Pereira, and K.Q. Weinberger, editors, Advances in Neural Information Processing Systems 24, pages 828–836, 2011.
[12] Kamalika Chaudhuri, Yoav Freund, and Daniel Hsu. A parameter-free hedging algorithm. In Y. Bengio, D. Schuurmans, J. Lafferty, C.K.I. Williams, and A. Culotta, editors, Advances in Neural Information Processing Systems 22, pages 297–305, 2009.
[13] Alexey V. Chernov and Vladimir Vovk. Prediction with advice of unknown number of experts. In Peter Grünwald and Peter Spirtes, editors, UAI, pages 117–125. AUAI Press, 2010.
[14] Nicolò Cesa-Bianchi and Gábor Lugosi. Prediction, Learning, and Games. Cambridge University Press, 2006.
[15] Peter Auer, Nicolò Cesa-Bianchi, and Claudio Gentile. Adaptive and self-confident on-line learning algorithms. Journal of Computer and System Sciences, 64(1):48–75, 2002.
[16] Nicolò Cesa-Bianchi, Yishay Mansour, and Gilles Stoltz. Improved second-order bounds for prediction with expert advice. Machine Learning, 66(2-3):321–352, 2007.
[17] Elad Hazan and Satyen Kale. Extracting certainty from uncertainty: Regret bounded by variation in costs. Machine Learning, 80(2-3):165–188, 2010.
[18] Chao-Kai Chiang, Tianbao Yang, Chia-Jung Lee, Mehrdad Mahdavi, Chi-Jen Lu, Rong Jin, and Shenghuo Zhu. Online optimization with gradual variations. In Proceedings of the 25th Annual Conference on Learning Theory, number 23 in JMLR W&CP, pages 6.1–6.20, June 2012.
[19] Steven de Rooij, Tim van Erven, Peter D. Grünwald, and Wouter M. Koolen. Follow the leader if you can, Hedge if you must. ArXiv, 1301.0534, January 2013.
Online Learning with Switching Costs and Other
Adaptive Adversaries
Nicolò Cesa-Bianchi
Università degli Studi di Milano
Italy
Ofer Dekel
Microsoft Research
USA
Ohad Shamir
Microsoft Research
and the Weizmann Institute
Abstract
We study the power of different types of adaptive (nonoblivious) adversaries in the setting of prediction with expert advice, under both full-information and bandit feedback. We measure the player's performance using a new notion of regret, also known as policy regret, which better captures the adversary's adaptiveness to the player's behavior. In a setting where losses are allowed to drift, we characterize (in a nearly complete manner) the power of adaptive adversaries with bounded memories and switching costs. In particular, we show that with switching costs, the attainable rate with bandit feedback is Θ̃(T^{2/3}). Interestingly, this rate is significantly worse than the Θ(√T) rate attainable with switching costs in the full-information case. Via a novel reduction from experts to bandits, we also show that a bounded memory adversary can force Θ̃(T^{2/3}) regret even in the full-information case, proving that switching costs are easier to control than bounded memory adversaries. Our lower bounds rely on a new stochastic adversary strategy that generates loss processes with strong dependencies.
1 Introduction

An important instance of the framework of prediction with expert advice (see, e.g., [8]) is defined as the following repeated game, between a randomized player with a finite and fixed set of available actions and an adversary. At the beginning of each round of the game, the adversary assigns a loss to each action. Next, the player defines a probability distribution over the actions, draws an action from this distribution, and suffers the loss associated with that action. The player's goal is to accumulate loss at the smallest possible rate, as the game progresses. Two versions of this game are typically considered: in the full-information feedback version, at the end of each round, the player observes the adversary's assignment of loss values to each action. In the bandit feedback version, the player only observes the loss associated with his chosen action, but not the loss values of other actions.

We assume that the adversary is adaptive (also called nonoblivious by [8] or reactive by [16]), which means that the adversary chooses the loss values on round t based on the player's actions on rounds 1 . . . t − 1. We also assume that the adversary is deterministic and has unlimited computational power. These assumptions imply that the adversary can specify his entire strategy before the game begins. In other words, the adversary can perform all of the calculations needed to specify, in advance, how he plans to react on each round to any sequence of actions chosen by the player.

More formally, let A denote the finite set of actions and let X_t denote the player's random action on round t. We adopt the notation X_{1:t} as shorthand for the sequence X_1 . . . X_t. We assume that the adversary defines, in advance, a sequence of history-dependent loss functions f_1, f_2, . . .. The input to each loss function f_t is the entire history of the player's actions so far; therefore the player's loss on round t is f_t(X_{1:t}). Note that the player doesn't observe the functions f_t, only the losses that result from his past actions. Specifically, in the bandit feedback model, the player observes f_t(X_{1:t}) on round t, whereas in the full-information model, the player observes f_t(X_{1:t−1}, x) for all x ∈ A.
On any round T, we evaluate the player's performance so far using the notion of regret, which compares his cumulative loss on the first T rounds to the cumulative loss of the best fixed action in hindsight. Formally, the player's regret on round T is defined as

  R_T = Σ_{t=1}^T f_t(X_{1:t}) − min_{x∈A} Σ_{t=1}^T f_t(x . . . x).   (1)

R_T is a random variable, as it depends on the randomized action sequence X_{1:t}. Therefore, we also consider the expected regret E[R_T]. This definition is the same as the one used in [18] and [3] (in the latter, it is called policy regret), but differs from the more common definition of expected regret

  E[ Σ_{t=1}^T f_t(X_{1:t}) − min_{x∈A} Σ_{t=1}^T f_t(X_{1:t−1}, x) ].   (2)

The definition in Eq. (2) is more common in the literature (e.g., [4, 17, 10, 16]), but is clearly inadequate for measuring a player's performance against an adaptive adversary. Indeed, if the adversary is adaptive, the quantity f_t(X_{1:t−1}, x) is hardly interpretable (see [3] for a more detailed discussion).
In general, we seek algorithms for which E[R_T] can be bounded by a sublinear function of T, implying that the per-round expected regret, E[R_T]/T, tends to zero. Unfortunately, [3] shows that arbitrary adaptive adversaries can easily force the regret to grow linearly. Thus, we need to focus on (reasonably) weaker adversaries, which have constraints on the loss functions they can generate.

The weakest adversary we discuss is the oblivious adversary, which determines the loss on round t based only on the current action X_t. In other words, this adversary is oblivious to the player's past actions. Formally, the oblivious adversary is constrained to choose a sequence of loss functions that satisfies ∀t, ∀x_{1:t} ∈ A^t, and ∀x′_{1:t−1} ∈ A^{t−1},

  f_t(x_{1:t}) = f_t(x′_{1:t−1}, x_t).   (3)

The majority of previous work in online learning focuses on oblivious adversaries. When dealing with oblivious adversaries, we denote the loss function by ℓ_t and omit the first t − 1 arguments. With this notation, the loss at time t is simply written as ℓ_t(X_t).
For example, imagine an investor that invests in a single stock at a time. On each trading day he invests in one stock and suffers losses accordingly. In this example, the investor is the player and the stock market is the adversary. If the investment amount is small, the investor's actions will have no measurable effect on the market, so the market is oblivious to the investor's actions. Also note that this example relates to the full-information feedback version of the game, as the investor can see the performance of each stock at the end of each trading day.

A stronger adversary is the oblivious adversary with switching costs. This adversary is similar to the oblivious adversary defined above, but charges the player an additional switching cost of 1 whenever X_t ≠ X_{t−1}. More formally, this adversary defines his sequence of loss functions in two steps: first he chooses an oblivious sequence of loss functions, ℓ_1, ℓ_2, . . ., which satisfies the constraint in Eq. (3). Then, he sets f_1(x) = ℓ_1(x), and

  ∀t ≥ 2, f_t(x_{1:t}) = ℓ_t(x_t) + 𝟙{x_t ≠ x_{t−1}}.   (4)

This is a very natural setting. For example, let us consider again the single-stock investor, but now assume that each trade has a fixed commission cost. If the investor keeps his position in a stock for multiple trading days, he is exempt from any additional fees, but when he sells one stock and buys another, he incurs a fixed commission. More generally, this setting (or simple generalizations of it) allows us to capture any situation where choosing a different action involves a costly change of state. In this paper, we will also discuss a special case of this adversary, where the loss function ℓ_t(x) for each action is sampled i.i.d. from a fixed distribution.
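Eq. (4) and the policy regret of Eq. (1) combine into a short computation; the helper names below are ours. Note that a fixed comparator action x never pays a switching cost, so the comparator term in Eq. (1) is just its oblivious loss.

```python
def switching_cost_losses(oblivious_losses, actions):
    """Total loss of an action sequence when each round's oblivious loss
    is augmented with a unit switching cost, as in Eq. (4)."""
    total = 0.0
    for t, (losses, a) in enumerate(zip(oblivious_losses, actions)):
        total += losses[a]
        if t > 0 and a != actions[t - 1]:
            total += 1.0
    return total

def policy_regret(oblivious_losses, actions, num_actions):
    """Policy regret of Eq. (1): played loss minus the best fixed action's loss."""
    played = switching_cost_losses(oblivious_losses, actions)
    best_fixed = min(
        switching_cost_losses(oblivious_losses, [x] * len(oblivious_losses))
        for x in range(num_actions))
    return played - best_fixed
```

On an alternating loss sequence, chasing the momentarily best action avoids all oblivious loss but pays a switch every round, which is exactly the effect the switching costs adversary exploits.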
The switching costs adversary de?nes ft to be a function of Xt and Xt?1 , and is therefore a special
case of a more general adversary called an adaptive adversary with a memory of 1. This adversary
is constrained to choose loss functions that satisfy ?t, ?x1:t ? At , and ?x?1:t?2 ? At?2 ,
ft (x1:t ) = ft (x?1:t?2 , xt?1 , xt ) .
(5)
This adversary is more general than the switching costs adversary because his loss functions can
depend on the previous action in an arbitrary way. We can further strengthen this adversary and
2
de?ne the bounded memory adaptive adversary, which has a bounded memory of an arbitrary size.
In other words, this adversary is allowed to set his loss function based on the player's m most recent
past actions, where m is a predefined parameter. Formally, the bounded memory adversary must
choose loss functions that satisfy ∀t, ∀x_{1:t} ∈ A^t, and ∀x'_{1:t-m-1} ∈ A^{t-m-1},

f_t(x_{1:t}) = f_t(x'_{1:t-m-1}, x_{t-m:t}) .

In the information theory literature, this setting is called individual sequence prediction against loss
functions with memory [18].
In addition to the adversary types described above, the bounded memory adaptive adversary has
additional interesting special cases. One of them is the delayed feedback oblivious adversary of
[19], which defines an oblivious loss sequence, but reveals each loss value with a delay of m rounds.
Since the loss at time t depends on the player's action at time t - m, this adversary is a special case
of a bounded memory adversary with a memory of size m. The delayed feedback adversary is not a
focus of our work, and we present it merely as an interesting special case.
So far, we have defined a succession of adversaries of different strengths. This paper's goal is
to understand the upper and lower bounds on the player's regret when he faces these adversaries.
Specifically, we focus on how the expected regret depends on the number of rounds, T, with either
full-information or bandit feedback.
1.1 The Current State of the Art
Different aspects of this problem have been previously studied and the known results are surveyed
below and summarized in Table 1. Most of these previous results rely on the additional assumption
that the range of the loss functions is bounded in a fixed interval, say [0, C]. We explicitly make note
of this because our new results require us to slightly generalize this assumption.
As mentioned above, the oblivious adversary has been studied extensively and is the best understood of all the adversaries discussed in this paper. With full-information feedback, both the Hedge
algorithm [15, 11] and the follow the perturbed leader (FPL) algorithm [14] guarantee a regret of
O(√T), with a matching lower bound of Ω(√T) (see, e.g., [8]). Analyses of Hedge in settings
where the loss range may vary over time have also been considered (see, e.g., [9]). The oblivious
setting with bandit feedback, where the player only observes the incurred loss f_t(X_{1:t}), is called the
nonstochastic (or adversarial) multi-armed bandit problem. In this setting, the Exp3 algorithm of [4]
guarantees the same regret O(√T) as the full-information setting, and clearly the full-information
lower bound Ω(√T) still applies.
The follow the lazy leader (FLL) algorithm of [14] is designed for the switching costs setting with
full-information feedback. The analysis of FLL guarantees that the oblivious component of the
player's expected regret (without counting the switching costs), as well as the expected number of
switches, is upper bounded by O(√T), implying an expected regret of O(√T).
The work in [3] focuses on the bounded memory adversary with bandit feedback and guarantees
an expected regret of O(T^{2/3}). This bound naturally extends to the full-information setting. We
note that [18, 12] study this problem in a different feedback model, which we call counterfactual
feedback, where the player receives a full description of the history-dependent function f_t at the
end of round t. In this setting, the algorithm presented in [12] guarantees an expected regret of O(√T).
Learning with bandit feedback and switching costs has mostly been considered in the economics
literature, using a different setting than ours and with prior knowledge assumptions (see [13] for
an overview). The setting of stochastic oblivious adversaries (i.e., oblivious loss functions sampled
i.i.d. from a fixed distribution) was first studied by [2], where they show that O(log T) switches are
sufficient to asymptotically guarantee logarithmic regret. The paper [20] achieves logarithmic regret
nonasymptotically with O(log T) switches.
Several other papers discuss online learning against "adaptive" adversaries [4, 10, 16, 17], but these
results are not relevant to our work and can be easily misunderstood. For example, several bandit
algorithms have extensions to the "adaptive" adversary case, with a regret upper bound of O(√T)
[1]. This bound doesn't contradict the Ω(T) lower bound for general adaptive adversaries mentioned
                            oblivious        switching cost            memory of size 1          bounded memory            adaptive
Full-Information Feedback   Ω(√T), O(√T)     Ω(√T), O(√T)              Ω(√T), O(T^{2/3})         Ω(T^{2/3}), O(T^{2/3})    Ω(T), O(T)
Bandit Feedback             Ω(√T), O(√T)     Ω(T^{2/3}), O(T^{2/3})    Ω(T^{2/3}), O(T^{2/3})    Ω(T^{2/3}), O(T^{2/3})    Ω(T), O(T)

Table 1: State-of-the-art upper and lower bounds on regret (as a function of T) against different
adversary types; each cell lists the lower bound followed by the upper bound. Our contribution to
this table is presented in bold face.
earlier, since these papers use the regret defined in Eq. (2) rather than the regret used in our work,
defined in Eq. (1).
Another related body of work lies in the field of competitive analysis (see [5]), which also deals
with loss functions that depend on the player's past actions, and the adversary's memory may even
be unbounded. However, obtaining sublinear regret is generally impossible in this case. Therefore,
competitive analysis studies much weaker performance metrics such as the competitive ratio, making
it orthogonal to our work.
1.2 Our Contribution
In this paper, we make the following contributions (see Table 1):
- Our main technical contribution is a new lower bound on regret that matches the existing
  upper bounds in several of the settings discussed above. Specifically, our lower bound
  applies to the switching costs adversary with bandit feedback and to all strictly stronger
  adversaries.
- Building on this lower bound, we prove another regret lower bound in the bounded memory
  setting with full-information feedback, again matching the known upper bound.
- We confirm that existing upper bounds on regret hold in our setting and match the lower
  bounds up to logarithmic factors.
- Despite the lower bound, we show that for switching costs and bandit feedback, if we
  also assume stochastic i.i.d. losses, then one can get a distribution-free regret bound of
  O(√(T log log log T)) for finite action sets, with only O(log log T) switches. This result
  uses ideas from [7], and is deferred to the supplementary material.
Our new lower bound is a significant step towards a complete understanding of adaptive adversaries;
observe that the upper and lower bounds in Table 1 essentially match in all but one of the settings.
Our results have two important consequences. First, observe that the optimal regret against the
switching costs adversary is Θ(√T) with full-information feedback, versus Θ(T^{2/3}) with bandit
feedback. To the best of our knowledge, this is the first theoretical confirmation that learning with
bandit feedback is strictly harder than learning with full-information, even on a small finite action set
and even in terms of the dependence on T (previous gaps we are aware of were either in terms of the
number of actions [4], or required large or continuous action spaces; see, e.g., [6, 21]). Moreover,
recall the regret bound of O(√(T log log log T)) against the stochastic i.i.d. adversary with switching
costs and bandit feedback. This demonstrates that dependencies in the loss process must play a
crucial role in controlling the power of the switching costs adversary. Indeed, the Ω(T^{2/3}) lower
bound proven in the next section heavily relies on such dependencies.
Second, observe that in the full-information feedback case, the optimal regret against a switching
costs adversary is Θ(√T), whereas the optimal regret against the more general bounded memory
adversary is Θ(T^{2/3}). This is somewhat surprising given the ideas presented in [18] and later extended in [3]: The main technique used in these papers is to take an algorithm originally designed
for oblivious adversaries, forcefully prevent it from switching actions very often, and obtain a new
algorithm that guarantees a regret of O(T^{2/3}) against bounded memory adversaries. This would
seem to imply that a small number of switches is the key to dealing with general bounded memory
adversaries. Our result contradicts this intuition by showing that controlling the number of switches
is easier than dealing with a general bounded memory adversary.
As noted above, our lower bounds require us to slightly weaken the standard technical assumption
that loss values lie in a fixed interval [0, C]. We replace it with the following two assumptions:
1. Bounded range. We assume that the loss values on each individual round are bounded
   in an interval of constant size C, but we allow this interval to drift from round to round.
   Formally, ∀t, ∀x_{1:t} ∈ A^t and ∀x'_{1:t} ∈ A^t,

   |f_t(x_{1:t}) - f_t(x'_{1:t})| ≤ C .    (6)
2. Bounded drift. We also assume that the drift of each individual action from round to round
   is contained in a bounded interval of size D_t, where D_t may grow slowly, as O(√(log t)).
   Formally, ∀t and ∀x_{1:t} ∈ A^t,

   |f_t(x_{1:t}) - f_{t+1}(x_{1:t}, x_t)| ≤ D_t .    (7)
Since these assumptions are a relaxation of the standard assumption, all of the known lower bounds
on regret automatically extend to our relaxed setting. For our results to be consistent with the current
state of the art, we must also prove that all of the known upper bounds continue to hold after the
relaxation, up to logarithmic factors.
2 Lower Bounds
In this section, we prove lower bounds on the player's expected regret in various settings.
2.1 Ω(T^{2/3}) with Switching Costs and Bandit Feedback
We begin with a Ω(T^{2/3}) regret lower bound against an oblivious adversary with switching costs,
when the player receives bandit feedback. It is enough to consider a very simple setting, with only
two actions, labeled 1 and 2. Using the notation introduced earlier, we use ℓ_1, ℓ_2, ... to denote the
oblivious sequence of loss functions chosen by the adversary before adding the switching cost.
Theorem 1. For any player strategy that relies on bandit feedback and for any number of rounds T,
there exist loss functions f_1, ..., f_T that are oblivious with switching costs, with a range bounded
by C = 2, and a drift bounded by D_t = 3√(log t) + 16, such that E[R_T] ≥ (1/40) T^{2/3}.
The full proof is given in the supplementary material, and here we give an informal proof sketch.
We begin by constructing a randomized adversarial strategy, where the loss functions ℓ_1, ..., ℓ_T are
an instantiation of random variables L_1, ..., L_T defined as follows. Let ξ_1, ..., ξ_T be i.i.d. standard
Gaussian random variables (with zero mean and unit variance) and let Z be a random variable that
equals -1 or 1 with equal probability. Using these random variables, define for all t = 1 ... T

L_t(1) = Σ_{s=1}^{t} ξ_s ,    L_t(2) = L_t(1) + Z T^{-1/3} .    (8)
In words, {L_t(1)}_{t=1}^{T} is simply a Gaussian random walk and {L_t(2)}_{t=1}^{T} is the same random walk,
slightly shifted up or down (see Figure 1 for an illustration). It is straightforward to confirm that this
loss sequence has a bounded range, as required by the theorem: by construction we have |ℓ_t(1) -
ℓ_t(2)| = T^{-1/3} ≤ 1 for all t, and since the switching cost can add at most 1 to the loss on each
round, we conclude that |f_t(1) - f_t(2)| ≤ 2 for all t. Next, we show that the expected regret
of any player against this random loss sequence is Ω(T^{2/3}), where expectation is taken over the
randomization of both the adversary and the player. The intuition is that the player can only gain
information about which action is better by switching between them. Otherwise, if he stays on
the same action, he only observes a random walk, and gets no further information. Since the gap
between the two losses on each round is T^{-1/3}, the player must perform Ω(T^{2/3}) switches before
he can identify the better action. If the player performs that many switches, the total regret incurred
due to the switching costs is Ω(T^{2/3}). Alternatively, if the player performs o(T^{2/3}) switches, he
[Figure: two overlaid loss curves, ℓ_t(1) and ℓ_t(2), plotted over rounds t = 1, ..., 30.]
Figure 1: A particular realization of the random loss sequence defined in Eq. (8). The sequence of
losses for action 1 follows a Gaussian random walk, whereas the sequence of losses for action 2
follows the same random walk, but slightly shifted either up or down.
can't identify the better action; as a result he suffers an expected regret of Ω(T^{-1/3}) on each round
and a total regret of Ω(T^{2/3}).
Since the randomized loss sequence defined in Eq. (8), plus a switching cost, achieves an expected
regret of Ω(T^{2/3}), there must exist at least one deterministic loss sequence ℓ_1 ... ℓ_T with a regret of
Ω(T^{2/3}). In our proof, we show that there exists such ℓ_1 ... ℓ_T with bounded drift.
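The construction in Eq. (8) is easy to simulate. The sketch below (an illustration we added, not the paper's code) samples the two coupled loss sequences and checks that their per-round gap is exactly T^{-1/3}.

```python
import random

def sample_loss_sequence(T, seed=0):
    """Eq. (8): L_t(1) is a Gaussian random walk; L_t(2) is the same
    walk shifted up or down by Z * T**(-1/3), with Z uniform on {-1, +1}."""
    rng = random.Random(seed)
    Z = rng.choice([-1, 1])
    gap = T ** (-1.0 / 3.0)
    walk, L1, L2 = 0.0, [], []
    for _ in range(T):
        walk += rng.gauss(0.0, 1.0)  # i.i.d. standard Gaussian increment
        L1.append(walk)
        L2.append(walk + Z * gap)
    return L1, L2, Z

L1, L2, Z = sample_loss_sequence(T=1000)
gap = 1000 ** (-1.0 / 3.0)
# The per-round gap is T**(-1/3) <= 1; adding the unit switching cost
# keeps the range of f_t bounded by C = 2, as Theorem 1 requires.
assert all(abs(abs(a - b) - gap) < 1e-9 for a, b in zip(L1, L2))
```

Staying on one action, the player only ever observes a single random walk, which carries no information about the sign of Z; this is the information-theoretic heart of the lower bound.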
2.2 Ω(T^{2/3}) with Bounded Memory and Full-Information Feedback
We build on Thm. 1 and prove a Ω(T^{2/3}) regret lower bound in the full-information setting, where
we get to see the entire loss vector on every round. To get this strong result, we need to give the
adversary a little bit of extra power: memory of size 2 instead of size 1 as in the case of switching
costs. To show this result, we again consider a simple setting with two actions.
Theorem 2. For any player strategy that relies on full-information feedback and for any number of
rounds T ≥ 2, there exist loss functions f_1, ..., f_T, each with a memory of size m = 2, a range
bounded by C = 2, and a drift bounded by D_t = 3√(log t) + 18, such that E[R_T] ≥ (1/40)(T - 1)^{2/3}.
The formal proof is deferred to the supplementary material and a proof sketch is given here. The
proof is based on a reduction from full-information to bandit feedback that might be of independent
interest. We construct the adversarial loss sequence as follows: on each round, the adversary assigns
the same loss to both actions. Namely, the value of the loss depends only on the player's previous two
actions, and not on his action on the current round. Recall that even in the full-information version of
the game, the player doesn't know what the losses would have been had he chosen different actions
in the past. Therefore, we have made the full-information game as difficult as the bandit game.
Specifically, we construct an oblivious loss sequence ℓ_1 ... ℓ_T as in Thm. 1 and define

f_t(x_{1:t}) = ℓ_{t-1}(x_{t-1}) + I{x_{t-1} ≠ x_{t-2}} .    (9)

In words, we define the loss on round t of the full-information game to be equal to the loss on round
t - 1 of a bandits-with-switching-costs game in which the player chooses the same sequence of
actions. This can be done with a memory of size 2, since the loss in Eq. (9) is fully specified by the
player's choices on rounds t, t - 1, t - 2. Therefore, the Ω(T^{2/3}) lower bound for switching costs
and bandit feedback extends to the full-information setting with a memory of size at least 2.
3 Upper Bounds
In this section, we show that the known upper bounds on regret, originally proved for bounded
losses, can be extended to the case of losses with bounded range and bounded drift. Specifically, of
the upper bounds that appear in Table 1, we prove the following:
- O(√T) for an oblivious adversary with switching costs, with full-information feedback.
- Õ(√T) for an oblivious adversary with bandit feedback (where Õ hides logarithmic factors).
- Õ(T^{2/3}) for a bounded memory adversary with bandit feedback.
The remaining upper bounds in Table 1 are either trivial or follow from the principle that an upper
bound still holds if we weaken the adversary or provide a more informative feedback.
3.1 O(√T) with Switching Costs and Full-Information Feedback
In this setting, f_t(x_{1:t}) = ℓ_t(x_t) + I{x_t ≠ x_{t-1}}. If the oblivious losses ℓ_1 ... ℓ_T (without the additional switching costs) were all bounded in [0, 1], the Follow the Lazy Leader (FLL) algorithm of
[14] would guarantee a regret of O(√T) with respect to these losses (again, without the additional
switching costs). Additionally, FLL guarantees that its expected number of switches is O(√T).
We use a simple reduction to extend these guarantees to loss functions with a range bounded in an
interval of size C and with an arbitrary drift.
On round t, after choosing an action and receiving the loss function ℓ_t, the player defines the modified loss ℓ'_t(x) = C^{-1}(ℓ_t(x) - min_y ℓ_t(y)) and feeds it to the FLL algorithm. The FLL algorithm
then chooses the next action.
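The rescaling step of this reduction is a one-liner. The following sketch (illustrative; a dictionary stands in for ℓ_t) shows the modified loss that gets fed to FLL, and why a range bound of C puts the result in [0, 1].

```python
def rescale_loss(loss_t, C):
    """Modified loss fed to FLL: ell'_t(x) = (ell_t(x) - min_y ell_t(y)) / C.
    If the range of ell_t is bounded by C (Eq. (6)), the result lies in [0, 1],
    which is what the standard FLL analysis assumes."""
    m = min(loss_t.values())
    return {x: (v - m) / C for x, v in loss_t.items()}

scaled = rescale_loss({1: 5.0, 2: 6.5}, C=2.0)
print(scaled)  # {1: 0.0, 2: 0.75}
```

Subtracting the per-round minimum is what lets the losses drift arbitrarily from round to round without affecting the reduction; only the within-round range matters.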
Theorem 3. If each of the loss functions f_1, f_2, ... is oblivious with switching costs and has a range
bounded by C then the player strategy described above attains O(C√T) expected regret.
The full proof is given in the supplementary material but the proof technique is straightforward. We
first show that each ℓ'_t is bounded in [0, 1] and therefore the standard regret bound for FLL holds
with respect to the sequence of modified loss functions ℓ'_1, ℓ'_2, .... Then we show that the guarantees
provided for FLL imply a regret of O(C√T) with respect to the original loss sequence f_1, f_2, ....
3.2 Õ(√T) with an Oblivious Adversary and Bandit Feedback
In this setting, f_t(x_{1:t}) simply equals ℓ_t(x_t). The reduction described in the previous subsection
cannot be used in the bandit setting, since min_x ℓ_t(x) is unknown to the player, and a different
reduction is needed. The player sets a fixed horizon T and focuses on controlling his regret at time
T; he can then use a standard doubling trick [8] to handle an infinite horizon. The player uses the
fact that each f_t has a range bounded by C. Additionally, he defines D = max_{t≤T} D_t and on each
round he defines the modified loss

f'_t(x_{1:t}) = (1 / (2(C + D))) (ℓ_t(x_t) - ℓ_{t-1}(x_{t-1})) + 1/2 .    (10)
Note that f'_t(X_{1:t}) can be computed by the player using only bandit feedback. The player then feeds
f'_t(X_{1:t}) to an algorithm that guarantees a O(√T) standard regret (see definition in Eq. (2)) against
a fixed action. The Exp3 algorithm, due to [4], is such an algorithm. The player chooses his actions
according to the choices made by Exp3. The following theorem states that this reduction results in
a bandit algorithm that guarantees a regret of Õ(√T) against oblivious adversaries.
Theorem 4. If each of the loss functions f_1 ... f_T is oblivious with a range bounded by C and
a drift bounded by D_t = O(√(log t)) then the player strategy described above attains Õ(C√T)
expected regret.
The full proof is given in the supplementary material. In a nutshell, we show that each f'_t is a loss
function bounded in [0, 1] and that the analysis of Exp3 guarantees a regret of Õ(√T) with respect to
the loss sequence f'_1 ... f'_T. Then, we show that this guarantee implies a regret of (C + D) Õ(√T) =
Õ(C√T) with respect to the original loss sequence f_1 ... f_T.
3.3 Õ(T^{2/3}) with Bounded Memory and Bandit Feedback
Proving an upper bound against an adversary with a memory of size m, with bandit feedback,
requires a more delicate reduction. As in the previous section, we assume a finite horizon T and we
let D = max_t D_t. Let K = |A| be the number of actions available to the player.
Since f_t(x_{1:t}) depends only on the last m + 1 actions in x_{1:t}, we slightly overload our notation
and define f_t(x_{t-m:t}) to mean the same as f_t(x_{1:t}). To define the reduction, the player fixes a base
action x_0 ∈ A and for each t > m he defines the loss function

f̃_t(x_{t-m:t}) = (1 / (2(C + (m + 1)D))) (f_t(x_{t-m:t}) - f_{t-m-1}(x_0, ..., x_0)) + 1/2 .
Next, he divides the T rounds into J consecutive epochs of equal length, where J = Θ(T^{2/3}). We
assume that the epoch length T/J is at least 2K(m + 1), which is true when T is sufficiently large.
At the beginning of each epoch, the player plans his action sequence for the entire epoch. He uses
some of the rounds in the epoch for exploration and the rest for exploitation. For each action in A,
the player chooses an exploration interval of 2(m + 1) consecutive rounds within the epoch. These
K intervals are chosen randomly, but they are not allowed to overlap, giving a total of 2K(m + 1)
exploration rounds in the epoch. The details of how these intervals are drawn appears in our analysis,
in the supplementary material. The remaining T/J - 2K(m + 1) rounds are used for exploitation.
The player runs the Hedge algorithm [11] in the background, invoking it only at the beginning of
each epoch and using it to choose one exploitation action that will be played consistently on all of the
exploitation rounds in the epoch. In the exploration interval for action x, the player first plays m + 1
rounds of the base action x_0 followed by m + 1 rounds of the action x. Letting t_x denote the first
round in this interval, the player uses the observed losses f_{t_x+m}(x_0, ..., x_0) and f_{t_x+2m+1}(x, ..., x)
to compute f̃_{t_x+2m+1}(x, ..., x). In our analysis, we show that the latter is an unbiased estimate of
the average value of f̃_t(x, ..., x) over t in the epoch. At the end of the epoch, the K estimates are fed
as feedback to the Hedge algorithm.
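One epoch of this exploration/exploitation schedule can be sketched as follows. This is a simplified illustration of our own: `plan_epoch` and its interval-placement scheme are stand-ins, not the paper's exact sampling procedure (which is specified in the supplementary material).

```python
import random

def plan_epoch(actions, base, m, epoch_len, rng):
    """Plan one epoch: one exploration interval of 2(m+1) consecutive
    rounds per action (non-overlapping); every round not in the plan is
    an exploitation round for the action chosen by Hedge."""
    K = len(actions)
    block = 2 * (m + 1)
    assert epoch_len >= K * block
    # Pick K non-overlapping starting positions: sample distinct offsets,
    # sort them, then shift each by the total length of the blocks before it.
    offsets = sorted(rng.sample(range(epoch_len - K * block + 1), K))
    starts = [o + i * block for i, o in enumerate(offsets)]
    order = list(actions)
    rng.shuffle(order)  # which action gets which interval is random
    plan = {}
    for action, s in zip(order, starts):
        for j in range(m + 1):
            plan[s + j] = base            # m+1 rounds of the base action x_0
            plan[s + m + 1 + j] = action  # then m+1 rounds of the probed action
    return plan

rng = random.Random(0)
plan = plan_epoch(actions=["a", "b"], base="a", m=1, epoch_len=20, rng=rng)
print(len(plan))  # 2K(m+1) = 8 exploration rounds
```

Playing m + 1 rounds of x_0 before probing x is what flushes the adversary's memory, so the loss observed at the end of each half-interval depends only on the intended action.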
We prove the following regret bound, with the proof deferred to the supplementary material.
Theorem 5. If each of the loss functions f_1 ... f_T has a memory of size m, a range bounded
by C, and a drift bounded by D_t = O(√(log t)) then the player strategy described above attains
Õ(T^{2/3}) expected regret.
4 Discussion
In this paper, we studied the problem of prediction with expert advice against different types of
adversaries, ranging from the oblivious adversary to the general adaptive adversary. We proved
upper and lower bounds on the player's regret against each of these adversary types, in both the
full-information and the bandit feedback models. Our lower bounds essentially matched our upper bounds in all but one case: the adaptive adversary
with a unit memory in the full-information
setting, where we only know that regret is Ω(√T) and O(T^{2/3}). Our bounds have two important
consequences. First, we characterize the regret attainable with switching costs, and show a setting
where predicting with bandit feedback is strictly more difficult than predicting with full-information
feedback, even in terms of the dependence on T, and even on small finite action sets. Second, in
the full-information setting, we show that predicting against a switching costs adversary is strictly
easier than predicting against an arbitrary adversary with a bounded memory. To obtain our results, we had to relax the standard assumption that loss values are bounded in [0, 1]. Re-introducing
this assumption and proving similar lower bounds remains an elusive open problem. Many other
questions remain unanswered. Can we characterize the dependence of the regret on the number of
actions? Can we prove regret bounds that hold with high probability? Can our results be generalized
to more sophisticated notions of regret, as in [3]?
In addition to the adversaries discussed in this paper, there are other interesting classes of adversaries
that lie between the oblivious and the adaptive. A notable example is the family of deterministically
adaptive adversaries, which includes adversaries that adapt to the player's actions in a known deterministic way, rather than in a secret malicious way. For example, imagine playing a multi-armed
bandit game where the loss values are initially oblivious, but whenever the player chooses an arm
with zero loss, the loss of the same arm on the next round is deterministically changed to zero. Many
real-world online prediction scenarios are deterministically adaptive, but we lack a characterization
of the expected regret in this setting.
Acknowledgments
Part of this work was done while NCB was visiting OD at Microsoft Research, whose support is
gratefully acknowledged.
References
[1] J. Abernethy and A. Rakhlin. Beating the adaptive bandit with high probability. In COLT, 2009.
[2] R. Agrawal, M.V. Hedge, and D. Teneketzis. Asymptotically efficient adaptive allocation rules for the multiarmed bandit problem with switching cost. IEEE Transactions on Automatic Control, 33(10):899-906, 1988.
[3] R. Arora, O. Dekel, and A. Tewari. Online bandit learning against an adaptive adversary: from regret to policy regret. In Proceedings of the Twenty-Ninth International Conference on Machine Learning, 2012.
[4] P. Auer, N. Cesa-Bianchi, Y. Freund, and R. Schapire. The nonstochastic multiarmed bandit problem. SIAM Journal on Computing, 32(1):48-77, 2002.
[5] A. Borodin and R. El-Yaniv. Online computation and competitive analysis. Cambridge University Press, 1998.
[6] S. Bubeck, R. Munos, G. Stoltz, and C. Szepesvári. X-armed bandits. Journal of Machine Learning Research, 12:1655-1695, 2011.
[7] N. Cesa-Bianchi, C. Gentile, and Y. Mansour. Regret minimization for reserve prices in second-price auctions. In Proceedings of the ACM-SIAM Symposium on Discrete Algorithms (SODA13), 2013.
[8] N. Cesa-Bianchi and G. Lugosi. Prediction, learning, and games. Cambridge University Press, 2006.
[9] N. Cesa-Bianchi, Y. Mansour, and G. Stoltz. Improved second-order bounds for prediction with expert advice. Machine Learning, 66(2/3):321-352, 2007.
[10] V. Dani and T. P. Hayes. Robbing the bandit: Less regret in online geometric optimization against an adaptive adversary. In Proceedings of the Seventeenth Annual ACM-SIAM Symposium on Discrete Algorithms, 2006.
[11] Y. Freund and R.E. Schapire. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of Computer and System Sciences, 55(1):119-139, 1997.
[12] A. Gyorgy and G. Neu. Near-optimal rates for limited-delay universal lossy source coding. In IEEE International Symposium on Information Theory, pages 2218-2222, 2011.
[13] T. Jun. A survey on the bandit problem with switching costs. De Economist, 152:513-541, 2004.
[14] A. Kalai and S. Vempala. Efficient algorithms for online decision problems. Journal of Computer and System Sciences, 71:291-307, 2005.
[15] N. Littlestone and M.K. Warmuth. The weighted majority algorithm. Information and Computation, 108:212-261, 1994.
[16] O. Maillard and R. Munos. Adaptive bandits: Towards the best history-dependent strategy. In Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 2010.
[17] H. B. McMahan and A. Blum. Online geometric optimization in the bandit setting against an adaptive adversary. In Proceedings of the Seventeenth Annual Conference on Learning Theory, 2004.
[18] N. Merhav, E. Ordentlich, G. Seroussi, and M.J. Weinberger. Sequential strategies for loss functions with memory. IEEE Transactions on Information Theory, 48(7):1947-1958, 2002.
[19] C. Mesterharm. Online learning with delayed label feedback. In Proceedings of the Sixteenth International Conference on Algorithmic Learning Theory, 2005.
[20] R. Ortner. Online regret bounds for Markov decision processes with deterministic transitions. Theoretical Computer Science, 411(29-30):2684-2695, 2010.
[21] O. Shamir. On the complexity of bandit and derivative-free stochastic convex optimization. CoRR, abs/1209.2388, 2012.
9
| 5151 |@word exploitation:4 version:5 stronger:2 dekel:2 open:1 seek:1 attainable:3 invoking:1 eld:1 incurs:1 arti:1 harder:1 reduction:8 ours:1 interestingly:1 past:5 existing:2 current:4 od:1 surprising:1 written:1 must:5 informative:1 cant:1 designed:2 interpretable:1 implying:2 intelligence:1 warmuth:1 accordingly:1 cult:2 beginning:3 gure:1 characterization:1 boosting:1 unbounded:1 symposium:3 shorthand:1 prove:7 manner:1 x0:6 secret:1 indeed:2 market:3 expected:18 behavior:1 multi:2 automatically:1 little:1 armed:3 begin:3 provided:1 bounded:46 notation:4 moreover:1 matched:1 what:1 xed:10 hindsight:1 guarantee:15 cial:1 every:1 charge:1 nutshell:1 universit:1 rm:2 demonstrates:1 control:2 unit:2 omit:1 appear:1 before:3 understood:1 tends:1 consequence:2 switching:39 despite:1 lugosi:1 might:1 plus:1 studied:4 dif:2 limited:1 range:12 seventeenth:2 weizmann:1 acknowledgment:1 investment:1 regret:69 differs:1 nite:7 universal:1 matching:2 word:5 get:4 cannot:1 impossible:1 measurable:1 deterministic:4 elusive:1 straightforward:2 economics:1 convex:1 survey:1 assigns:2 react:1 rule:1 his:12 proving:3 handle:1 notion:3 unanswered:1 shamir:2 imagine:2 play:2 strengthen:1 controlling:3 heavily:1 construction:1 us:4 trick:1 labeled:1 observed:1 ft:45 role:1 capture:2 trade:1 observes:6 mentioned:2 intuition:2 complexity:1 miny:1 depend:2 f2:3 easily:2 stock:7 various:1 tx:2 choosing:2 abernethy:1 whose:1 supplementary:7 say:1 relax:1 otherwise:1 statistic:1 online:11 sequence:25 agrawal:1 relevant:1 realization:1 sixteenth:1 description:1 rst:8 yaniv:1 seroussi:1 progress:1 eq:8 strong:2 signi:2 trading:3 involves:1 implies:1 stochastic:5 exploration:4 milano:1 prede:1 material:7 forcefully:1 require:2 f1:10 generalization:2 ftx:2 nonoblivious:2 randomization:1 extension:1 strictly:4 hold:5 considered:3 algorithmic:1 reserve:1 vary:1 adopt:1 smallest:1 achieves:2 consecutive:2 label:1 weighted:1 minimization:1 dani:1 clearly:2 gaussian:3 rather:2 kalai:1 focus:6 
High-Dimensional Gaussian Process Bandits
Josip Djolonga
ETH Zürich
[email protected]
Andreas Krause
ETH Zürich
[email protected]
Volkan Cevher
EPFL
[email protected]
Abstract
Many applications in machine learning require optimizing unknown functions
defined over a high-dimensional space from noisy samples that are expensive to
obtain. We address this notoriously hard challenge, under the assumptions that
the function varies only along some low-dimensional subspace and is smooth
(i.e., it has a low norm in a Reproducible Kernel Hilbert Space). In particular, we
present the SI-BO algorithm, which leverages recent low-rank matrix recovery
techniques to learn the underlying subspace of the unknown function and applies
Gaussian Process Upper Confidence sampling for optimization of the function.
We carefully calibrate the exploration-exploitation tradeoff by allocating the
sampling budget to subspace estimation and function optimization, and obtain the
first subexponential cumulative regret bounds and convergence rates for Bayesian
optimization in high-dimensions under noisy observations. Numerical results
demonstrate the effectiveness of our approach in difficult scenarios.
1 Introduction
The optimization of non-linear functions whose evaluation may be noisy and expensive is a challenge that has important applications in sciences and engineering. One approach to this notoriously
hard problem takes a Bayesian perspective, which uses the predictive uncertainty in order to trade
exploration (gathering data for reducing model uncertainty) and exploitation (focusing sampling
near likely optima), and is often called Bayesian Optimization (BO). Modern BO algorithms are
quite successful, surpassing even human experts in learning tasks: e.g., gait control for the Sony
AIBO, convolutional neural networks, structural SVMs, and Latent Dirichlet Allocation [1, 2, 3].
Unfortunately, the theoretical efficiency of these methods depends exponentially on the (often high) dimension of the domain over which the function is defined. A way to circumvent this "curse of dimensionality" is to make the assumption that only a small number of the dimensions actually
matter. For example, the cost function of neural networks effectively varies only along a few dimensions [2]. This idea has been also at the root of nonparametric regression approaches [4, 5, 6, 7].
To this end, we propose an algorithm that learns a low dimensional, not necessarily axis-aligned,
subspace and then applies Bayesian optimization on this estimated subspace. In particular, our SI-BO approach combines low-rank matrix recovery with Gaussian Process Upper Confidence Bound
sampling in a carefully calibrated manner. We theoretically analyze its performance, and prove
bounds on its cumulative regret. To the best of our knowledge, we prove the first subexponential
bounds for Bayesian optimization in high dimensions under noisy observations. In contrast to existing approaches, which have an exponential dependence on the ambient dimension, our bounds have
in fact polynomial dependence on the dimension. Moreover, our performance guarantees depend
explicitly on what we could have achieved if we had known the subspace in advance.
Previous work. Exploration-exploitation tradeoffs were originally studied in the context of finite
multi-armed bandits [8]. Since then, results have been obtained for continuous domains, starting
with the linear [9] and Lipschitz-continuous cases [10, 11]. A more recent algorithm that enjoys
theoretical bounds for functions that are sampled from a Gaussian Process (GP) or belong to some Reproducible Kernel Hilbert Space (RKHS) is GP-UCB [12]. The use of GPs to negotiate exploration-exploitation tradeoffs originated in the areas of response surface and Bayesian optimization, for
which there are a number of approaches (cf., [13]), perhaps most notably the Expected Improvement [14] approach, which has recently received theoretical justification [15], albeit only in the
noise-free setting.
Bandit algorithms that exploit low-dimensional structure of the function appeared first for the linear
setting, where under sparsity assumptions one can obtain bounds, which depend only weakly on the
ambient dimension [16, 17]. In [18] the more general case of functions sampled from a GP under the
same sparsity assumptions was considered. The idea of applying random projections to BO has recently been introduced [19]. They provide bounds on the simple regret under noiseless observations, while we
also analyze the cumulative regret and allow noisy observations. Also, unless the low-dimensional
space is of dimension 1, our bounds on the simple regret improve on theirs. In [7] the authors approximate functions that live on low-dimensional subspaces using low-rank recovery and analysis
techniques. While providing uniform approximation guarantees, their approach is not tailored towards exploration-exploitation tradeoffs, and does not achieve sublinear cumulative regret. In [20] the stochastic and adversarial cases for axis-aligned Hölder continuous functions are considered.
Our specific contributions in this paper can be summarized as follows:
- We introduce the SI-BO algorithm for Bayesian bandit optimization in high dimensions, admitting a large family of kernel functions. Our algorithm is a natural but non-trivial fusion of modern low-rank subspace approximation tools with GP optimization methods.
- We derive theoretical guarantees on SI-BO's cumulative and simple regret in high dimensions with noise. To the best of our knowledge, these are the first theoretical results on the sample complexity and regret rates that are subexponential in the ambient dimension.
- We provide experimental results on synthetic data and classical benchmarks.
2 Background and Problem Statement
Goal. In plain words, we wish to sequentially optimize a bounded function over a compact, convex subset $D \subseteq \mathbb{R}^d$. Without loss of generality, we denote the function by $f : D \to [0, 1]$ and let $x^*$ be a maximizer. The algorithm proceeds in a total of $T$ rounds. In each round $t$, it asks an oracle for the function value at some point $x_t$ and it receives back the value $f(x_t)$, possibly corrupted by noise. Our goal is to choose points such that their values are close to the optimum $f(x^*)$.
As performance metric, we consider the regret, which tells us how much better we could have done in round $t$ had we known $x^*$, or formally $r_t = f(x^*) - f(x_t)$. In many applications, such as recommender systems, robotic control, etc., we care about the quality of the points chosen at every time step $t$. Hence, a natural quantity to consider is the cumulative regret, defined as $R_T = \sum_{t=1}^T r_t$. One can also consider the simple regret, defined as $S_T = \min_{t=1}^T r_t$, measuring the quality of the best solution found so far. We will give bounds on the more challenging notion of cumulative regret, which also bounds the simple regret via $S_T \le R_T / T$.
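These two notions can be illustrated with a short sketch (a hypothetical toy sequence of queried values, not data from the paper):

```python
import numpy as np

def regrets(f_opt, f_values):
    """Cumulative regret R_T and simple regret S_T for a sequence of queried values.

    f_opt: the optimal value f(x*); f_values: the observed values f(x_t), t = 1..T.
    """
    r = f_opt - np.asarray(f_values, dtype=float)  # instantaneous regrets r_t
    R_T = r.sum()                                  # cumulative regret
    S_T = r.min()                                  # simple regret: best point so far
    return R_T, S_T

# Toy illustration: queries that approach the optimum f(x*) = 1.
R, S = regrets(1.0, [0.2, 0.6, 0.9])
assert np.isclose(R, 1.3) and np.isclose(S, 0.1)
assert S <= R / 3  # S_T <= R_T / T
```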
Low-dimensional functions in high dimensions. Unfortunately, our problem cannot be tractably solved without further assumptions on the properties of the function $f$. What is worse is that the usual compact support and smoothness assumptions cannot achieve much: the minimax lower bound on the sample complexity is exponential in $d$ [21, 6, 7]. We hence assume that the function effectively varies only along a small number of true active dimensions: i.e., the function lives on a $k \ll d$-dimensional subspace. Typically, $k$ or an upper bound on $k$ is assumed known [4, 5, 7, 6]. Formally, we suppose that there exists some function $g : \mathbb{R}^k \to [0, 1]$ and a matrix $A \in \mathbb{R}^{k \times d}$ with orthogonal rows so that $f(x) = g(Ax)$. We will additionally assume that $g \in C^2$, which is necessary to bound the errors from the linear approximation that we will make. Further, w.l.o.g., we assume that $D = B^d(1 + \bar\epsilon)$ for some $\bar\epsilon > 0$, where we define $B^d(r)$ to be the closed ball of radius $r$ around 0 in $\mathbb{R}^d$.¹ To be able to recover the subspace, we also need the condition that $g$ has Lipschitz continuous second order derivatives and a full rank Hessian at 0, which is satisfied for many functions [7].
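A minimal sketch of this model assumption, with a hypothetical $g$ and a random $A$ (not the paper's benchmark functions): $f$ is constant along every direction in the null space of $A$.

```python
import numpy as np

rng = np.random.default_rng(4)
d, k = 8, 2
A = np.linalg.qr(rng.standard_normal((d, k)))[0].T     # k x d with orthogonal rows
g = lambda u: float(u[0] ** 2 - u[1])                  # hypothetical low-dimensional g
f = lambda x: g(A @ x)                                 # f(x) = g(Ax)

x = rng.standard_normal(d)
# Any direction v orthogonal to the rows of A leaves f unchanged: f "lives" on row(A).
v = rng.standard_normal(d)
v -= A.T @ (A @ v)                                     # project out the active subspace
assert abs(f(x + v) - f(x)) < 1e-9
```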
Smooth, low-complexity functions. In addition to the low-dimensional subspace assumption, we
also assume that g is smooth. One way to encode our prior is to assume that the function g resides in
1 Our method can be extended to any convex compact set; see Section 5.2 in [22].
Algorithm 1 The SI-BO algorithm
Require: $m_X$, $m_\Phi$, $\lambda$, $\epsilon$, $k$, oracle for $f$, kernel $\kappa$
  $C \leftarrow m_X$ samples uniformly from $S^{d-1}$
  for $i \leftarrow 1$ to $m_\Phi$ do
    $\Phi_i \leftarrow m_X$ samples uniformly from $\{\pm 1/\sqrt{m_\Phi}\}^d$
  $y \leftarrow$ compute using Equation 1
  $\hat X_{DS} \leftarrow$ Dantzig Selector using $y$ (see Equation 2); compute the SVD $\hat X_{DS} = \hat U \hat\Sigma \hat V^T$
  $\hat A \leftarrow \hat U^{(k)T}$ // principal $k$ vectors of $\hat U$; $D \leftarrow$ all $(\hat A x, y)$ pairs queried so far
  Use GP inference to obtain $\mu_1(\cdot)$, $\sigma_1(\cdot)$
  for $t \leftarrow 1$ to $T - m_X(m_\Phi + 1)$ do
    $z_t \leftarrow \arg\max_z \mu_t(z) + \beta_t^{1/2}\sigma_t(z)$,  $y_t \leftarrow f(\hat A^T z_t) + \text{noise}$,  $D.\mathrm{add}(z_t, y_t)$
a Reproducing Kernel Hilbert Space (RKHS; cf. [23]), which allows us to quantify g's complexity via its norm $\|g\|_{\mathcal{H}_\kappa}$. The RKHS for some positive semidefinite kernel $\kappa(\cdot,\cdot)$ can be constructed by completing the set of functions $\sum_{i=1}^n \alpha_i \kappa(x_i, \cdot)$ under a suitable inner product. In this work, we use isotropic kernels, i.e., those that depend only on the distance between points, since the problem is rotation invariant and we can only recover A up to some rotation.
Here is a final summary of our problem and its underlying assumptions:
1. We wish to maximize $f : B^d(1 + \bar\epsilon) \to [0, 1]$, where $f(x) = g(Ax)$ for some matrix $A \in \mathbb{R}^{k \times d}$ with orthogonal rows, and g belongs to some RKHS $\mathcal{H}_\kappa$.
2. The kernel $\kappa$ is isotropic, $\kappa(x, x') = \kappa_0(x - x') = \kappa_{00}(\|x - x'\|_2)$, and $\kappa_0$ is continuous, integrable, and with a Fourier transform $F\kappa_0$ that is isotropic and radially non-increasing.²
3. The function g has Lipschitz continuous 2nd-order derivatives and a full rank Hessian at 0.
4. The function g is $C^2$ on a compact support and $\max_{|\alpha| \le 2} \|D^\alpha g\|_\infty \le C_2$ for some $C_2 > 0$.
5. The oracle noise is Gaussian with zero mean and a known variance $\sigma^2$.
3 The SI-BO Algorithm
The SI-BO algorithm performs two separate exploration and exploitation stages: (1) subspace identification (SI), i.e. estimating the subspace on which the function is supported, and then (2) Bayesian
optimization (BO), in order to optimize the function on the learned subspace. A key challenge here
is to carefully allocate samples between these phases.
We first give a detailed outline for SI-BO in Alg. 1, deferring its theoretical analysis to Section 4.
Given the (noisy) oracle for f, we first evaluate the function at several suitably chosen points and then use a low-rank recovery algorithm to compute a matrix $\hat A$ that spans a subspace well aligned with the one generated by the true matrix A. Once we have computed $\hat A$, similarly to [22, 7], we define the function which we optimize as $\hat g(z) = f(\hat A^T z) = g(A\hat A^T z)$. Thus, we effectively work with an approximation $\hat f$ to $f$ given by $\hat f(x) = \hat g(\hat A x) = g(A\hat A^T \hat A x)$. With the approximation at hand, we apply BO, in particular the GP-UCB algorithm, on $\hat g$ for the remaining steps.
Subspace Learning. We learn A using the approach from [7], which reduces the learning problem to that of low-rank matrix recovery. We construct a set of $m_X$ points $C = [\xi_1, \ldots, \xi_{m_X}]$, which we call sampling centers, and consider the matrix of gradients at those points, $X = [\nabla f(\xi_1), \ldots, \nabla f(\xi_{m_X})]$. Using the chain rule, we have $X = A^T[\nabla g(A\xi_1), \ldots, \nabla g(A\xi_{m_X})]$. Because A is a matrix of size $k \times d$, it follows that the rank of X is at most k. This suggests that using low-rank approximation techniques, one may be able to (up to rotation) infer A from X.
of f . Consider a fixed sampling center ?. If we make a linear approximation with step size ? to the
directional derivative at center ? in direction ? then, by Taylor?s theorem, for a suitable ?(x, ?, ?):
1
?
h?, AT ?g(A?)i = (f (? + ??) ? f (?)) ? ?T ?2 f (?)? .
?
|2
{z
}
E(?,?,?)
2 This is the same assumption as in [15]. Radially non-increasing means that if $\|w\| \le \|w'\|$ then $F\kappa_0(w) \ge F\kappa_0(w')$. Note that this is satisfied by the RBF and Matérn kernels.
Thus, sampling the finite difference $f(\xi + \lambda\zeta) - f(\xi)$ provides (up to the curvature error $E(\xi,\zeta,\lambda)$ and sampling noise) information about the one-dimensional subspace spanned by $A^T \nabla g(A\xi)$. To estimate it accurately, we must observe multiple directions $\zeta$. Further, to infer the full k-dimensional subspace A, we need to consider at least $m_X \ge k$ centers. Consequently, for each center $\xi_i$ we define a set of directions, arranged in a total of $m_\Phi$ matrices $\Phi_i = [\zeta_{i,1}, \zeta_{i,2}, \ldots, \zeta_{i,m_X}] \in \mathbb{R}^{d \times m_X}$. We can now define the following linear system:

$$y = \mathcal{A}(X) + e + z, \qquad y_i = \frac{1}{\lambda}\sum_{j=1}^{m_X}\big(f(\xi_j + \lambda\zeta_{i,j}) - f(\xi_j)\big), \qquad (1)$$

where the linear operator $\mathcal{A}$ is defined as $\mathcal{A}(X)_i = \mathrm{tr}(\Phi_i^T X)$, the curvature errors have been accumulated in $e$, and the noise has been put in the vector $z$, which is distributed as $z_i \sim \mathcal{N}(0,\, 2 m_X \sigma^2/\lambda^2)$.
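A minimal numerical sketch of the measurements in Equation (1), under simplifying assumptions (a hypothetical smooth g, noiseless oracle, illustrative sizes): each $y_i$ matches $\mathrm{tr}(\Phi_i^T X)$ up to the curvature error, which vanishes as the step size shrinks.

```python
import numpy as np

rng = np.random.default_rng(0)
d, k, m_X, m_Phi, lam = 6, 2, 8, 20, 1e-4

A = np.linalg.qr(rng.standard_normal((d, k)))[0].T    # hidden k x d, orthogonal rows
g = lambda u: np.sin(u[0]) + 0.5 * np.cos(u[1])       # hypothetical smooth g
f = lambda x: g(A @ x)                                # f varies only along rows of A

# Centers on the unit sphere and sign-pattern direction matrices Phi_i.
C = rng.standard_normal((d, m_X))
C /= np.linalg.norm(C, axis=0)
Phi = rng.choice([-1.0, 1.0], size=(m_Phi, d, m_X)) / np.sqrt(m_Phi)

# Equation (1), noiseless: y_i = (1/lam) * sum_j [f(xi_j + lam*zeta_ij) - f(xi_j)].
y = np.array([sum(f(C[:, j] + lam * Phi[i, :, j]) - f(C[:, j])
                  for j in range(m_X)) / lam for i in range(m_Phi)])

# y_i approximates tr(Phi_i^T X) with X = [grad f(xi_1), ..., grad f(xi_mX)],
# where grad f(x) = A^T grad g(Ax); for our g, grad g(u) = (cos u_0, -0.5 sin u_1).
grad_g = lambda u: np.array([np.cos(u[0]), -0.5 * np.sin(u[1])])
X = np.column_stack([A.T @ grad_g(A @ C[:, j]) for j in range(m_X)])
exact = np.array([np.trace(Phi[i].T @ X) for i in range(m_Phi)])
assert np.allclose(y, exact, atol=1e-3)   # curvature error is O(lam)
```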
Given the structure of the problem, we can make use of several low-rank recovery algorithms. For concreteness, we choose the Dantzig Selector (DS, [24]), which recovers low-rank matrices via

$$\min_M\; \|M\|_* \quad \text{subject to} \quad \big\|\mathcal{A}^*\big(\underbrace{y - \mathcal{A}(M)}_{\text{residual}}\big)\big\| \le \mu, \qquad (2)$$
where k?k? is the nuclear norm and k?k is the spectral norm. The DS will successfully recover a
? close to the true solution in the Frobenius norm and moreover this distance decreases
matrix X
linearly with ?. As shown in [7], choosing the centers C uniformly at ?
random from the unit sphere
Sd?1 , choosing each direction vector uniformly at random from {?1/ m? }k , and?in the case of
? w.h.p., as long as m?
noisy observations, resampling f repeatedly?suffices to obtain an accurate X
and mX are sufficiently large. The precise choices of these quantities are analyzed in Section 4.
? by taking its top k left singular vectors.
Finally, we extract the matrix A? from the SVD of X,
?
Because the DS will find a matrix X close to X, due to a result by Wedin [25] we know that the
learned subspace will be close to the true one.
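The final extraction step can be sketched as follows; note that this sketch substitutes a plain truncated SVD of a noiseless gradient matrix for the Dantzig Selector output, so it only illustrates the last stage of the pipeline.

```python
import numpy as np

rng = np.random.default_rng(1)
d, k, m_X = 10, 3, 12

A = np.linalg.qr(rng.standard_normal((d, k)))[0].T       # true k x d, orthogonal rows
G = rng.standard_normal((k, m_X))                        # stand-ins for grad g(A xi_j)
X = A.T @ G                                              # gradient matrix, rank <= k

# In SI-BO, X-hat comes from the Dantzig Selector; here the noiseless X is used
# directly, so a truncated SVD plays the selector's role.
U, s, Vt = np.linalg.svd(X)
A_hat = U[:, :k].T                                       # estimated k x d basis

# The two row spaces agree: the orthogonal projectors onto them coincide.
P, P_hat = A.T @ A, A_hat.T @ A_hat
assert np.allclose(P, P_hat, atol=1e-6)
```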
Optimizing $\hat g$. Once we have an approximate $\hat A$, we optimize the function $\hat g(z) = f(\hat A^T z)$ on the low-dimensional domain $Z = B^k(1 + \bar\epsilon)$. Concretely, we use GP-UCB [12], because it exhibits state-of-the-art empirical performance and enjoys strong theoretical bounds for the cumulative regret. It requires that $\hat g$ belongs to the RKHS and that the noise, when conditioned on the history, is zero-mean and almost surely bounded by some $\hat\sigma$. Section 4 shows that this is indeed true with high probability.
In order to trade exploration and exploitation, the GP-UCB algorithm computes, for each point z, a score that combines the predictive mean that we have inferred for that point with its variance, which quantifies the uncertainty in our estimate. They are combined linearly with a time-dependent weighting factor $\beta_t$ in the following surrogate function

$$\mathrm{ucb}(z) = \mu_t(z) + \beta_t^{1/2}\,\sigma_t(z) \qquad (3)$$

for a suitably chosen $\beta_t = 2B + 300\,\gamma_t \log^3(t/\delta)$. Here, B is an upper bound on the squared RKHS norm of the function that we optimize, $\delta$ is an upper bound on the failure probability, and $\gamma_t$ depends on the kernel [12]: cf. Section 4 and footnote 3. The algorithm then greedily maximizes the ucb score above.
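A minimal sketch of one GP-UCB step (hypothetical one-dimensional data; the grid, kernel length scale, and noise level are illustrative choices, not the paper's settings): the GP posterior is computed in closed form and the score of Equation (3) is maximized over a candidate grid.

```python
import numpy as np

def rbf(a, b, ell=0.5):
    # Squared-exponential kernel matrix between 1-d point sets a and b.
    return np.exp(-(a[:, None] - b[None, :]) ** 2 / (2 * ell ** 2))

def ucb_next_point(Z, z_obs, y_obs, beta_t, noise_var=1e-4):
    """One GP-UCB step on a 1-d candidate grid Z: fit the GP posterior, maximize ucb."""
    K = rbf(z_obs, z_obs) + noise_var * np.eye(len(z_obs))
    Ks = rbf(Z, z_obs)
    K_inv = np.linalg.inv(K)
    mu = Ks @ (K_inv @ y_obs)                          # posterior mean mu_t(z)
    var = 1.0 - np.einsum('ij,ij->i', Ks @ K_inv, Ks)  # posterior variance
    sigma = np.sqrt(np.clip(var, 0.0, None))           # posterior std sigma_t(z)
    score = mu + np.sqrt(beta_t) * sigma               # ucb(z), Equation (3)
    return Z[np.argmax(score)]

Z = np.linspace(-1.0, 1.0, 201)
z_obs = np.array([-0.8, 0.0, 0.7])
y_obs = np.array([0.1, 0.9, 0.3])

greedy = ucb_next_point(Z, z_obs, y_obs, beta_t=0.0)    # pure exploitation
explore = ucb_next_point(Z, z_obs, y_obs, beta_t=25.0)  # exploration-heavy
```

With $\beta_t = 0$ the rule exploits the current best region; a large $\beta_t$ pulls the next query toward poorly explored points with high posterior variance.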
Note that finding the maximum of this non-convex and in general multi-modal function, while
considered to be cheaper than evaluating f at a new point, is by itself a hard problem and it is
usually approached by either sampling on a grid in the domain, or using some global Lipschitz
optimizer [13]. Hence, by reducing the dimension of the domain Z over which we have to optimize,
our algorithm has the additional benefit that this process can be performed more efficiently.
Handling the noise. The last ingredient that we need is theory on how to pick $\hat\sigma$ so that it bounds the noise during the execution of GP-UCB w.h.p., and how to select $\mu$ in (2) so that the true matrix X is feasible in the DS. Due to the fast decay of the tails of the Gaussian distribution, we can pick $\hat\sigma = \sigma\big(2\log\frac{1}{\delta} + 2\log T + \log\frac{1}{2\pi}\big)^{1/2}$, where T is the number of GP-UCB iterations and $\sigma^2$ is the variance of the noise. Then the noise will be trapped in $[-\hat\sigma, \hat\sigma]$ with probability at least $1 - \delta$.
3 If the bound B is not known beforehand, then one can use a doubling trick.
Figure 1: A 2-dimensional function f (x, y) varying along a 1-dimensional subspace and its projections on different subspaces. The numbers are the respective cosine distances.
The analysis on $\mu$ comes from [7]. They bound $\|\mathcal{A}^*(e + z)\|$ using the assumption that the second order derivatives are bounded and, as shown in [24], the fact that z has a Gaussian distribution:

$$\|\mathcal{A}^*(e + z)\| \le \frac{C_2\,\lambda\,\sqrt{d}\,m_X k^2}{2\sqrt{m_\Phi}} + 1.2\,\frac{\sqrt{5\,m_X m_\Phi}\,\sigma}{\lambda}. \qquad (4)$$

If there is no noise, the bound still holds by setting $\sigma = 0$. This bound, intuitively, relates the approximation quality $\mu$ of the subspace to the quantities $m_\Phi$, $m_X$ as well as the step size $\lambda$.
4 Theoretical Analysis
Overview. A crucial choice in our algorithm is how to allocate samples (by choosing $m_\Phi$ and $m_X$ appropriately) to the tasks of subspace learning and function optimization. We now analyze both phases, and determine how to split the queries in order to optimize the cumulative regret bounds. Let us first consider the regret incurred in the second phase, in the ideal (but unrealistic) case that the subspace is estimated exactly (i.e., $\hat A = A$). This question was answered recently in [12], where it is proven that it is bounded by $O^*(\sqrt{T}(B\sqrt{\gamma_T} + \gamma_T))$.⁴ Hereby, the quantity $\gamma_T$ is defined as

$$\gamma_T = \max_{S \subseteq D,\, |S| = T} H(y_S) - H(y_S \mid f),$$

where $y_S$ are the values of f at the points in S, corrupted by Gaussian noise, and $H(\cdot)$ is the entropy.
In [12] sublinear bounds for ?T have been computed forseveral popular kernels. For example, for
the RBF kernel in k dimensions, ?T = O (log T )k+1 . Further, B is a bound on the squared
2
norm kgkH? of g w.r.t. kernel ?. Note that generally ?T grows exponentially with k, rendering the
application of GP-UCB directly to the high-dimensional problem intractable.
What happens if the subspace A? is estimated incorrectly? Fortunately, w.h.p. the estimated function
g? still remains in the RKHS associated with kernel ?. However, the norm k?
g kH? may increase, and
?
consequently may the regret. Moreover, the considered f disagrees with the true f , and consequently
additional regret per sample may be incurred by ? = ||f? ? f ||? . As an illustration of the effect of
misestimated subspaces see Figure 1. We can observe that subspaces far from the true one stretch
the function more, thus increasing its RKHS norm.
We now state a general result that formalizes these insights by bounding the cumulative regret in terms of the samples allocated to subspace learning, and the subspace approximation quality.

Lemma 1 Assume that we spend $0 < n \le T$ samples to learn the subspace such that $\|f - \hat f\|_\infty \le \epsilon$, $\|\hat g\| \le B$ and the error is bounded by $\hat\sigma$, each w.p. at least $1 - \delta/4$. If we run the GP-UCB algorithm for the remaining $T - n$ steps with the suggested $\hat\sigma$ and $\delta/4$, then the following bound on the cumulative regret holds w.p. at least $1 - \delta$:

$$R_T \le n + \underbrace{\epsilon T}_{\text{approx. error}} + \underbrace{O^*\big(\sqrt{T}(B\sqrt{\gamma_t} + \gamma_t)\big)}_{R_{\mathrm{UCB}}(T,\,\hat g,\,\kappa)}$$

4 We have used the notation $O^*(f) = O(f \log f)$ to suppress the log factors. $\Omega^*(\cdot)$ is analogously defined.
[Figure 2: four panels with $\cos\theta$ = [1.00, 1.00], [0.04, 0.00], [0.99, 0.04], and [0.97, 0.95], respectively.]
Figure 2: Approximations $\hat g$ resulting from differently aligned subspaces. Note that inaccurate estimation (the middle two cases) can wildly distort the objective.
where $R_{\mathrm{UCB}}(T, \hat g, \kappa)$ is the regret of GP-UCB when run for T steps using $\hat g$ and kernel $\kappa$ (see footnote 5).
Lemma 1 breaks down the regret in terms of the approximation error incurred by subspace misestimation, and the optimization error incurred by the resulting increased complexity $\|\hat g\|^2_{\mathcal{H}_\kappa} \le B$. We now analyze these effects, and then prove our main regret bounds.
Effects of Subspace Alignment. A notion that will prove to be very helpful for analyzing both the approximation precision $\epsilon$ and the norm of $\hat g$ is the set of angles between the subspaces that are defined by A and $\hat A$. The following definition [26] makes this notion precise.

Definition 2 Let $A, \hat A \in \mathbb{R}^{k \times d}$ be two matrices with orthogonal rows so that $AA^T = \hat A\hat A^T = I$. We define the vector of cosines between the spanned subspaces, $\cos\theta(A, \hat A)$, to be equal to the singular values of $A\hat A^T$. Analogously, $\sin\theta(A, \hat A)_i = \big(1 - \cos\theta(A, \hat A)_i^2\big)^{1/2}$.
Let us see how $\hat A$ affects $\hat g$. Because $\hat g(z) = g(A\hat A^T z)$, the matrix $M = A\hat A^T$, which converts any point from its coordinates determined by $\hat A$ to the coordinates defined by A, will be of crucial importance. First, note that its singular values are cosines and are between $-1$ and 1. This means that it can only shrink the vectors that we apply it to (possibly by different amounts in different directions). The effect on $\hat g$ is that it might only "see" a small part of the whole space, and its shape might be distorted, which in turn will increase its RKHS complexity (see Figure 2 for an illustration).
Lemma 3 If $g \in \mathcal{H}_\kappa$ for a kernel that is isotropic with a radially non-increasing Fourier transform, and $\hat g(x) = g(A\hat A^T x)$ for some $A, \hat A$ with orthogonal rows, then for $C = C_2\sqrt{2k}\,(1 + \bar\epsilon)$,

$$\|f - \hat f\|_\infty \le C\,\|\sin\theta(A, \hat A)\| \quad\text{and}\quad \|\hat g\|^2_{\mathcal{H}_\kappa} \le |\mathrm{prod}\cos\theta(A, \hat A)|^{-1}\,\|g\|^2_{\mathcal{H}_\kappa}. \qquad (5)$$

Here, we use the notation $\mathrm{prod}\, x = \prod_{i=1}^d x_i$ to denote the product of the elements of a vector. By decreasing the angles we tackle both issues: the approximation error $\epsilon = \|f - \hat f\|_\infty$ is reduced, and the norm of $\hat g$ gets closer to the one of g. There is one nice interpretation of the product of the cosines: it is equal to the determinant of the matrix M. Hence, $\hat g$ will fail to be in the RKHS only if M is rank deficient, as dimensions are then collapsed.
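Definition 2 and the quantities in Lemma 3 are easy to compute numerically. The following sketch (random subspaces, an assumed toy setup) checks that the cosines are the singular values of $A\hat A^T$ and that their product equals $|\det M|$:

```python
import numpy as np

rng = np.random.default_rng(2)
k, d = 2, 5

def random_basis(k, d, rng):
    # k x d matrix with orthonormal rows, via QR of a Gaussian matrix.
    return np.linalg.qr(rng.standard_normal((d, k)))[0].T

A = random_basis(k, d, rng)
A_hat = random_basis(k, d, rng)

# Definition 2: cosines of the principal angles = singular values of A A_hat^T.
M = A @ A_hat.T
cosines = np.linalg.svd(M, compute_uv=False)
sines = np.sqrt(1.0 - np.clip(cosines, 0.0, 1.0) ** 2)

assert np.all((cosines >= 0) & (cosines <= 1 + 1e-12))
# prod cos equals |det M|, the inflation factor in Lemma 3's norm bound.
assert np.isclose(np.prod(cosines), abs(np.linalg.det(M)))
# A perfectly aligned estimate has all cosines equal to 1.
assert np.allclose(np.linalg.svd(A @ A.T, compute_uv=False), 1.0)
```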
Regret Bounds. We now present our main bounds on the cumulative regret. In order to achieve sublinear regret, we need a way of controlling $\epsilon$ and $\|\hat g\|_{\mathcal{H}_\kappa}$. In the following, we show how this goal can be achieved. As it turns out, subspace learning is substantially harder in the case of noisy observations. Therefore, we focus on the easier, noise-free setting first.

Noiseless Observations. We should note that the theory behind GP-UCB still holds in the deterministic case, as it only requires the noise to be bounded a.s. by $\hat\sigma$. The following theorem guarantees that in this setting, for non-linear kernels, we have a regret dominated by that of GP-UCB, which is of order $\Omega^*(\sqrt{T\gamma_T})$, as $\gamma_T$ is usually exponential in k.

Theorem 4 If the observations are noiseless, we can pick $m_X = O(kd\log(1/\delta))$, $\lambda = \frac{1}{k^{2.25} d^{3/2}\, T^{1/2}}$ and $m_\Phi = O(k^2 d \log(1/\delta))$ so that with probability at least $1 - \delta$ we have the following:

$$R_T \le O\big(k^3 d^2 \log^2(1/\delta)\big) + 2\,R_{\mathrm{UCB}}(T, g, \kappa).$$
5 Because the noise parameter $\hat\sigma$ depends on T, we have to slightly change the bounds from [12], as we have a term of order $O(\sqrt{\log T + \log(1/\delta)})$; cf. supplementary material.
Noisy Observations. Equation 4 hints that the noise can have a dramatic effect on learning efficiency. As already mentioned, the DS gets better results as we decrease $\mu$. In the noiseless case, it suffices to increase the number of directions $m_\Phi$ and decrease the step size $\lambda$ in estimating the finite differences. However, the second term in $\mu$ can only be reduced by decreasing the variance $\sigma^2$. As a result, each point that we evaluate is sampled n times, and we take as its value the average. Moreover, note that because the standard deviation decreases as $1/\sqrt{n}$, we have to resample at least $\epsilon^{-2}$ times, and this significantly increases the number of samples that we need. Nevertheless, we are able to obtain cumulative regret bounds (and thereby the first convergence guarantees and rates) for this setting, which only polynomially depend on d. Unfortunately, the dependence on T is now weaker than in the noiseless setting (Theorem 4), and the regret due to the subspace learning might dominate that of GP-UCB.
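The resampling step rests on the standard fact that averaging n i.i.d. noisy evaluations shrinks the noise standard deviation by $1/\sqrt{n}$; a quick numerical check (illustrative parameters, not the paper's experiment settings):

```python
import numpy as np

rng = np.random.default_rng(3)
sigma, n, trials = 0.5, 400, 2000

# Averaging n noisy oracle calls shrinks the noise std from sigma to sigma/sqrt(n),
# which is why each query point is resampled in the noisy setting.
means = rng.normal(0.0, sigma, size=(trials, n)).mean(axis=1)
assert abs(means.std() - sigma / np.sqrt(n)) < 0.005
```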
Theorem 5 If the observations are noisy, we can pick $\lambda = \frac{1}{k^{2.25} d^{1.5}\, T^{1/5}}$ and all other parameters as in the previous theorem. Moreover, we have to resample each point $O(\sigma^2 k^2 d\, T^{2/5} m_\Phi/\epsilon^2)$ times. Then, with probability at least $1 - \delta$,

$$R_T \le O\big(\sigma^2 k^{11.5} d^7 T^{4/5} \log^3(1/\delta)\big) + 2\,R_{\mathrm{UCB}}(T, g, \kappa).$$
Mismatch on the effective dimension k. All models are imperfect in some sense, and the structure of a general f is impossible to identify unless we have further scientific evidence beyond the data. In our case, the assumption f(x) = g(Ax) for some k more or less takes the weakest form for indicating our hope that BO can succeed from a sub-exponential sample size. In general, we must tune k to a degree that reflects the anticipated complexity of the learning problem. Fortunately, all the guarantees are preserved if we assume a $k > k_{\text{true}}$ for some true synthetic model where f(x) = g(Ax) holds. Underfitting k leads to additional errors that are well-controlled in low-rank subspace estimation [24]. The impact of underfitting in our setting is left for future work.
5 Experiments
The main intent of our experiments is to provide a proof of concept, confirming that SI-BO not just
in theory provides the first subexponential regret bounds, but also empirically obtains low average
regret for Bayesian optimization in high dimensions.
Baselines. We compare SI-BO against the following baseline approaches:
- RandomS-UCB, which runs GP-UCB on a random subspace.
- RandomH-UCB, which runs GP-UCB on the high-dimensional space. At each iteration we pick 1000 points at random and choose the one with the highest UCB score.
- Exact-UCB, which runs GP-UCB on the exact (but in practice unknown) subspace.
The $\beta_t$ parameter in the GP-UCB score was set as recommended in [12] for finite sets. To optimize the UCB score, we sampled on a grid on the low-dimensional subspace. For all of the measurements we have added zero-mean Gaussian noise with $\sigma = 0.01$.
Data sets. We carry out experiments in the following settings:

- GP Samples. We generate random 2-dimensional samples from a GP with Matérn kernel with smoothness parameter $\nu = 5/2$, length scale $\ell = 1/2$ and signal variance $\sigma_f^2 = 1$. The samples are "hidden" in a random 2-dimensional subspace in 100 dimensions.
- Gabor Filters. The second data set is inspired by experimental design in neuroscience [27]. The goal is to determine visual stimuli that maximally excite some neuron, which reacts to edges in the images. We consider the function $f(x) = \exp(-(w^T x - 1)^2)$, where $w$ is a Gabor filter of size $17 \times 17$ and the set of admissible signals is $[0, 1]^d$.
In the appendix we also include results for the Branin function, a classical optimization benchmark.
Results. The results are presented in Figure 3. We show the averages of 20 runs (10 runs for GP-Posterior), and the shaded areas represent the standard error around the mean. We show both the average regret and the simple regret (i.e., suboptimality of the best solution found so far). We find that although SI-BO spends a total of $m_X(m_\Phi + 1)$ samples to learn the subspace and thus incurs
[Figure 3 plots: top row shows average regret $R_t/t$ vs. number of samples for (a) GP-Posterior, (b) GP-Posterior with different k (curves UCB-1, UCB-2, UCB-3), and (c) Gabor; bottom row shows simple regret vs. number of samples for (d) GP-Posterior, (e) GP-Posterior with different k, and (f) Gabor. Methods compared: our approach, Exact-UCB, RandomS-UCB, RandomH-UCB.]
Figure 3: Performance comparison on different datasets. Our SI-BO approach outperforms the natural benchmarks in terms of cumulative regret, and competes well with the unrealistic Exact-UCB approach that knows the true subspace A.
much regret during this phase, learning the subspace pays off, both for average and simple regret,
and SI-BO ultimately outperforms the baseline methods on both data sets. This demonstrates the
value of accurate subspace estimation for Bayesian optimization in high dimensions.
Mis-specified k. What happens if we do not know the dimensionality k of the low-dimensional subspace? To test this, we experimented with the stability of SI-BO w.r.t. k. We sampled 2-dimensional GP-Posterior functions and ran SI-BO with k set to 1, 2 and 3. From the figure above we can see that in this scenario SI-BO is relatively stable to this parameter mis-specification.
6 Conclusion
We have addressed the problem of optimizing high-dimensional functions from noisy and expensive samples. We presented the SI-BO algorithm, which tackles this challenge under the assumption that the objective varies only along a low-dimensional subspace and has low norm in a suitable RKHS. By fusing modern techniques for low-rank matrix recovery and Bayesian bandit optimization in a carefully calibrated manner, it addresses the exploration-exploitation dilemma and enjoys cumulative regret bounds which only polynomially depend on the ambient dimension. Our results hold for a wide family of RKHSs, including the popular RBF and Matérn kernels. Our experiments on different data sets demonstrate that our approach outperforms natural benchmarks.
Acknowledgments. A. Krause acknowledges SNF 200021-137971, DARPA MSEE FA8650-11-1-7156, ERC StG 307036 and a Microsoft Faculty Fellowship. V. Cevher acknowledges MIRG-268398, ERC Future Proof, SNF 200021-132548, SNF 200021-146750, and SNF CRSII2-147633.
References
[1] D. Lizotte, T. Wang, M. Bowling, and D. Schuurmans. Automatic gait optimization with Gaussian process regression. In Proc. of IJCAI, pages 944-949, 2007.
[2] J. Bergstra and Y. Bengio. Random search for hyper-parameter optimization. The Journal of Machine Learning Research, 13:281-305, 2012.
[3] J. Snoek, H. Larochelle, and R. P. Adams. Practical Bayesian optimization of machine learning algorithms. In Neural Information Processing Systems, 2012.
[4] Ker-Chau Li. Sliced inverse regression for dimension reduction. Journal of the American Statistical Association, 86(414):316-327, 1991.
[5] G. Raskutti, M. J. Wainwright, and B. Yu. Minimax rates of estimation for high-dimensional linear regression over ℓq-balls. Information Theory, IEEE Transactions on, 57(10):6976-6994, 2011.
[6] S. Mukherjee, Q. Wu, and D. Zhou. Learning gradients on manifolds. Bernoulli, 16(1):181-207, 2010.
[7] H. Tyagi and V. Cevher. Learning non-parametric basis independent models from point queries via low-rank methods. Applied and Computational Harmonic Analysis, 2014.
[8] H. Robbins. Some aspects of the sequential design of experiments. Bulletin of the American Mathematical Society, 58(5):527-535, 1952.
[9] P. Auer. Using confidence bounds for exploitation-exploration trade-offs. The Journal of Machine Learning Research, 3:397-422, 2003.
[10] R. Kleinberg, A. Slivkins, and E. Upfal. Multi-armed bandits in metric spaces. In STOC, pages 681-690, 2008.
[11] S. Bubeck, R. Munos, G. Stoltz, and C. Szepesvári. Online optimization in X-armed bandits. In NIPS, 2008.
[12] N. Srinivas, A. Krause, S. Kakade, and M. Seeger. Information-theoretic regret bounds for Gaussian process optimization in the bandit setting. IEEE Transactions on Information Theory, 58(5):3250-3265, May 2012.
[13] E. Brochu, V. M. Cora, and N. de Freitas. A tutorial on Bayesian optimization of expensive cost functions, with application to active user modeling and hierarchical reinforcement learning. arXiv preprint arXiv:1012.2599, 2010.
[14] J. Močkus. On Bayesian methods for seeking the extremum. In Optimization Techniques IFIP Technical Conference Novosibirsk, July 1-7, 1974, pages 400-404. Springer, 1975.
[15] A. D. Bull. Convergence rates of efficient global optimization algorithms. The Journal of Machine Learning Research, 12:2879-2904, 2011.
[16] A. Carpentier and R. Munos. Bandit theory meets compressed sensing for high dimensional stochastic linear bandit. Journal of Machine Learning Research - Proceedings Track, 22:190-198, 2012.
[17] Y. Abbasi-Yadkori, D. Pal, and C. Szepesvari. Online-to-confidence-set conversions and application to sparse stochastic bandits. In Conference on Artificial Intelligence and Statistics (AISTATS), 2012.
[18] B. Chen, R. Castro, and A. Krause. Joint optimization and variable selection of high-dimensional Gaussian processes. In Proc. International Conference on Machine Learning (ICML), 2012.
[19] Z. Wang, M. Zoghi, F. Hutter, D. Matheson, and N. de Freitas. Bayesian optimization in high dimensions via random embeddings. In Proc. IJCAI, 2013.
[20] H. Tyagi and B. Gärtner. Continuum armed bandit problem of few variables in high dimensions. CoRR, abs/1304.5793, 2013.
[21] R. A. DeVore and G. G. Lorentz. Constructive approximation, volume 303. Springer Verlag, 1993.
[22] M. Fornasier, K. Schnass, and J. Vybiral. Learning functions of few arbitrary linear parameters in high
dimensions. Foundations of Computational Mathematics, pages 1?34, 2012.
[23] B. Sch?olkopf and A.J. Smola. Learning with kernels: Support vector machines, regularization, optimization, and beyond. MIT press, 2001.
[24] E.J. Candes and Y. Plan. Tight oracle inequalities for low-rank matrix recovery from a minimal number
of noisy random measurements. Information Theory, IEEE Transactions on, 57(4):2342?2359, 2011.
[25] P. A. Wedin. Perturbation bounds in connection with singular value decomposition. BIT Numerical
Mathematics, 12(1):99?111, 1972.
[26] G.W. Stewart and J. Sun. Matrix Perturbation Theory, volume 175. Academic Press New York, 1990.
[27] J. G. Daugman. Uncertainty relation for resolution in space, spatial frequency, and orientation optimized
by two-dimensional visual cortical filters. Optical Society of America, Journal, A: Optics and Image
Science, 2:1160?1169, 1985.
9
On Poisson Graphical Models
Eunho Yang
Department of Computer Science
University of Texas at Austin
[email protected]
Pradeep Ravikumar
Department of Computer Science
University of Texas at Austin
[email protected]
Genevera I. Allen
Department of Statistics and
Electrical & Computer Engineering
Rice University
[email protected]
Zhandong Liu
Department of Pediatrics-Neurology
Baylor College of Medicine
[email protected]
Abstract
Undirected graphical models, such as Gaussian graphical models, Ising, and
multinomial/categorical graphical models, are widely used in a variety of applications for modeling distributions over a large number of variables. These standard
instances, however, are ill-suited to modeling count data, which are increasingly
ubiquitous in big-data settings such as genomic sequencing data, user-ratings data,
spatial incidence data, climate studies, and site visits. Existing classes of Poisson
graphical models, which arise as the joint distributions that correspond to Poisson distributed node-conditional distributions, have a major drawback: they can
only model negative conditional dependencies for reasons of normalizability given
its infinite domain. In this paper, our objective is to modify the Poisson graphical model distribution so that it can capture a rich dependence structure between
count-valued variables. We begin by discussing two strategies for truncating the
Poisson distribution and show that only one of these leads to a valid joint distribution. While this model can accommodate a wider range of conditional dependencies, some limitations still remain. To address this, we investigate two additional
novel variants of the Poisson distribution and their corresponding joint graphical
model distributions. Our three novel approaches provide classes of Poisson-like
graphical models that can capture both positive and negative conditional dependencies between count-valued variables. One can learn the graph structure of
our models via penalized neighborhood selection, and we demonstrate the performance of our methods by learning simulated networks as well as a network from
microRNA-sequencing data.
1 Introduction
Undirected graphical models, or Markov random fields (MRFs), are a popular class of statistical
models for representing distributions over a large number of variables. These models have found
wide applicability in many areas including genomics, neuroimaging, statistical physics, and spatial
statistics. Popular instances of this class of models include Gaussian graphical models [1, 2, 3, 4],
used for modeling continuous real-valued data, the Ising model [3, 5], used for modeling binary
data, as well as multinomial graphical models [6] where each variable takes values in a small finite
set. There has also been recent interest in non-parametric extensions of these models [7, 8, 9, 10].
None of these models however are best suited to model count data, where the variables take values
in the set of all positive integers. Examples of such count data are increasingly ubiquitous in big-data
settings, including high-throughput genomic sequencing data, spatial incidence data, climate studies,
user-ratings data, term-document counts, site visits, and crime and disease incidence reports.
In the univariate case, a popular choice for modeling count data is the Poisson distribution. Could
we then model complex multivariate count data using some multivariate extension of the Poisson
distribution? A line of work [11] has focused on log-linear models for count data in the context of
contingency tables; however, the number of parameters in these models grows exponentially with the
number of variables and hence, these are not appropriate for high-dimensional regimes with large
numbers of variables. Yet other approaches are based on indirect copula transforms [12], as well as
multivariate Poisson distributions that do not have a closed, tractable form, and relying on limiting
results [13]. Another important approach defines a multivariate Poisson distribution by modeling
node variables as sums of independent Poisson variables [14, 15]. Since the sum of independent
Poisson variables is Poisson as well, this construction yields Poisson marginal distributions. The
resulting joint distribution, however, becomes intractable to characterize with even a few variables
and moreover, can only model positive correlations, with further restrictions on the magnitude of
these correlations. Other avenues for modeling multivariate count-data include hierarchical models
commonly used in spatial statistics [16].
In a qualitatively different line of work, Besag [17] discusses a tractable and natural multivariate
extension of the univariate Poisson distribution; while this work focused on the pairwise model
case, Yang et al. [18, 19] extended this to the general graphical model setting. Their construction
of a Poisson graphical model (PGM) is simple. Suppose all node-conditional distributions, the
conditional distribution of a node conditioned on the rest of the nodes, are univariate Poisson. Then,
there is a unique joint distribution consistent with these node-conditional distributions, and moreover
this joint distribution is a graphical model distribution that factors according to a graph specified
by the node-conditional distributions. While this graphical model seems like a good candidate to
model multivariate count data, there is one major defect. For the density to be normalizable, the edge
weights specifying the Poisson graphical model distribution have to be non-positive. This restriction
implies that a Poisson graphical model distribution only models negative dependencies, or so called
?competitive? relationships among variables. Thus, such a Poisson graphical model would have
limited practical applicability in modeling more general multivariate count data [20, 21], with both
positive and negative dependencies among the variables.
To address this major drawback of non-positive conditional dependencies of the Poisson MRF,
Kaiser and Cressie [20], Griffith [21] have suggested the use of the Winsorized Poisson distribution. This is the univariate distribution obtained by truncating the integer-valued Poisson random
variable at a finite constant R. Specifically, they propose the use of this Winsorized Poisson as node-conditional distributions, and assert that there exists a consistent joint distribution by following the
construction of [17]. Interestingly, we will show that their result is incorrect and this approach can
never lead to a consistent joint distribution in the vein of [17, 18, 19]. Thus, there currently does not
exist a graphical model distribution for high-dimensional multivariate count data that does not suffer
from severe deficiencies. In this paper, our objective is to specify a joint graphical model distribution
over the set of non-negative integers that can capture rich dependence structures between variables.
The major contributions of our paper are summarized as follows: We first consider truncated Poisson
distributions and (1) show that the approach of [20] is NOT conducive to specifying a joint graphical
model distribution; instead, (2) we propose a novel truncation approach that yields a proper MRF
distribution, the Truncated PGM (TPGM). This model however, still has certain limitations on the
types of variables and dependencies that may be modeled, and we thus consider more fundamental
modifications to the univariate Poisson density's base measure and sufficient statistics. (3) We will
show that in order to have both positive and negative conditional dependencies, the requirements
of normalizability are that the base measure of the Poisson density needs to scale quadratically for
linear sufficient statistics. This leads to (4) a novel Quadratic PGM (QPGM) with linear sufficient
statistics and its logical extension, (5) the Sublinear PGM (SPGM) with sub-linear sufficient statistics that permit sub-quadratic base measures. Our three novel approaches for the first time specify
classes of joint graphical models for count data that permit rich dependence structures between variables. While the focus of this paper is model specification, we also illustrate how our models can be
used to learn the network structure from iid samples of high-dimensional multivariate count data via
neighborhood selection. We conclude our work by demonstrating our models on simulated networks
and by learning a breast cancer microRNA expression network form count-valued next generation
sequencing data.
2 Poisson Graphical Models & Truncation
Poisson graphical models were introduced by [17] for the pairwise case, where they termed these "Poisson auto-models"; [18, 19] provide a generalization of these models. Let $X = (X_1, X_2, \ldots, X_p)$ be a $p$-dimensional random vector where the domain $\mathcal{X}$ of each $X_s$ is $\{0, 1, 2, \ldots\}$, and let $G = (V, E)$ be an undirected graph over $p$ nodes corresponding to the $p$ variables. The pairwise Poisson graphical model (PGM) distribution over $X$ is then defined as
$$P(X) = \exp\Big\{ \sum_{s \in V} \big(\theta_s X_s - \log(X_s!)\big) + \sum_{(s,t) \in E} \theta_{st} X_s X_t - A(\theta) \Big\}. \qquad (1)$$
It can be seen that the node-conditional distributions for the above distribution are given by $P(X_s \mid X_{V \setminus s}) = \exp\{\eta_s X_s - \log(X_s!) - \exp(\eta_s)\}$, which is a univariate Poisson distribution with parameter $\lambda = \exp(\eta_s) = \exp\big(\theta_s + \sum_{t \in N(s)} \theta_{st} X_t\big)$, where $N(s)$ is the neighborhood of node $s$ according to graph $G$.
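To build intuition for these node-conditionals, the following minimal sketch (our own illustration, not code from the paper) runs one-site Gibbs updates on a small PGM with non-positive edge weights; each node is resampled from a Poisson whose rate is the exponential of its canonical parameter.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 3-node chain with non-positive edge weights theta_st,
# as required for normalizability of the original PGM.
theta = np.array([0.5, 0.5, 0.5])            # node parameters theta_s
Theta = np.array([[0.0, -0.2,  0.0],
                  [-0.2, 0.0, -0.2],
                  [0.0, -0.2,  0.0]])        # edge parameters theta_st <= 0

def gibbs_sweeps(x, n_sweeps=1000):
    """One-site Gibbs updates: node s is resampled from a Poisson with
    rate exp(theta_s + sum_t theta_st * x_t), its node-conditional."""
    x = x.copy()
    for _ in range(n_sweeps):
        for s in range(len(x)):
            rate = np.exp(theta[s] + Theta[s] @ x)   # diagonal of Theta is 0
            x[s] = rng.poisson(rate)
    return x

sample = gibbs_sweeps(np.zeros(3))
```

With negative couplings the conditional rates stay bounded, so the chain mixes over small counts.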
As we have noted, there is a major drawback with this Poisson graphical model distribution. Note that the domain of parameters $\theta$ of the distribution in (1) is specified by the normalizability condition $A(\theta) < +\infty$, where $A(\theta) := \log \sum_{X \in \mathcal{X}^p} \exp\big\{ \sum_{s \in V} (\theta_s X_s - \log(X_s!)) + \sum_{(s,t) \in E} \theta_{st} X_s X_t \big\}$.
Proposition 1 (See [17]). Consider the Poisson graphical model distribution in (1). Then, for any parameter $\theta$, $A(\theta) < +\infty$ only if the pairwise parameters are non-positive: $\theta_{st} \le 0$ for $(s,t) \in E$.
The above proposition asserts that the Poisson graphical model in (1) only allows negative edge weights, and consequently can only capture negative conditional relationships between variables.
Thus, even though the Poisson graphical model is a natural extension of the univariate Poisson
distribution, it entails a highly restrictive parameter space with severely limited applicability. The
objective of this paper, then, is to arrive at a graphical model for count data that would allow relaxing
these restrictive assumptions, and model both positively and negatively correlated variables.
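The negativity requirement can be checked numerically: for a two-node model, partial sums of the normalizer over $\{0,\ldots,N\}^2$ stabilize when the edge weight is negative but grow without bound when it is positive. The sketch below is an illustrative computation with arbitrarily chosen parameter values.

```python
import numpy as np

def log_partition_partial(theta1, theta2, theta12, N):
    """Log of the two-node PGM normalizer restricted to {0,...,N}^2."""
    x = np.arange(N + 1)
    logfact = np.concatenate(([0.0], np.cumsum(np.log(np.arange(1, N + 1)))))
    logp = (theta1 * x - logfact)[:, None] + (theta2 * x - logfact)[None, :] \
           + theta12 * np.outer(x, x)
    m = logp.max()
    return m + np.log(np.exp(logp - m).sum())   # log-sum-exp for stability

# Negative edge weight: partial sums converge as N grows.
neg = [log_partition_partial(0.5, 0.5, -0.1, N) for N in (20, 40, 80)]
# Positive edge weight: partial sums keep growing (A(theta) is infinite).
pos = [log_partition_partial(0.5, 0.5, +0.1, N) for N in (20, 40, 80)]
```

The quadratic interaction term eventually dominates the $-\log(X_s!)$ base measures when the edge weight is positive, which is exactly the mechanism behind Proposition 1.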
2.1 Truncation, Winsorization, and the Poisson Distribution
The need for finiteness of $A(\theta)$ imposes a negativity constraint on $\theta$ because of the countably infinite
domain of the random variables. A natural approach to address this would then be to truncate the
domain of the Poisson random variables. In this section, we will investigate the two natural ways in
which to do so and discuss their possible graphical model distributions.
2.1.1 A Natural Truncation Approach
Kaiser and Cressie [20] first introduced an approach to truncate the Poisson distribution in the context of graphical models. Suppose $Z'$ is Poisson with parameter $\lambda$. Then, one can define what they termed a Winsorized Poisson random variable $Z$ as follows: $Z = I(Z' < R)\,Z' + I(Z' \ge R)\,R$, where $I(A)$ is an indicator function, and $R$ is a fixed positive constant denoting the truncation
level. The probability mass function of this truncated Poisson variable, $P(Z; \lambda, R)$, can then be written as
$$P(Z; \lambda, R) = I(Z < R)\, \frac{\lambda^Z}{Z!} \exp(-\lambda) + I(Z = R) \Big(1 - \sum_{i=0}^{R-1} \frac{\lambda^i}{i!} \exp(-\lambda)\Big).$$
Now consider the use of this Winsorized Poisson distribution for node-conditional distributions, $P(X_s \mid X_{V \setminus s})$:
$$I(X_s < R)\, \frac{\lambda_s^{X_s}}{X_s!} \exp(-\lambda_s) + I(X_s = R) \Big(1 - \sum_{k=0}^{R-1} \frac{\lambda_s^k}{k!} \exp(-\lambda_s)\Big),$$
where $\lambda_s = \exp(\eta_s) = \exp\big(\theta_s + \sum_{t \in N(s)} \theta_{st} X_t\big)$. By the Taylor series expansion of the exponential function, this distribution can be expressed in a form reminiscent of the exponential family,
$$P(X_s \mid X_{V \setminus s}) = \exp\big\{ \eta_s X_s - \log(X_s!) + I(X_s = R)\,\psi(\eta_s) - \exp(\eta_s) \big\}, \qquad (2)$$
where $\psi(\eta_s)$ is defined as $\log\Big\{ \frac{R!}{\exp(R\eta_s)} \sum_{k=R}^{\infty} \frac{\exp(k \eta_s)}{k!} \Big\}$.
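As a quick sanity check on the univariate Winsorized pmf above, the following sketch (our own illustration; the function name and parameter values are made up) computes it and verifies it is a proper distribution, with all tail mass collected at $R$.

```python
import numpy as np
from math import exp, factorial

def winsorized_poisson_pmf(lam, R):
    """pmf of Z = min(Z', R) with Z' ~ Poisson(lam); tail mass sits at R."""
    p = np.array([lam ** z / factorial(z) * exp(-lam) for z in range(R)])
    return np.append(p, 1.0 - p.sum())

pmf = winsorized_poisson_pmf(3.0, 10)
```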
We now have the machinery to describe the development in [20] of a Winsorized Poisson graphical
model. Specifically, Kaiser and Cressie [20] assert in a Proposition of their paper that there is a valid
joint distribution consistent with these Winsorized Poisson node-conditional distributions above.
However, in the following theorem, we prove that such a joint distribution can never exist.
Theorem 1. Suppose $X = (X_1, \ldots, X_p)$ is a $p$-dimensional random vector with domain $\{0, 1, \ldots, R\}^p$ where $R > 3$. Then there is no joint distribution over $X$ such that the corresponding node-conditional distributions $P(X_s \mid X_{V \setminus s})$, of a node conditioned on the rest of the nodes, have the form $P(X_s \mid X_{V \setminus s}) \propto \exp\big\{ E(X_{V \setminus s})\, X_s - \log(X_s!) + I(X_s = R)\, \psi\big(E(X_{V \setminus s})\big) \big\}$, where $E(X_{V \setminus s})$, the canonical exponential family parameter, can be an arbitrary function.
Theorem 1 thus shows that we cannot just substitute the Winsorized Poisson distribution in the
construction of [17, 18, 19] to obtain a Winsorized variant of Poisson graphical models.
2.1.2 A New Approach to Truncation
It is instructive to study the probability mass function of the univariate Winsorized Poisson distribution in (2). The "remnant" probability mass of the Poisson distribution for the cases where $X > R$ was all moved to $X = R$. In the process, it is no longer an exponential family, a property that is crucial for compatibility with the construction in [17, 18, 19]. Could we then derive a truncated Poisson distribution that still belongs to the exponential family? It can be seen that the following distribution over a truncated Poisson variable $Z \in \mathcal{X} = \{0, 1, \ldots, R\}$ fits the bill perfectly: $P(Z) = \frac{\exp\{\eta Z - \log(Z!)\}}{\sum_{k \in \mathcal{X}} \exp\{\eta k - \log(k!)\}}$. The random variable $Z$ here is another natural truncated Poisson variant, where the "remnant" probability mass for the cases where $X > R$ was distributed over all the remaining events $X \le R$. It can be seen that this distribution also belongs to the exponential family.
A natural strategy would then be to use this distribution as the node-conditional distributions in the construction of [17, 18]:
$$P(X_s \mid X_{V \setminus s}) = \frac{\exp\Big\{ \big(\theta_s + \sum_{t \in N(s)} \theta_{st} X_t\big) X_s - \log(X_s!) \Big\}}{\sum_{k \in \mathcal{X}} \exp\Big\{ \big(\theta_s + \sum_{t \in N(s)} \theta_{st} X_t\big) k - \log(k!) \Big\}}. \qquad (3)$$
Theorem 2. Let $X = (X_1, X_2, \ldots, X_p)$ be a $p$-dimensional random vector, where each variable $X_s$ for $s \in V$ takes values in the truncated positive integer set $\{0, 1, \ldots, R\}$, where $R$ is a fixed positive constant. Suppose its node-conditional distributions are specified as in (3), where the node-neighborhoods are as specified by a graph $G$. Then, there exists a unique joint distribution that is consistent with these node-conditional distributions, and moreover this distribution belongs to the graphical model represented by $G$, with the form
$$P(X) := \exp\Big\{ \sum_{s \in V} \big(\theta_s X_s - \log(X_s!)\big) + \sum_{(s,t) \in E} \theta_{st} X_s X_t - A(\theta) \Big\},$$
where $A(\theta)$ is the normalization constant.
We call this distribution the Truncated Poisson graphical model (TPGM) distribution. Note that it is
distinct from the original Poisson distribution (1); in particular its normalization constant involves
a summation over finitely many terms. Thus, no restrictions are imposed on the parameters for the
normalizability of the distribution. Unlike the original Poisson graphical model, the TPGM can
model both positive and negative dependencies among its variables.
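For small $p$ and $R$, the TPGM joint of Theorem 2 can be computed by brute-force enumeration, and one can verify that its conditionals take the node-conditional form (3). The following sketch (with illustrative parameter values of our own choosing, including a positive edge weight) does this for $p = 2$.

```python
import numpy as np

R = 5
theta = np.array([0.2, -0.1])   # node parameters
theta12 = 0.3                   # a positive edge weight, now permitted

# Exact TPGM joint on {0,...,R}^2 by enumeration.
x = np.arange(R + 1)
logfact = np.concatenate(([0.0], np.cumsum(np.log(np.arange(1, R + 1)))))
logp = (theta[0] * x - logfact)[:, None] + (theta[1] * x - logfact)[None, :] \
       + theta12 * np.outer(x, x)
joint = np.exp(logp - logp.max())
joint /= joint.sum()

# Conditional P(X1 | X2 = 4) extracted from the joint ...
x2 = 4
cond = joint[:, x2] / joint[:, x2].sum()

# ... matches the node-conditional form (3) built directly.
eta = theta[0] + theta12 * x2
direct = np.exp(eta * x - logfact)
direct /= direct.sum()
```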
There are, however, some drawbacks to this graphical model distribution. First, the domain of the
variables is bounded a priori by the distribution specification, so that it is not broadly applicable
to arbitrary, and possibly infinite, count-valued data. Second, problems arise when the random
variables take on large count values close to R. In particular by examining (3), one can see that
when $X_t$ is large, the mass over $X_s$ gets pushed towards $R$; thus, this truncated version is not always close to the original Poisson density. Therefore, as the truncation value $R$ increases, the possible values that the parameters $\theta$ can take become increasingly negative or close to zero, to prevent all random variables from always taking large count values at the same time. Indeed, as we take $R \to \infty$, we arrive back at the original PGM and its negativity constraints. In summary, the TPGM approach trades off the value of $R$ (it follows the Poisson density more closely when $R$ is large) against the types of dependencies permitted.
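The boundary effect can be seen numerically: holding the canonical parameter $\eta = \theta_s + \sum_t \theta_{st} X_t$ fixed at a positive value, the node-conditional pmf in (3) piles up at $R$ when $R$ is small relative to $\exp(\eta)$, and only recovers the Poisson shape once $R$ is large. A minimal sketch, with the arbitrary choice $\eta = 3$:

```python
import numpy as np

def tpgm_conditional(eta, R):
    """Node-conditional pmf (3) with canonical parameter eta on {0,...,R}."""
    x = np.arange(R + 1)
    logfact = np.concatenate(([0.0], np.cumsum(np.log(np.arange(1, R + 1)))))
    logp = eta * x - logfact
    p = np.exp(logp - logp.max())
    return p / p.sum()

# eta = 3 corresponds to an (untruncated) Poisson rate of exp(3) ~ 20.
p_small = tpgm_conditional(3.0, 10)   # R below the rate: mass piles up at R
p_large = tpgm_conditional(3.0, 50)   # R above the rate: Poisson-like mode
```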
3 A New Class of Poisson Variants and Their Graphical Model Distributions
As discussed in the previous section, taking a Poisson random variable and truncating it may be a
natural approach but does not lead to a valid multivariate graphical model extension, or does so with
some caveats. Accordingly in this section, we investigate the possibility of modifying the Poisson
distribution more fundamentally, by modifying its sufficient statistic and base measure.
Let us first briefly review the derivation of a Poisson graphical model as the graphical model extension of a univariate exponential family distribution, following [17, 18, 19]. Consider a general univariate exponential family distribution for a random variable $Z$: $P(Z) = \exp(\theta B(Z) - C(Z) - D(\theta))$, where $B(Z)$ is the exponential family sufficient statistic, $\theta \in \mathbb{R}$ is the parameter, $C(Z)$ is the base measure, and $D(\theta)$ is the log-partition function. Suppose the node-conditional distributions are all specified by the above exponential family,
$$P(X_s \mid X_{V \setminus s}) = \exp\big\{ E(X_{V \setminus s})\, B(X_s) - C(X_s) - \bar{D}(X_{V \setminus s}) \big\}, \qquad (4)$$
where the canonical parameter of the exponential family is some function $E(\cdot)$ of the rest of the variables $X_{V \setminus s}$ (and hence so is the log-normalization constant $\bar{D}(\cdot)$).
Further, suppose the corresponding joint distribution factors according to the graph $G$, with factors over cliques of size at most $k$. Then, Proposition 2 in [18] shows that there exists a unique joint distribution corresponding to the node-conditional distributions in (4). With clique factors of size at most two, this joint distribution takes the following form:
$$P(X) = \exp\Big\{ \sum_{s \in V} \theta_s B(X_s) + \sum_{(s,t) \in E} \theta_{st} B(X_s) B(X_t) - \sum_{s \in V} C(X_s) - A(\theta) \Big\}.$$
Note that although the log-partition function $A(\theta)$ is usually computationally intractable, the log-partition function $\bar{D}(\cdot)$ of the node-conditional distribution (4) is still tractable, which allows consistent graph structure recovery [18]. Also note that the original Poisson graphical model (1) discussed in Section 2 can be derived from this construction with sufficient statistic $B(X) = X$ and base measure $C(X) = \log(X!)$.
3.1 A Quadratic Poisson Graphical Model
As noted in Proposition 1, the normalizability of this Poisson graphical model distribution, however, requires that the pairwise parameters be negative. A closer look at the proof of Proposition 1 shows that a key driver of the result is that the base measure terms $\sum_{s \in V} C(X_s) = \sum_{s \in V} \log(X_s!)$ scale more slowly than the quadratic pairwise terms $X_s X_t$. Accordingly, we consider the following general distribution over count-valued variables:
$$P(Z) = \exp(\theta Z - C(Z) - D(\theta)), \qquad (5)$$
which has the same sufficient statistic as the Poisson, but a more general base measure $C(Z)$, for some function $C(\cdot)$. The following theorem shows that for normalizability of the resulting graphical model distribution with possibly positive edge-parameters, the base measure cannot be sub-quadratic:
Theorem 3. Suppose $X = (X_1, \ldots, X_p)$ is a count-valued random vector, with joint distribution given by the graphical model extension of the univariate distribution in (5) (following the construction of [17, 18, 19]). Then, if the distribution is normalizable, so that $A(\theta) < \infty$ for $\theta \nleq 0$, it necessarily holds that $C(Z) = \Omega(Z^2)$.
The previous theorem thus suggests using the "Gaussian-esque" quadratic base measure $C(Z) = Z^2$, so that we would obtain the following distribution over count-valued vectors: $P(X) = \exp\big\{ \sum_{s \in V} \theta_s X_s + \sum_{(s,t) \in E} \theta_{st} X_s X_t - c \sum_{s \in V} X_s^2 - A(\theta) \big\}$, for some fixed positive constant $c > 0$. We consider the following generalization of the above distribution:
$$P(X) = \exp\Big\{ \sum_{s \in V} \theta_s X_s + \sum_{(s,t) \in E} \theta_{st} X_s X_t + \sum_{s \in V} \theta_{ss} X_s^2 - A(\theta) \Big\}. \qquad (6)$$
We call this distribution the Quadratic Poisson Graphical Model (QPGM). The following proposition
shows that the QPGM is normalizable while permitting both positive and negative edge-parameters.
Proposition 2. Consider the distribution in (6). Suppose we collate the quadratic term parameters into a $p \times p$ matrix $\Theta$. Then the distribution is normalizable provided the following condition holds: there exists a positive constant $\bar{c}$ such that, for all $X \in W^p$, $X^T \Theta X \le -\bar{c}\, \|X\|_2^2$.
The condition in the proposition would be satisfied provided that the pairwise parameters are pointwise negative, $\Theta < 0$, similar to the original Poisson graphical model. Alternatively, it is also sufficient for the pairwise parameter matrix to be negative-definite, $\Theta \prec 0$, which does allow for positive and negative dependencies, as in the Gaussian distribution.
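This sufficient condition reduces to an eigenvalue computation. A minimal sketch (hypothetical parameter values, not from the paper) with mixed-sign edge parameters:

```python
import numpy as np

# Quadratic-term parameter matrix Theta of a hypothetical 3-node QPGM:
# diagonal entries are the theta_ss coefficients, off-diagonal entries
# the pairwise theta_st, here with mixed signs.
Theta = np.array([[-2.0,  0.5, -0.3],
                  [ 0.5, -2.0,  0.4],
                  [-0.3,  0.4, -2.0]])

# Negative definiteness of the symmetric matrix Theta is sufficient for
# the condition X^T Theta X <= -c ||X||_2^2 of Proposition 2, and it is
# easy to check via the largest eigenvalue.
lam_max = float(np.linalg.eigvalsh(Theta).max())
normalizable = lam_max < 0
print(lam_max, normalizable)
```

Here the matrix is diagonally dominant with negative diagonal, so all eigenvalues are negative and the QPGM is normalizable despite the mixed signs.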
A possible drawback with this distribution is that, due to the quadratic base measure, the QPGM has a Gaussian-esque thin tail. Even though the domains of the Gaussian and the QPGM are distinct, their densities have similar behaviors and shapes as long as $\theta_s + \sum_{t \in N(s)} \theta_{st} X_t \ge 0$. Indeed, the Gaussian log-partition function serves as a variational upper bound for the QPGM. Specifically, under the restriction that $\theta_{ss} < 0$, we arrive at the following upper bound:
$$D(\theta; X_{V \setminus s}) = \log \sum_{X_s \in W} \exp\big(\theta_s X_s + \theta_{ss} X_s^2\big) \le \log \int_{X_s \in \mathbb{R}} \exp\big(\theta_s X_s + \theta_{ss} X_s^2\big)\, dX_s$$
$$= D_{\mathrm{Gauss}}(\theta; X_{\setminus s}) = \tfrac{1}{2} \log 2\pi - \tfrac{1}{2} \log(-2\theta_{ss}) - \frac{1}{4\theta_{ss}} \Big(\theta_s + \sum_{t \in N(s)} \theta_{st} X_t\Big)^2,$$
by relating it to the log-partition function of a node-conditional Gaussian distribution. Thus, node-wise regressions under the QPGM via the above variational upper bound on the partition function would behave similarly to those of a Gaussian graphical model.
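As a quick numerical sanity check (illustrative parameter values, not from the paper), the discrete QPGM node-conditional log-partition can be compared against its Gaussian upper bound:

```python
import math

def qpgm_logpartition(theta_s, theta_ss, max_x=500):
    """log sum_{x=0}^{inf} exp(theta_s*x + theta_ss*x^2) for theta_ss < 0;
    the series is truncated once terms are negligible."""
    terms = [theta_s * x + theta_ss * x * x for x in range(max_x + 1)]
    m = max(terms)
    return m + math.log(sum(math.exp(t - m) for t in terms))

def gauss_upper_bound(theta_s, theta_ss):
    """D_Gauss = 1/2 log(2 pi) - 1/2 log(-2 theta_ss) - theta_s^2/(4 theta_ss)."""
    return (0.5 * math.log(2.0 * math.pi)
            - 0.5 * math.log(-2.0 * theta_ss)
            - theta_s * theta_s / (4.0 * theta_ss))

theta_s, theta_ss = 1.5, -0.5
D = qpgm_logpartition(theta_s, theta_ss)
D_gauss = gauss_upper_bound(theta_s, theta_ss)
print(D, D_gauss)  # the discrete sum stays below the Gaussian integral
```

For this parameter setting the two quantities are close, illustrating why QPGM node-wise regressions behave much like Gaussian ones.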
3.2 A Sub-Linear Poisson Graphical Model
From the previous section, we have learned that so long as we have linear sufficient statistics, $B(X) = X$, we must have a base measure that scales at least quadratically, $C(Z) = \Omega(Z^2)$, for a Poisson-based graphical model (i) to permit both positive and negative conditional dependencies and (ii) to ensure normalizability. Such a quadratic base measure, however, results in a Gaussian-esque thin tail, while we would like to specify a distribution with possibly heavier tails than those of the QPGM. It thus follows that we would need to control the linear Poisson sufficient statistics $B(X) = X$ itself. Accordingly, we consider the following univariate distribution over count-valued variables:
$$P(Z) = \exp\big(\theta\, B(Z; R_0, R) - \log Z! - D(\theta, R_0, R)\big), \quad (7)$$
which has the same base measure $C(Z) = \log Z!$ as the Poisson, but with the following sub-linear sufficient statistics:
$$B(x; R_0, R) = \begin{cases} x & \text{if } x \le R_0 \\[4pt] -\dfrac{x^2}{2(R - R_0)} + \dfrac{R}{R - R_0}\, x - \dfrac{R_0^2}{2(R - R_0)} & \text{if } R_0 < x \le R \\[4pt] \dfrac{R + R_0}{2} & \text{if } x \ge R. \end{cases}$$
We depict this sublinear statistic in Figure 3 in the appendix: up to $R_0$, $B(x)$ increases linearly; after $R_0$, its slope decreases linearly, becoming zero at $R$.
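A direct transcription of the piecewise statistic (with hypothetical truncation parameters $R_0 = 5$, $R = 11$, matching the values used in the case study later) confirms the continuity and slope behavior just described:

```python
def B_sub(x, R0, R):
    """Sub-linear SPGM sufficient statistic: identity up to R0, a
    quadratic taper on (R0, R], and the constant (R + R0)/2 beyond R."""
    if x <= R0:
        return float(x)
    if x <= R:
        return (-x * x / (2.0 * (R - R0))
                + R * x / (R - R0)
                - R0 * R0 / (2.0 * (R - R0)))
    return (R + R0) / 2.0

R0, R = 5, 11
vals = [B_sub(x, R0, R) for x in range(16)]
print(vals)  # continuous at both knots; saturates at (R + R0)/2 = 8
```

Evaluating the middle branch at the knots gives $B(R_0) = R_0$ and $B(R) = (R + R_0)/2$, so the three pieces join continuously with slope 1 at $R_0$ and slope 0 at $R$.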
The following theorem establishes the normalizability of this Sub-Linear Poisson Graphical Model (SPGM):
Theorem 4. Suppose $X = (X_1, \ldots, X_p)$ is a count-valued random vector, with joint distribution given by the graphical model extension of the univariate distribution in (7) (following the construction of [17, 18, 19]):
$$P(X) = \exp\Big(\sum_{s \in V} \theta_s B(X_s; R_0, R) + \sum_{(s,t) \in E} \theta_{st}\, B(X_s; R_0, R)\, B(X_t; R_0, R) - \sum_{s \in V} \log(X_s!) - A(\theta, R_0, R)\Big).$$
This distribution is normalizable, so that $A(\theta) < \infty$, for all pairwise parameters $\theta_{st} \in \mathbb{R}$, $(s,t) \in E$.
On comparing with the QPGM, the SPGM has two distinct advantages: (1) it has heavier tails with milder base measures, as seen in its motivation, and (2) it allows a broader set of feasible pairwise parameters (in fact, all real values), as shown in Theorem 4.
The log-partition function $D(\theta, R_0, R)$ of the node-conditional SPGM involves a summation over infinitely many terms, and hence usually does not have a closed form. The log-partition function of the traditional univariate Poisson distribution, however, can serve as a variational upper bound:

Proposition 3. Consider the node-wise conditional distributions in (7). If $\theta \ge 0$, we obtain the following upper bound: $D(\theta, R_0, R) \le D_{\mathrm{Pois}}(\theta) = \exp(\theta)$.
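Since $B(x; R_0, R)$ is bounded, the node-conditional series converges quickly and the bound of Proposition 3 is easy to verify numerically. A self-contained sketch (illustrative parameters; `B_sub` re-implements the piecewise statistic):

```python
import math

def B_sub(x, R0, R):
    """Piecewise sub-linear sufficient statistic of the SPGM."""
    if x <= R0:
        return float(x)
    if x <= R:
        return (-x * x / (2.0 * (R - R0)) + R * x / (R - R0)
                - R0 * R0 / (2.0 * (R - R0)))
    return (R + R0) / 2.0

def spgm_logpartition(theta, R0, R, max_z=200):
    """D(theta, R0, R) = log sum_z exp(theta*B(z; R0, R) - log z!);
    since B is bounded, truncating the series suffices."""
    terms = [theta * B_sub(z, R0, R) - math.lgamma(z + 1)
             for z in range(max_z + 1)]
    m = max(terms)
    return m + math.log(sum(math.exp(t - m) for t in terms))

theta, R0, R = 1.5, 5, 11
D = spgm_logpartition(theta, R0, R)
D_pois = math.exp(theta)  # Poisson log-partition D_Pois(theta) = exp(theta)
print(D, D_pois)
```

Because $B(z; R_0, R) \le z$ for all $z$, each term of the SPGM series is dominated by the corresponding Poisson term whenever $\theta \ge 0$, which is exactly what the computed values show.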
4 Numerical Experiments
While the focus of this paper is model specification, we can learn our models from iid samples of
count-valued multivariate vectors using neighborhood selection approaches as suggested in [1, 5,
[Figure 1: six ROC panels (TPGM: Hub; Karlis: Hub; Karlis: Scale-free), each shown for n = 50, p = 100 and for n = 200, p = 50. Each panel plots True Positive Rate against False Positive Rate for SPGM, TPGM, Glasso, NPN-Copula, and NPN-Skeptic.]
Figure 1: ROC curves for recovering the true network structure of count-data generated by the TPGM distribution or by [15] (sums of independent Poissons method) for both standard and high-dimensional regimes. Our TPGM and SPGM M-estimators are compared to the graphical lasso [4], the non-paranormal copula-based method [7], and the non-paranormal SKEPTIC estimator [10].
6, 18]. Specifically, we maximize the ℓ1-penalized node-conditional likelihoods for our TPGM,
QPGM and SPGM models using proximal gradient ascent. Also, as our models are constructed in
the framework of [18, 19], we expect extensions of their sparsistency analysis to confirm that the
network structure of our model can indeed be learned from iid data; due to space limitations, this is
left for future work.
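As an illustrative sketch of this learning procedure (a simplified re-implementation on toy data, not the authors' code; all tuning constants are hypothetical), the following fits one node's ℓ1-penalized truncated-Poisson (TPGM) node-conditional likelihood by ISTA-style proximal gradient, soft-thresholding every coefficient except the intercept:

```python
import math
import random

R = 10  # truncation level of the TPGM node-conditional distribution

def log_z(eta):
    """Log-partition of a Poisson truncated to {0, ..., R}."""
    terms = [eta * k - math.lgamma(k + 1) for k in range(R + 1)]
    m = max(terms)
    return m + math.log(sum(math.exp(t - m) for t in terms))

def mean_z(eta):
    """Conditional mean E[X_s | eta] under the truncated Poisson."""
    lz = log_z(eta)
    return sum(k * math.exp(eta * k - math.lgamma(k + 1) - lz)
               for k in range(R + 1))

def neg_loglik(theta, Z, y):
    """Average (unpenalized) node-conditional negative log-likelihood."""
    total = 0.0
    for zi, yi in zip(Z, y):
        eta = theta[0] + sum(t * z for t, z in zip(theta[1:], zi))
        total += -eta * yi + math.lgamma(yi + 1) + log_z(eta)
    return total / len(y)

def ista(Z, y, lam, step=0.005, iters=400):
    """ISTA on the l1-penalized node-conditional NLL (equivalently,
    proximal gradient ascent on the penalized likelihood); the
    intercept theta[0] is left unpenalized."""
    p, n = len(Z[0]), len(y)
    theta = [0.0] * (p + 1)
    for _ in range(iters):
        grad = [0.0] * (p + 1)
        for zi, yi in zip(Z, y):
            eta = theta[0] + sum(t * z for t, z in zip(theta[1:], zi))
            resid = mean_z(eta) - yi
            grad[0] += resid / n
            for j, zj in enumerate(zi):
                grad[j + 1] += resid * zj / n
        theta[0] -= step * grad[0]
        for j in range(1, p + 1):
            w = theta[j] - step * grad[j]
            theta[j] = math.copysign(max(abs(w) - step * lam, 0.0), w)
    return theta

random.seed(0)
# Toy neighborhood problem: the response counts track covariate 1 only.
X = [[random.randint(0, R), random.randint(0, R)] for _ in range(120)]
y = [min(R, row[0] + random.randint(0, 2)) for row in X]
Z = [[v / R for v in row] for row in X]   # rescale covariates to [0, 1]

theta_hat = ista(Z, y, lam=0.05)
nll_hat = neg_loglik(theta_hat, Z, y)
nll_zero = neg_loglik([0.0, 0.0, 0.0], Z, y)
print(theta_hat, nll_hat, nll_zero)
```

On this toy problem the fit assigns the larger coefficient to the informative covariate while the lasso penalty shrinks the noise covariate, which is the neighborhood-selection behavior the sparsistency analysis is expected to formalize.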
Simulation Studies. We evaluate the comparative performance of our TPGM and SPGM methods
for recovering the true network from multivariate count data. Data of dimension n = 200 samples
and p = 50 variables or the high-dimensional regime of n = 50 samples and p = 100 variables
is generated via the TPGM distribution using Gibbs sampling or via the sums of independent Poissons method of [15]. For the former, edges were generated with both positive and negative weights,
while for the latter, only edges with positive weights can be generated. As we expect the SPGM to be
sparsistent for data generated from the SGPM distribution following the work of [18, 19], we have
chosen to present results for data generated from other models. Two network structures are considered that are commonly used throughout genomics: the hub and scale-free graph structures. We
compare the performance of our TPGM and SPGM methods with R set to the maximum count value
to Gaussian graphical models [4], the non-paranormal [7], and the non-paranormal SKEPTIC [10].
In Figure 1, ROC curves, computed by varying the regularization parameter and averaged over 50 replicates, are presented for each scenario. Both TPGM and SPGM perform better on count-valued data than Gaussian-based methods. As expected, the TPGM method has the best results
when data is generated according to its distribution. Additionally, TPGM shows some advantages in
high-dimensional settings. This likely results from a facet of its node-conditional distribution which
places larger mass on strongly dependent count values that are close to R. Thus, the TPGM method
may be better able to infer edges from highly connected networks, such as those considered. Additionally, all methods compared outperform the original Poisson graphical model estimator, given in
[18] (results not shown), as this method can only recover edges with negative weights.
Case Study: Breast Cancer microRNA Networks. We demonstrate the advantages of our graphical models for count-valued data by learning a microRNA (miRNA) expression network from next-generation sequencing data. This data consists of counts of sequencing reads mapped back to a
reference genome and are replacing microarrays, for which Gaussian graphical models (GGMs) are a popular tool, as the preferred measures of gene expression [22]. Level III data was obtained from the Cancer Genome
Atlas (TCGA) [23] and processed according to techniques described in [24]; this data consists of
n = 544 subjects and p = 262 miRNAs. Note that [18, 24] used this same data set to demonstrate
Figure 2: Breast cancer miRNA networks. Networks inferred by (top left) TPGM with R = 11 and (top right) SPGM with R = 11 and R0 = 5. The bottom row presents adjacency matrices of the
inferred networks with that of SPGM occupying the lower triangular portion and that of (left) PGM,
(middle) TPGM with R = 11, and graphical lasso (right) occupying the upper triangular portion.
network approaches for count-data, and thus, we use the same data set so that the results of our novel
methods may be compared to those of existing approaches.
Networks were learned from this data using the original Poisson graphical model, Gaussian graphical models, our novel TPGM approach with R = 11, the maximum count, and our novel SPGM
approach with R = 11 and R0 = 5. Stability selection [25] was used to estimate the sparsity of the
networks in a data-driven manner. Figure 2 depicts the inferred networks for our TPGM and SPGM
methods as well as comparative adjacency matrices to illustrate the differences between our SPGM
method and other approaches. Notice that SPGM and TPGM find similar network structures, but
TPGM seems to find more hub miRNAs. This is consistent with the behavior of the TPGM distribution when strongly correlated counts have values close to R. The original Poisson graphical model,
on the other hand, misses much of the structure learned by the other methods and instead only
finds 14 miRNAs that have major conditionally negative relationships. As most miRNAs work in
groups to regulate gene expression, this result is expected and illustrates a fundamental flaw of the
PGM approach. Compared with Gaussian graphical models, our novel methods for count-valued
data find many more edges and biologically important hub miRNAs. Two of these, mir-375 and
mir-10b, found by both TPGM and SPGM but not by GGM, are known to be key players in breast
cancer [26, 27]. Additionally, our TPGM and SPGM methods find a major clique which consists
of miRNAs on chromosome 19, indicating that this miRNA cluster may be functionally associated
with breast cancer.
Acknowledgments
The authors acknowledge support from the following sources: ARO via W911NF-12-1-0390 and
NSF via IIS-1149803 and DMS-1264033 to E.Y. and P.R; Ken Kennedy Institute for Information
Technology at Rice to G.A. and Z.L.; NSF DMS-1264058 and DMS-1209017 to G.A.; and NSF
DMS-1263932 to Z.L.
References
[1] N. Meinshausen and P. Bühlmann. High-dimensional graphs and variable selection with the Lasso. Annals of Statistics, 34:1436–1462, 2006.
[2] M. Yuan and Y. Lin. Model selection and estimation in the Gaussian graphical model. Biometrika, 94(1):19, 2007.
[3] O. Banerjee, L. El Ghaoui, and A. d'Aspremont. Model selection through sparse maximum likelihood estimation for multivariate Gaussian or binary data. The Journal of Machine Learning Research, 9:485–516, 2008.
[4] J. Friedman, T. Hastie, and R. Tibshirani. Sparse inverse covariance estimation with the lasso. Biostatistics, 9(3):432–441, 2007.
[5] P. Ravikumar, M. J. Wainwright, and J. Lafferty. High-dimensional Ising model selection using ℓ1-regularized logistic regression. Annals of Statistics, 38(3):1287–1319, 2010.
[6] A. Jalali, P. Ravikumar, V. Vasuki, and S. Sanghavi. On learning discrete graphical models using group-sparse regularization. In Inter. Conf. on AI and Statistics (AISTATS), 14, 2011.
[7] H. Liu, J. Lafferty, and L. Wasserman. The nonparanormal: Semiparametric estimation of high dimensional undirected graphs. The Journal of Machine Learning Research, 10:2295–2328, 2009.
[8] A. Dobra and A. Lenkoski. Copula Gaussian graphical models and their application to modeling functional disability data. The Annals of Applied Statistics, 5(2A):969–993, 2011.
[9] H. Liu, F. Han, M. Yuan, J. Lafferty, and L. Wasserman. High dimensional semiparametric Gaussian copula graphical models. arXiv preprint arXiv:1202.2169, 2012.
[10] H. Liu, F. Han, M. Yuan, J. Lafferty, and L. Wasserman. The nonparanormal skeptic. arXiv preprint arXiv:1206.6488, 2012.
[11] S. L. Lauritzen. Graphical models, volume 17. Oxford University Press, USA, 1996.
[12] I. Yahav and G. Shmueli. An elegant method for generating multivariate Poisson random variables. arXiv preprint arXiv:0710.5670, 2007.
[13] A. S. Krishnamoorthy. Multivariate binomial and Poisson distributions. Sankhyā: The Indian Journal of Statistics (1933-1960), 11(2):117–124, 1951.
[14] P. Holgate. Estimation for the bivariate Poisson distribution. Biometrika, 51(1-2):241–287, 1964.
[15] D. Karlis. An EM algorithm for multivariate Poisson distribution and related models. Journal of Applied Statistics, 30(1):63–77, 2003.
[16] N. A. C. Cressie. Statistics for spatial data. Wiley series in probability and mathematical statistics, 1991.
[17] J. Besag. Spatial interaction and the statistical analysis of lattice systems. Journal of the Royal Statistical Society, Series B (Methodological), 36(2):192–236, 1974.
[18] E. Yang, P. Ravikumar, G. I. Allen, and Z. Liu. Graphical models via generalized linear models. In Neur. Info. Proc. Sys., 25, 2012.
[19] E. Yang, P. Ravikumar, G. I. Allen, and Z. Liu. On graphical models via univariate exponential family distributions. arXiv preprint arXiv:1301.4183, 2013.
[20] M. S. Kaiser and N. Cressie. Modeling Poisson variables with positive spatial dependence. Statistics & Probability Letters, 35(4):423–432, 1997.
[21] D. A. Griffith. A spatial filtering specification for the auto-Poisson model. Statistics & Probability Letters, 58(3):245–251, 2002.
[22] J. C. Marioni, C. E. Mason, S. M. Mane, M. Stephens, and Y. Gilad. RNA-seq: an assessment of technical reproducibility and comparison with gene expression arrays. Genome Research, 18(9):1509–1517, 2008.
[23] Cancer Genome Atlas Research Network. Comprehensive molecular portraits of human breast tumours. Nature, 490(7418):61–70, 2012.
[24] G. I. Allen and Z. Liu. A log-linear graphical model for inferring genetic networks from high-throughput sequencing data. IEEE International Conference on Bioinformatics and Biomedicine, 2012.
[25] H. Liu, K. Roeder, and L. Wasserman. Stability approach to regularization selection (StARS) for high dimensional graphical models. arXiv preprint arXiv:1006.3316, 2010.
[26] L. Ma, F. Reinhardt, E. Pan, J. Soutschek, B. Bhat, E. G. Marcusson, J. Teruya-Feldstein, G. W. Bell, and R. A. Weinberg. Therapeutic silencing of mir-10b inhibits metastasis in a mouse mammary tumor model. Nature Biotechnology, 28(4):341–347, 2010.
[27] P. de Souza Rocha Simonini, A. Breiling, N. Gupta, M. Malekpour, M. Youns, R. Omranipour, F. Malekpour, S. Volinia, C. M. Croce, H. Najmabadi, et al. Epigenetically deregulated microRNA-375 is involved in a positive feedback loop with estrogen receptor α in breast cancer cells. Cancer Research, 70(22):9175–9184, 2010.
Conditional Random Fields via Univariate
Exponential Families
Eunho Yang
Department of Computer Science
University of Texas at Austin
[email protected]
Pradeep Ravikumar
Department of Computer Science
University of Texas at Austin
[email protected]
Genevera I. Allen
Department of Statistics and
Electrical & Computer Engineering
Rice University
[email protected]
Zhandong Liu
Department of Pediatrics-Neurology
Baylor College of Medicine
[email protected]
Abstract
Conditional random fields, which model the distribution of a multivariate response
conditioned on a set of covariates using undirected graphs, are widely used in a
variety of multivariate prediction applications. Popular instances of this class of
models, such as categorical-discrete CRFs, Ising CRFs, and conditional Gaussian based CRFs, are not well suited to the varied types of response variables in
many applications, including count-valued responses. We thus introduce a novel
subclass of CRFs, derived by imposing node-wise conditional distributions of response variables conditioned on the rest of the responses and the covariates as
arising from univariate exponential families. This allows us to derive novel multivariate CRFs given any univariate exponential distribution, including the Poisson,
negative binomial, and exponential distributions. In particular, it addresses the common CRF problem of specifying "feature" functions determining the interactions between response variables and covariates. We develop a class of tractable penalized M-estimators to learn these CRF distributions from data, as well as a
unified sparsistency analysis for this general class of CRFs showing exact structure recovery can be achieved with high probability.
1 Introduction
Conditional random fields (CRFs) are a popular class of models that combine the advantages of
discriminative modeling and undirected graphical models. They are widely used across structured
prediction domains such as natural language processing, computer vision, and bioinformatics. The
key idea in this class of models is to represent the joint distribution of a set of response variables
conditioned on a set of covariates using a product of clique-wise compatibility functions. Given an
underlying graph over the response variables, each of these compatibility functions depends on all
the covariates, but only on a subset of response variables within any clique of the underlying graph.
They are thus a discriminative counterpart of undirected graphical models, where we have covariates
that provide information about the multivariate response, and the underlying graph structure encodes
conditional independence assumptions among the responses conditioned on the covariates.
There is a key model specification question that arises, however, in any application of CRFs: how
do we specify the clique-wise sufficient statistics, or compatibility functions (sometimes also called feature functions), that characterize the conditional graphical model between responses? In particular, how do we tune these to the particular types of variables being modeled? Traditionally,
ticular, how do we tune these to the particular types of variables being modeled? Traditionally,
these questions have been addressed either by hand-crafted feature functions, or more generally by
discretizing the multivariate response vectors into a set of indicator vectors and then letting the compatibility functions be linear combinations of the product of indicator functions [1]. This approach,
however, may not be natural for continuous, skewed continuous or count-valued random variables.
Recently, spurred in part by applications in bioinformatics, there has been much research on other
sub-classes of CRFs. The Ising CRF which models binary responses, was studied by [2] and extended to higher-order interactions by [3]. Several versions and extensions of Gaussian-based CRFs
have also been proposed [4, 5, 6, 7, 8]. These sub-classes of CRFs, however, are specific to Gaussian
and binary variable types, and may not be appropriate for multivariate count data or skewed continuous data, for example, which are increasingly seen in big-data settings such as high-throughput
genomic sequencing.
In this paper, we seek to (a) formulate a novel subclass of CRFs that have the flexibility to model
responses of varied types, (b) address how to specify compatibility functions for such a family of
CRFs, and (c) develop a tractable procedure with strong statistical guarantees for learning this class
of CRFs from data. We first show that when node-conditional distributions of responses conditioned
on other responses and covariates are specified by univariate exponential family distributions, there
exists a consistent joint CRF distribution that necessarily has a specific form: with terms that are tensorial products of functions over the responses and functions over the covariates. This subclass of "exponential family" CRFs can be viewed as a conditional extension of the MRF framework
of [9, 10]. As such, this broadens the class of off-the-shelf CRF models to encompass data that
follows distributions other than the standard discrete, binary, or Gaussian instances. Given this
new family of CRFs, we additionally show that if covariates also follow node-conditional univariate
exponential family distributions, then the functions over features in turn are precisely specified by the
exponential family sufficient statistics. Thus, our twin results definitively answer for the first time
the key model specification question of specifying compatibility or feature functions for a broad
family of CRF distributions. We then provide a unified M -estimation procedure, via penalized
neighborhood estimation, to learn our family of CRFs from i.i.d. observations that simultaneously
addresses all three sub-tasks of CRF learning: feature selection (where we select a subset of the
covariates for any response variable), structure recovery (where we learn the graph structure among
the response variables), and parameter learning (where we learn the parameters specifying the CRF
distribution). We also present a single theorem that gives statistical guarantees saying that with highprobability, our M -estimator achieves each of these three sub-tasks. Our result can be viewed as an
extension of neighborhood selection results for MRFs [11, 12, 13]. Overall, this paper provides a
family of CRFs that generalizes many of the sub-classes in the existing literature and broadens the
utility and applicability of CRFs to model many other types of multivariate responses.
2 Conditional Graphical Models via Exponential Families
Suppose we have a p-variate random response vector Y = (Y1 , . . . , Yp ), with each response variable Ys taking values in a set Ys . Suppose we also have a set of covariates X = (X1 , . . . , Xq )
associated with this response vector Y . Suppose G = (V, E) is an undirected graph over p nodes
corresponding to the p response variables. Given the underlying graph G, and the set of cliques
(fully-connected sub-graphs) C of the graph G, the corresponding conditional random field (CRF)
is a set of distributions over the response conditioned on the covariates that satisfy Markov independence assumptions with respect to the graph G. Specifically, letting $\{\phi_c(Y_c, X)\}_{c \in C}$ denote a set of clique-wise sufficient statistics, any strictly positive distribution of $Y$ conditioned on $X$ within the conditional random field family takes the form: $P(Y\,|\,X) \propto \exp\{\sum_{c \in C} \phi_c(Y_c, X)\}$. With a
pair-wise conditional random field distribution, the set of cliques consists of the set of nodes V and
the set of edges E, so that
$$P(Y\,|\,X) \propto \exp\Big\{\sum_{s \in V} \phi_s(Y_s, X) + \sum_{(s,t) \in E} \phi_{st}(Y_s, Y_t, X)\Big\}.$$
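As a concrete illustration (the compatibility functions below are hypothetical, not from the paper), the pairwise form can be evaluated directly:

```python
# Hypothetical pairwise CRF over a 3-node chain 1 - 2 - 3 (indices 0-2).
V = [0, 1, 2]
E = [(0, 1), (1, 2)]

def phi_node(s, y_s, x):
    """Node compatibility: response s interacts linearly with covariate s."""
    return y_s * x[s]

def phi_edge(s, t, y_s, y_t, x):
    """Edge compatibility: product of responses scaled by a shared covariate."""
    return 0.5 * y_s * y_t * x[0]

def unnorm_logdensity(y, x):
    """Log of the unnormalized conditional density P(Y | X)."""
    return (sum(phi_node(s, y[s], x) for s in V)
            + sum(phi_edge(s, t, y[s], y[t], x) for s, t in E))

x = [1.0, -0.5, 2.0]
val = unnorm_logdensity([1, 0, 2], x)
print(val)
```

Every compatibility function sees all covariates but only the responses in its clique, which is exactly the Markov structure the pairwise CRF encodes.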
A key model specification question is how to select the class of sufficient statistics, $\phi$. We have a considerable understanding of how to specify univariate distributions over various types of variables, as well as of how to model their conditional response through regression. Consider the univariate exponential family class of distributions: $P(Z) = \exp(\theta B(Z) + C(Z) - D(\theta))$, with sufficient statistics $B(Z)$, base measure $C(Z)$, and log-normalization constant $D(\theta)$. Such exponential family distributions include a wide variety of commonly used distributions such as Gaussian, Bernoulli,
multinomial, Poisson, exponential, gamma, chi-squared, beta, any of which can be instantiated with
particular choices of the functions $B(\cdot)$ and $C(\cdot)$. Such univariate exponential family distributions
are thus used to model a wide variety of data types including skewed continuous data and count data.
Additionally, through generalized linear models, they are used to model the response of various data
types conditional on a set of covariates. Here, we seek to use our understanding of univariate exponential families and generalized linear models to specify a conditional graphical model distribution.
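For instance, the Poisson distribution is recovered from this form with $B(Z) = Z$, $C(Z) = -\log Z!$, and $D(\theta) = \exp(\theta)$. A quick numerical check of this parametrization (illustrative, not from the paper):

```python
import math

def pois_pmf(z, theta):
    """Poisson written in exponential-family form with natural parameter
    theta: B(Z) = Z, C(Z) = -log Z!, D(theta) = exp(theta)."""
    return math.exp(theta * z - math.lgamma(z + 1) - math.exp(theta))

theta = 0.7
total = sum(pois_pmf(z, theta) for z in range(200))
mean = sum(z * pois_pmf(z, theta) for z in range(200))
print(total, mean, math.exp(theta))  # mass sums to 1; mean is exp(theta)
```

The rate parameter of the usual Poisson parametrization is the mean $\exp(\theta)$, so the natural parameter $\theta$ is its logarithm, the familiar log link of Poisson regression.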
Consider the conditional extension of the construction in [14, 9, 10]. Suppose that the node-conditional distributions of response variables, $Y_s$, conditioned on the rest of the response variables,
YV \s , and the covariates, X, is given by an univariate exponential family:
$$P(Y_s\,|\,Y_{V \setminus s}, X) = \exp\{E_s(Y_{V \setminus s}, X)\, B_s(Y_s) + C_s(Y_s) - \bar D_s(Y_{V \setminus s}, X)\}. \quad (1)$$
Here, the functions $B_s(\cdot)$, $C_s(\cdot)$ are specified by the choice of the exponential family, and the parameter $E_s(Y_{V \setminus s}, X)$ is an arbitrary function of the variables $Y_t$ in $N(s)$ and the covariates $X$;
N (s) is the set of neighbors of node s according to an undirected graph G = (V, E). Would these
node-conditional distributions be consistent with a joint distribution? Would this joint distribution
factor according a conditional random field given by graph G? And would there be restrictions on
the form of the functions Es (YV \s , X)? The following theorem answers these questions. We note
that it generalizes the MRF framework of [9, 10] in two ways: it allows for the presence of conditional covariates, and moreover allows for heterogeneous types and domains of distributions with
the different choices of Bs (?) and Cs (?) at each individual node.
Theorem 1. Consider a p-dimensional random vector Y = (Y_1, Y_2, ..., Y_p) denoting the set of
responses, and let X = (X_1, ..., X_q) be a q-dimensional covariate vector. Consider the following two assertions: (a) the node-conditional distributions P(Y_s | Y_{V\s}, X) are specified by
univariate exponential family distributions as detailed in (1); and (b) the joint multivariate conditional distribution P(Y | X) factors according to the graph G = (V, E) with clique-set C, but with
factors over response-variable-cliques of size at most k. These assertions on the conditional and
joint distributions respectively are consistent if and only if the conditional distribution in (1) has the
tensor-factorized form:

  P(Y_s | Y_{V\s}, X; θ) = exp{ B_s(Y_s) ( θ_s(X) + Σ_{t ∈ N(s)} θ_{st}(X) B_t(Y_t) + ...
      + Σ_{t_2,...,t_k ∈ N(s)} θ_{s t_2 ... t_k}(X) Π_{j=2}^{k} B_{t_j}(Y_{t_j}) ) + C_s(Y_s) − D̄_s(Y_{V\s}) },   (2)
where θ_s·(X) := {θ_s(X), θ_{st}(X), ..., θ_{s t_2...t_k}(X)} is a set of functions that depend only on the
covariates X. Moreover, the corresponding joint conditional random field distribution has the form:

  P(Y | X; θ) = exp{ Σ_s θ_s(X) B_s(Y_s) + Σ_{s∈V} Σ_{t∈N(s)} θ_{st}(X) B_s(Y_s) B_t(Y_t)
      + ... + Σ_{(t_1,...,t_k)∈C} θ_{t_1...t_k}(X) Π_{j=1}^{k} B_{t_j}(Y_{t_j}) + Σ_s C_s(Y_s) − A(θ(X)) },   (3)

where A(θ(X)) is the log-normalization constant.
Theorem 1 specifies the form of the function E_s(Y_{V\s}, X) defining the canonical parameter in the
univariate exponential family distribution (1). This function is a tensor factorization of products of
sufficient statistics of Y_{V\s} and "observation functions" θ(X) of the covariates X alone. A key
point to note is that the observation functions θ(X) in the CRF distribution (3) should ensure that
the density is normalizable, that is, A(θ(X)) < +∞. We also note that we can allow different
exponential families for each of the node-conditional distributions of the response variables, meaning that the domains 𝒴_s, or the sufficient statistics functions B_s(·), can vary across the response
variables Y_s. A common setting of these sufficient statistics functions, however, for many popular
distributions (Gaussian, Bernoulli, etc.), is the linear function B_s(Y_s) = Y_s.
An important special case of the above result is when the joint CRF has response-variable-clique
factors of size at most two. The node-conditional distributions (2) then have the form:

  P(Y_s | Y_{V\s}, X; θ) ∝ exp{ B_s(Y_s) ( θ_s(X) + Σ_{t∈N(s)} θ_{st}(X) B_t(Y_t) ) + C_s(Y_s) },
while the joint distribution in (3) has the form:

  P(Y | X; θ) = exp{ Σ_{s∈V} θ_s(X) B_s(Y_s) + Σ_{(s,t)∈E} θ_{st}(X) B_s(Y_s) B_t(Y_t) + Σ_s C_s(Y_s) − A(θ(X)) },   (4)

with the log-partition function A(θ(X)), given the covariates X, defined as

  A(θ(X)) := log ∫_{𝒴^p} exp{ Σ_{s∈V} θ_s(X) B_s(Y_s) + Σ_{(s,t)∈E} θ_{st}(X) B_s(Y_s) B_t(Y_t) + Σ_s C_s(Y_s) }.   (5)
Theorem 1 thus addresses the model specification question of how to select the compatibility functions in CRFs for varied types of responses. Our framework permits arbitrary observation functions
θ(X), with the only stipulation that the log-partition function be finite. (This only imposes a
restriction when the domain of the response variables is not finite.) In the next section, we address
the second model specification question of how to set the covariate functions.
2.1 Setting Covariate Functions
A candidate approach to specifying the observation functions θ(X) in the CRF distribution above
would be to make distributional assumptions on X. Since Theorem 1 specifies the conditional
distribution P(Y|X), specifying the marginal distribution P(X) would allow us to specify the
joint distribution P(Y, X) without further restrictions on P(Y|X), using the simple product rule
P(X, Y) = P(Y|X) P(X). As an example, suppose that the covariates X follow an MRF distribution with graph G' = (V', E') and parameters ϑ:

  P(X) = exp{ Σ_{u∈V'} ϑ_u φ_u(X_u) + Σ_{(u,v)∈V'×V'} ϑ_{uv} φ_{uv}(X_u, X_v) − A(ϑ) }.
Then, for any CRF distribution P(Y|X) in (4), we have

  P(X, Y) = exp{ Σ_u ϑ_u φ_u(X_u) + Σ_{(u,v)} ϑ_{uv} φ_{uv}(X_u, X_v) + Σ_s θ_s(X) Y_s + Σ_{(s,t)} θ_{st}(X) Y_s Y_t
      + Σ_s C_s(Y_s) − A(ϑ) − A(θ(X)) }.

The joint distribution P(X, Y) is valid provided P(Y|X) and P(X) are valid distributions. Thus,
a distributional assumption on P(X) does not restrict the set of covariate functions in any way.
On the other hand, specifying the conditional distribution P(X|Y) naturally entails restrictions on
the form of P(Y|X). Consider the case where the conditional distributions P(X_u | X_{V'\u}, Y) are
also specified by univariate exponential families:

  P(X_u | X_{V'\u}, Y) = exp{ E_u(X_{V'\u}, Y) B_u(X_u) + C_u(X_u) − D̄_u(X_{V'\u}, Y) },   (6)

where E_u(X_{V'\u}, Y) is an arbitrary function of the rest of the variables, and B_u(·), C_u(·), D̄_u(·) are
specified by the univariate exponential family. Under these additional distributional assumptions in
(6), what form would the CRF distribution in Theorem 1 take? Specifically, what would be the form
of the observation functions θ(X)? The following theorem provides an answer to this question. (In
the following, we use the shorthand s_1^m to denote the sequence (s_1, ..., s_m).)
Theorem 2. Consider the following assertions: (a) the conditional CRF distribution of the responses Y = (Y_1, ..., Y_p) given covariates X = (X_1, ..., X_q) is given by the family (4); (b)
the conditional distributions of the individual covariates given the rest of the variables, P(X_u | X_{V'\u}, Y),
are given by an exponential family of the form in (6); and (c) the joint distribution P(X, Y) belongs to
a graphical model with graph G̃ = (V ∪ V', Ẽ), with clique-set C, with factors of size at most k.
These assertions are consistent if and only if the CRF distribution takes the form:

  P(Y | X) = exp{ Σ_{l=1}^{k} Σ_{t_1^r ∈ V, s_1^{l−r} ∈ V' : (t_1^r, s_1^{l−r}) ∈ C} θ_{t_1^r, s_1^{l−r}} Π_{j=1}^{l−r} B_{s_j}(X_{s_j}) Π_{j=1}^{r} B_{t_j}(Y_{t_j})
      + Σ_{t∈V} C_t(Y_t) − A(θ, X) },   (7)

so that the observation functions θ_{t_1,...,t_r}(X) in the CRF distribution (4) are tensor products of
univariate functions:

  θ_{t_1,...,t_r}(X) = Σ_{l=1}^{k} Σ_{s_1^{l−r} ∈ V' : (t_1^r, s_1^{l−r}) ∈ C} θ_{t_1^r, s_1^{l−r}} Π_{j=1}^{l−r} B_{s_j}(X_{s_j}).
Let us examine the consequences of this theorem for the pair-wise CRF distributions (4). Theorem 2
entails that the observation functions {θ_s(X), θ_{st}(X)} have the following form when the
distribution has factors of size at most two:

  θ_s(X) = α_s + Σ_{u∈V'} α_{su} B_u(X_u),    θ_{st}(X) = α_{st},   (8)

for some constant parameters α_s, α_{su} and α_{st}. Similarly, if the joint distribution has factors of size
at most three, we have:

  θ_s(X) = α_s + Σ_{u∈V'} α_{su} B_u(X_u) + Σ_{(u,v)∈V'×V'} α_{suv} B_u(X_u) B_v(X_v),

  θ_{st}(X) = α_{st} + Σ_{u∈V'} α_{stu} B_u(X_u).   (9)
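As an illustration, the node and edge observation functions in (8) and (9) are simply affine functions of the covariate sufficient statistics. A minimal sketch (variable names are our own, not from the paper):

```python
import numpy as np

def theta_s_mean(alpha_s, alpha_su, Bx):
    # Mean-specified form (8): theta_s(X) = alpha_s + sum_u alpha_su * B_u(X_u)
    # Bx holds the precomputed covariate sufficient statistics B_u(X_u).
    return alpha_s + alpha_su @ Bx

def theta_st_cov(alpha_st, alpha_stu, Bx):
    # Covariance-specified edge weight from (9):
    # theta_st(X) = alpha_st + sum_u alpha_stu * B_u(X_u)
    return alpha_st + alpha_stu @ Bx
```

Sparsity in alpha_su / alpha_stu then directly encodes the feature-selection and third-order interaction structure discussed in Remark 2 below.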
(Remark 1) While we have derived the covariate functions in Theorem 2 by assuming a distributional form on X, using the resulting covariate functions does not necessarily impose distributional
assumptions on X. This is similar to "generative-discriminative" pairs of models [15]: a "generative" naive Bayes distribution for P(X|Y) corresponds to a "discriminative" logistic regression
model for P(Y|X), but the converse need not hold. We can thus leverage the parametric CRF
distributional form in Theorem 2 without necessarily imposing stringent distributional assumptions on X.

(Remark 2) Consider the form of the covariate functions given by (8) compared to (9). What does
sparsity in the parameters entail in terms of conditional independence assumptions? Setting α_{st} = 0
in (8) entails that Y_s is conditionally independent of Y_t given the other responses and all the
covariates. Thus, the parametrization in (8) corresponds to pair-wise conditional independence
assumptions between the responses (structure learning) and between the responses and covariates
(feature selection). In contrast, (9) lets the edge weights between the responses, θ_{st}(X), vary
as a linear combination of the covariates. Setting α_{stu} = 0 entails the lack of a third-order
interaction between the pair of responses Y_s and Y_t and the covariate X_u, conditioned on all
other responses and covariates.
(Remark 3) Our general subclasses of CRFs specified by Theorems 1 and 2 encompass many existing CRF families as special cases, in addition to providing many novel forms of CRFs.
• The Gaussian CRF presented in [7], as well as the reparameterization in [8], can be viewed
as an instance of our framework by substituting Gaussian sufficient statistics in (8): here
the Gaussian mean of the CRF depends on the covariates, but not the covariance. We can
correspondingly derive a novel Gaussian CRF formulation from (9), where the Gaussian
covariance of Y|X also depends on X.
• By using the Bernoulli distribution as the node-conditional distribution, we can derive the
Ising CRF, recently studied in [2] with an application to studying tumor suppressor genes.
• Several novel forms of CRFs can be derived by specifying node-conditional distributions
as Poisson or exponential, for example. With certain distributions, such as the multivariate
Poisson, we would have to enforce constraints on the parameters to ensure
normalizability of the distribution. For the Poisson CRF distribution, it can be verified that
for the log-partition function to be finite, A(θ(X)) < ∞, the observation functions are
constrained to be non-positive, θ_{st}(X) ≤ 0. Such restrictions are typically needed when
the variables have infinite domains.
3 Graphical Model Structure Learning

We now address the task of learning a CRF distribution from our general family given i.i.d. observations of the multivariate response vector and covariates. Structure recovery and estimation for
CRFs has not attracted as much attention as that for MRFs. Schmidt et al. [16] and Torralba et al.
[17] empirically study greedy methods and block ℓ1-regularized pseudo-likelihood, respectively, to
learn the discrete CRF graph structure. Bradley and Guestrin [18] and Shahaf et al. [19] provide guarantees on structure recovery for low tree-width discrete CRFs using graph cuts and a maximum-weight
spanning tree based method, respectively. Cai et al. [4] and Liu et al. [6] provide structure recovery guarantees for their two-stage procedure for recovering (a reparameterization of) a conditional
Gaussian based CRF, and for the semi-parametric partition-based Gaussian CRF, respectively. Here,
we provide a single theorem giving structure recovery guarantees for any CRF from our class
of exponential family CRFs, which encompasses not only Ising and Gaussian based CRFs, but all
other instances within our class, such as Poisson CRFs, exponential CRFs, and so on.
We are given n i.i.d. samples Z := {X^(i), Y^(i)}_{i=1}^{n} from a pair-wise CRF distribution of the form
specified by Theorems 1 and 2, with covariate functions as given in (8):

  P(Y | X; θ*) ∝ exp{ Σ_{s∈V} ( θ*_s + Σ_{u∈N'(s)} θ*_{su} B_u(X_u) ) B_s(Y_s) + Σ_{(s,t)∈E} θ*_{st} B_s(Y_s) B_t(Y_t) + Σ_s C(Y_s) },   (10)
with unknown parameters θ*. The task of CRF parameter learning corresponds to estimating the
parameters θ*; structure learning corresponds to recovering the edge-set E; and feature selection
corresponds to recovering the neighborhoods N'(s) in (10). Note that the log-partition function
A(θ*) is intractable to compute in general (other than in special cases such as Gaussian CRFs). Accordingly, we adopt the node-based neighborhood estimation approach of [12, 13, 9, 10]. Given the
joint distribution in (10), the node-wise conditional distribution of Y_s given the rest of the nodes
and covariates is given by P(Y_s | Y_{V\s}, X; θ*) = exp{ θ̄ B_s(Y_s) + C_s(Y_s) − D̄_s(θ̄) }, which is a
univariate exponential family with parameter θ̄ = θ*_s + Σ_{u∈V'} θ*_{su} B_u(X_u) + Σ_{t∈V\s} θ*_{st} B_t(Y_t),
as discussed in the previous section. The corresponding negative log-conditional-likelihood can be
written as ℓ(θ; Z) := −(1/n) log Π_{i=1}^{n} P(Y_s^(i) | Y_{V\s}^(i), X^(i); θ).
For each node s, we have three components of the parameter set θ := (θ_s, θ^x, θ^y): a scalar θ_s, a
length-q vector θ^x := ∪_{u∈V'} θ_{su}, and a length-(p − 1) vector θ^y := ∪_{t∈V\s} θ_{st}. Then, given samples
Z, these parameters can be selected by the following ℓ1-regularized M-estimator:

  min_{θ ∈ R^{1+(p−1)+q}}  ℓ(θ; Z) + λ_{x,n} ‖θ^x‖_1 + λ_{y,n} ‖θ^y‖_1,   (11)

where λ_{x,n}, λ_{y,n} are the regularization constants. Note that λ_{x,n} and λ_{y,n} need not be the
same, as λ_{y,n} determines the degree of sparsity between Y_s and Y_{V\s}, and similarly λ_{x,n} determines
the degree of sparsity between Y_s and the covariates X. Given this M-estimator, we can recover the
response-variable-neighborhood of the response Y_s as N(s) = {t ∈ V\s : θ^y_{st} ≠ 0}, and the feature-neighborhood of the response Y_s as N'(s) = {u ∈ V' : θ^x_{su} ≠ 0}.
Armed with this machinery, we can provide statistical guarantees on successful learning of all
three sub-tasks of CRFs:

Theorem 3. Consider a CRF distribution as specified in (10). Suppose that the regularization
parameters in (11) are chosen such that

  λ_{x,n} ≥ M_1 √(log q / n),   λ_{y,n} ≥ M_1 √(log p / n),   and   max{λ_{x,n}, λ_{y,n}} ≤ M_2,

where M_1 and M_2 are constants depending on the node-conditional distribution in the form of
the exponential family. Further suppose that min_{t∈N(s)} |θ*_{st}| ≥ (10/λ_min) max{ √(d_x) λ_{x,n}, √(d_y) λ_{y,n} }, where
λ_min is the minimum eigenvalue of the Hessian of the loss function at (θ^{x*}, θ^{y*}), and d_x, d_y are the
number of nonzero elements in θ^{x*} and θ^{y*}, respectively. Then, for some positive constants L, c_1,
c_2, and c_3, if n ≥ L (d_x + d_y)^2 (log p + log q)(max{log n, log(p + q)})^2, then with probability at
least 1 − c_1 max{n, p + q}^{−2} − exp(−c_2 n) − exp(−c_3 n), the following statements hold.

(a) (Parameter Error) For each node s ∈ V, the solution θ̂ of the M-estimation problem in (11) is
unique, with parameter error bound

  ‖θ̂^x − θ^{x*}‖_2 + ‖θ̂^y − θ^{y*}‖_2 ≤ (5/λ_min) max{ √(d_x) λ_{x,n}, √(d_y) λ_{y,n} }.
Figure 1: (a) ROC curves averaged over 50 simulations from a Gaussian CRF with p = 50 responses,
q = 50 covariates, and (left) n = 100 and (right) n = 250 samples. Our method (G-CRF) is
compared to that of [7] (cGGM) and [8] (pGGM). (b) ROC curves for simulations from an Ising
CRF with p = 100 responses, q = 10 covariates, and (left) n = 50 and (right) n = 150 samples.
Our method (I-CRF) is compared to the unconditional Ising MRF (I-MRF). (c) ROC curves for
simulations from a Poisson CRF with p = 100 responses, q = 10 covariates, and (left) n = 50 and
(right) n = 150 samples. Our method (P-CRF) is compared to the Poisson MRF (P-MRF).
(b) (Structure Recovery) The M-estimate recovers the response-feature neighborhoods exactly, so
that N̂'(s) = N'(s), for all s ∈ V.

(c) (Feature Selection) The M-estimate recovers the true response neighborhoods exactly, so that
N̂(s) = N(s), for all s ∈ V.
The proof requires modifying that of Theorem 1 in [9, 10] to allow for two different regularization
parameters, x,n and y,n , and for two distinct sets of random variables (responses and covariates).
This introduces subtleties related to interactions in the analyses. Extending our statistical analysis
in Theorem 3 for pair-wise CRFs to general CRF distributions (3) as well as general covariate
functions, such as in (9), are omitted for space reasons and left for future work.
4 Experiments

Simulation Studies. In order to evaluate the generality of our framework, we simulate data from
three different instances of our model: those given by Gaussian, Bernoulli (Ising), and Poisson
node-conditional distributions. We assume the true conditional distribution P(Y|X) follows (7)
with the parameters θ_s(X) = θ_s + Σ_{u∈V'} α_{su} X_u and θ_{st}(X) = θ_{st} + Σ_{u∈V'} α_{stu} X_u, for some
constant parameters θ_s, α_{su}, θ_{st} and α_{stu}. In other words, we permit both the mean, θ_s(X), and the
covariance or edge-weights, θ_{st}(X), to depend on the covariates.
For the Gaussian CRFs, our goal is to infer the precision (or inverse covariance) matrix. We first
generate covariates as X ~ U[−0.05, 0.05]. Given X, the precision matrix of Y, Θ(X), is generated
as follows. All diagonal elements are set to 1. For each node s, its 4 nearest neighbors in the
√p × √p lattice structure are selected, and θ_{st} = 0 for non-neighboring nodes. For a given edge
structure, the edge strength is then a function of the covariates X, by letting θ_{st}(X) = c + ⟨ω_{st}, X⟩, where
c is a constant bias term and ω_{st} is a target vector of length q. Data with p = 50 responses and
q = 50 covariates was generated for n = 100 and n = 250 samples. Figure 1(a) reports the receiver-operator curves (ROC) averaged over 50 trials for three different methods: the model of [7] (denoted
cGGM), the model of [8] (denoted pGGM), and our method (denoted G-CRF). Results show
that our method outperforms the competing methods, as their edge-weights are restricted to be constants,
while our method allows them to depend linearly on the covariates. Data was similarly generated
using a 4-nearest-neighbor lattice structure for Ising and Poisson CRFs with p = 100 responses,
Figure 2: From left to right: Gaussian MRF, mean-specified Gaussian CRF, and the set corresponding to the covariance-specified Gaussian CRF. The latter shows the third-order interactions between
gene-pairs and each of the five common aberration covariates (EGFR, PTEN, CDKN2A, PDGFRA,
and CDK4). The models were learned from gene expression array data of Glioblastoma samples,
and the plots display the response neighborhoods of gene TWIST1.
q = 10 covariates, and n = 50 or n = 150 samples. Figure 1(b) and Figure 1(c) report the ROC
curves averaged over 50 trials for the Ising and Poisson CRFs respectively. The performance of our
method is compared to that of the unconditional Ising and Poisson MRFs of [9, 10].
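The lattice construction used in the Gaussian simulation can be sketched as follows. For brevity we assume a single shared weight vector ω for all edges (the paper uses a per-edge ω_st), and note that the edge weights must stay small enough for Θ(X) to remain positive definite; the function is our own illustration:

```python
import numpy as np

def lattice_precision(side, x, omega, c=0.2):
    """Precision matrix Theta(x) for a side-by-side lattice Gaussian CRF with
    covariate-dependent edge weights theta_st(x) = c + <omega, x>, mimicking
    the simulation setup. Unit diagonal; only horizontal/vertical lattice
    neighbors get nonzero off-diagonal entries."""
    p = side * side
    Theta = np.eye(p)
    w = c + omega @ x  # shared edge weight (a simplifying assumption)
    for i in range(side):
        for j in range(side):
            s = i * side + j
            for di, dj in [(0, 1), (1, 0)]:  # right and down neighbors
                ii, jj = i + di, j + dj
                if ii < side and jj < side:
                    t = ii * side + jj
                    Theta[s, t] = Theta[t, s] = w
    return Theta
```

Responses could then be sampled from N(0, Θ(x)^{-1}) for each draw of the covariates x, giving data whose conditional dependence structure varies with x.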
Real Data Example: Genetic Networks of Glioblastoma. We demonstrate the performance of
our CRF models by learning genetic networks of Glioblastoma conditioned on common copy number aberrations. Level III gene expression data measured by Agilent arrays for n = 465 Glioblastoma tumor samples, as well as copy number variation measured by CGH-arrays, were downloaded
from the Cancer Genome Atlas data portal [20]. The data was processed according to standard
techniques, and we only consider genes from the C2 Pathway Database. The five most common
copy number aberrations across all subjects were taken as covariates. We fit our Gaussian "mean-specified" CRFs (with covariate functions given in (8)) and Gaussian "covariance-specified" CRFs
(with covariate functions given in (9)) by penalized neighborhood estimation to learn the graph
structure of gene expression responses, p = 876, conditional on q = 5 aberrations: EGFR, PTEN,
CDKN2A, PDGFRA, and CDK4. Stability selection [21] was used to determine the sparsity of the
network.
Due to space limitations, the entire network structures are not shown. Instead, we show the results of
the mean- and covariance-specified Gaussian CRFs and that of the Gaussian graphical model (GGM)
for one particularly important gene neighborhood: TWIST1 is a transcription factor for epithelial
to mesenchymal transition [22] and has been shown to promote tumor invasion in multiple cancers
including Glioblastoma [23]. The neighborhoods of TWIST1 learned by GGMs and mean-specified
CRFs share many of the known interactors of TWIST1, such as SNAI2, MGP, and PMAIP1 [24].
The mean-specified CRF is more sparse as conditioning on copy number aberrations may explain
many of the conditional dependencies with TWIST1 that are captured by GGMs, demonstrating the
utility of conditional modeling via CRFs. For the covariance-specified Gaussian CRF, we plot the
neighborhood given by α_{stu} in (9) for the five values of u corresponding to each aberration. The
results of this network denote third-order effects between gene-pairs and aberrations, and are thus
even more sparse, with no neighbors for the interactions between TWIST1 and PTEN, CDK4, and
EGFR. TWIST1 has different interactions with PDGFRA and CDKN2A, which have high frequency in proneural subtypes of Glioblastoma tumors. Thus, our covariance-specified CRF network
may indicate that these two aberrations are the most salient in interacting with pairs of genes that include the gene TWIST1. Overall, our analysis has demonstrated the applied advantages of our CRF
models; namely, one can study the network structure between responses conditional on covariates
and/or between pairs of responses that interact with particular covariates.
Acknowledgments
The authors acknowledge support from the following sources: ARO via W911NF-12-1-0390 and
NSF via IIS-1149803 and DMS-1264033 to E.Y. and P.R; Ken Kennedy Institute for Information
Technology at Rice to G.A. and Z.L.; NSF DMS-1264058 and DMS-1209017 to G.A.; and NSF
DMS-1263932 to Z.L..
References
[1] M. J. Wainwright and M. I. Jordan. Graphical models, exponential families and variational inference. Foundations and Trends in Machine Learning, 1(1–2):1–305, December 2008.
[2] J. Cheng, E. Levina, P. Wang, and J. Zhu. Sparse Ising models with covariates. Arxiv preprint arXiv:1209.6419, 2012.
[3] S. Ding, G. Wahba, and X. Zhu. Learning higher-order graph structure with features by structure penalty. In NIPS, 2011.
[4] T. Cai, H. Li, W. Liu, and J. Xie. Covariate adjusted precision matrix estimation with an application in genetical genomics. Biometrika, 2011.
[5] S. Kim and E. P. Xing. Statistical estimation of correlated genome associations to a quantitative trait network. PLoS Genetics, 2009.
[6] H. Liu, X. Chen, J. Lafferty, and L. Wasserman. Graph-valued regression. In NIPS, 2010.
[7] J. Yin and H. Li. A sparse conditional Gaussian graphical model for analysis of genetical genomics data. Annals of Applied Statistics, 5(4):2630–2650, 2011.
[8] X. Yuan and T. Zhang. Partial Gaussian graphical model estimation. Arxiv preprint arXiv:1209.6419, 2012.
[9] E. Yang, P. Ravikumar, G. I. Allen, and Z. Liu. Graphical models via generalized linear models. In Neur. Info. Proc. Sys., 25, 2012.
[10] E. Yang, P. Ravikumar, G. I. Allen, and Z. Liu. On graphical models via univariate exponential family distributions. Arxiv preprint arXiv:1301.4183, 2013.
[11] A. Jalali, P. Ravikumar, V. Vasuki, and S. Sanghavi. On learning discrete graphical models using group-sparse regularization. In Inter. Conf. on AI and Statistics (AISTATS), 14, 2011.
[12] N. Meinshausen and P. Bühlmann. High-dimensional graphs and variable selection with the Lasso. Annals of Statistics, 34:1436–1462, 2006.
[13] P. Ravikumar, M. J. Wainwright, and J. Lafferty. High-dimensional Ising model selection using ℓ1-regularized logistic regression. Annals of Statistics, 38(3):1287–1319, 2010.
[14] J. Besag. Spatial interaction and the statistical analysis of lattice systems. Journal of the Royal Statistical Society, Series B (Methodological), 36(2):192–236, 1974.
[15] A. Y. Ng and M. I. Jordan. On discriminative vs. generative classifiers: A comparison of logistic regression and naive Bayes. In Neur. Info. Proc. Sys., 2002.
[16] M. Schmidt, K. Murphy, G. Fung, and R. Rosales. Structure learning in random fields for heart motion abnormality detection. In Computer Vision and Pattern Recognition (CVPR), pages 1–8, 2008.
[17] A. Torralba, K. P. Murphy, and W. T. Freeman. Contextual models for object detection using boosted random fields. In NIPS, 2004.
[18] J. K. Bradley and C. Guestrin. Learning tree conditional random fields. In ICML, 2010.
[19] D. Shahaf, A. Chechetka, and C. Guestrin. Learning thin junction trees via graph cuts. In AISTATS, 2009.
[20] Cancer Genome Atlas Research Network. Comprehensive genomic characterization defines human glioblastoma genes and core pathways. Nature, 455(7216):1061–1068, October 2008.
[21] H. Liu, K. Roeder, and L. Wasserman. Stability approach to regularization selection (StARS) for high dimensional graphical models. Arxiv preprint arXiv:1006.3316, 2010.
[22] J. Yang, S. A. Mani, J. L. Donaher, S. Ramaswamy, R. A. Itzykson, C. Come, P. Savagner, I. Gitelman, A. Richardson, and R. A. Weinberg. Twist, a master regulator of morphogenesis, plays an essential role in tumor metastasis. Cell, 117(7):927–939, 2004.
[23] S. A. Mikheeva, A. M. Mikheev, A. Petit, R. Beyer, R. G. Oxford, L. Khorasani, J.-P. Maxwell, C. A. Glackin, H. Wakimoto, I. González-Herrero, et al. TWIST1 promotes invasion through mesenchymal change in human glioblastoma. Mol Cancer, 9:194, 2010.
[24] M. A. Smit, T. R. Geiger, J.-Y. Song, I. Gitelman, and D. S. Peeper. A Twist-Snail axis critical for TrkB-induced epithelial-mesenchymal transition-like transformation, anoikis resistance, and metastasis. Molecular and Cellular Biology, 29(13):3722–3737, 2009.
[25] M. J. Wainwright. Sharp thresholds for high-dimensional and noisy sparsity recovery using ℓ1-constrained quadratic programming (Lasso). IEEE Trans. Information Theory, 55:2183–2202, May 2009.
[26] S. Negahban, P. Ravikumar, M. J. Wainwright, and B. Yu. A unified framework for high-dimensional analysis of M-estimators with decomposable regularizers. Arxiv preprint arXiv:1010.2731, 2010.
Scalable kernels for graphs with continuous attributes
Aasa Feragen, Niklas Kasenburg
Machine Learning and Computational Biology Group
Max Planck Institutes Tübingen and DIKU, University of Copenhagen
{aasa,niklas.kasenburg}@diku.dk

Jens Petersen¹, Marleen de Bruijne¹,²
¹DIKU, University of Copenhagen
²Erasmus Medical Center Rotterdam
{phup,marleen}@diku.dk

Karsten Borgwardt
Machine Learning and Computational Biology Group
Max Planck Institutes Tübingen
Eberhard Karls Universität Tübingen
karsten.borgwardt@tuebingen.mpg.de
Abstract

While graphs with continuous node attributes arise in many applications, state-of-the-art graph kernels for comparing continuous-attributed graphs suffer from a high runtime complexity. For instance, the popular shortest path kernel scales as O(n^4), where n is the number of nodes. In this paper, we present a class of graph kernels with computational complexity O(n^2(m + log n + δ^2 + d)), where δ is the graph diameter, m is the number of edges, and d is the dimension of the node attributes. Due to the sparsity and small diameter of real-world graphs, these kernels typically scale comfortably to large graphs. In our experiments, the presented kernels outperform state-of-the-art kernels in terms of speed and accuracy on classification benchmark datasets.
1 Introduction
Graph-structured data appears in many application domains of machine learning, reaching from
Social Network Analysis to Computational Biology. Comparing graphs to each other is a fundamental problem in learning on graphs, and graph kernels have become an efficient and widely-used
method for measuring similarity between graphs. Highly scalable graph kernels have been proposed
for graphs with thousands and millions of nodes, both for graphs without node labels [1] and for
graphs with discrete node labels [2]. Such graphs appear naturally in applications such as natural
language processing, chemoinformatics and bioinformatics. For applications in medical image analysis, computer vision or even bioinformatics, however, continuous-valued physical measurements
such as shape, relative position or other measured node properties are often important features for
classification. An open challenge, which is receiving increased attention, is to develop a scalable
kernel on graphs with continuous-valued node attributes.
We present the GraphHopper kernel between graphs with real-valued edge lengths and any type of
node attribute, including vectors. This kernel is a convolution kernel counting sub-path similarities.
The computational complexity of this kernel is O(n^2(m + log n + δ^2 + d)), where n and m are the number of nodes and edges, respectively; δ is the graph diameter; and d is the dimension of the node attributes. Although δ = n or m = n^2 in the worst case, this is rarely the case in real-world graphs,
as is also illustrated by our experiments. We find empirically in Section 3.1 that our GraphHopper
kernel tends to scale quadratically with the number of nodes on real data.
1.1 Related work
Many popular kernels for structured data are sums of substructure kernels:

k(G, G′) = Σ_{s ∈ S} Σ_{s′ ∈ S′} k_sub(s, s′).

Here G and G′ are structured data objects such as strings, trees and graphs with classes S and S′ of substructures, and k_sub is a substructure kernel. Such k are instances of R-convolution kernels [3].
A large variety of kernels exist for structures such as strings [4, 5], finite state transducers [6] and
trees [5, 7]. For graphs in general, kernels can be sorted into categories based on the types of
attributes they can handle. The graphlet kernel [1] compares unlabeled graphs, whereas several
kernels allow node labels from a finite alphabet [2, 8]. While most kernels have a runtime that
is at least O(n^3), the Weisfeiler-Lehman kernel [2] uses efficient sorting, hashing and counting
algorithms that take advantage of repeated occurrences of node labels from the finite label alphabet,
and achieves a runtime which is at most quadratic in the number of nodes. Unfortunately, this does
not generalize to graphs with vector-valued node attributes, which are typically all distinct samples
from an infinite alphabet.
The first kernel to take advantage of non-discrete node labels was the random walk kernel [9-11].
It incorporates edge probabilities and geometric node attributes [12], but suffers from tottering [13]
and is empirically slow. Kriege et al. [14] adopt the idea of comparing matched subgraphs, including vector-valued attributes on nodes and edges. However, this kernel has a high computational
and memory cost, as we will see in Section 3. Other kernels handling non-discrete attributes use
edit-distance and subtree enumeration [15]. While none of these kernels scale well to large graphs,
the propagation kernel [16] is fast asymptotically and empirically. It translates the problem of
continuous-valued attributes to a problem of discrete-valued labels by hashing node attributes. Nevertheless, its performance depends strongly on the hashing function and in our experiments it is
outperformed in classification accuracy by kernels which do not discretize the attributes.
In problems where continuous-valued node attributes and inter-node distance d_G(v, w) along the graph G are important features, the shortest path kernel [17], defined as

k_SP(G, G′) = Σ_{v,w ∈ V} Σ_{v′,w′ ∈ V′} k_n(v, v′) · k_l(d_G(v, w), d_G′(v′, w′)) · k_n(w, w′),

performs well in classification. In particular, k_SP allows the user to choose any kernels k_n and k_l on nodes and shortest path length. However, the asymptotic runtime of k_SP is generally O(n^4), which makes it unfeasible for many real-world applications.
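To make the O(n^4) cost concrete, the quadruple sum in k_SP can be written down naively. The following Python sketch is our own illustration, not the implementation used in the experiments; the graph representation (a triple of node count, edge-length dict and attribute list) is an assumption made for the example.

```python
import math
from itertools import product

def floyd_warshall(n, edges):
    """All-pairs shortest path lengths; edges maps (u, v) -> length (undirected)."""
    d = [[math.inf] * n for _ in range(n)]
    for i in range(n):
        d[i][i] = 0.0
    for (u, v), l in edges.items():
        d[u][v] = min(d[u][v], l)
        d[v][u] = min(d[v][u], l)
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if d[i][k] + d[k][j] < d[i][j]:
                    d[i][j] = d[i][k] + d[k][j]
    return d

def sp_kernel(g1, g2, k_node, k_len):
    """Naive shortest path kernel: a sum over all node pairs of both graphs,
    hence O(n^4) kernel evaluations after the distance computation."""
    n1, e1, a1 = g1
    n2, e2, a2 = g2
    d1, d2 = floyd_warshall(n1, e1), floyd_warshall(n2, e2)
    total = 0.0
    for v, w in product(range(n1), repeat=2):
        for vp, wp in product(range(n2), repeat=2):
            if d1[v][w] < math.inf and d2[vp][wp] < math.inf:
                total += (k_node(a1[v], a2[vp])
                          * k_len(d1[v][w], d2[vp][wp])
                          * k_node(a1[w], a2[wp]))
    return total
```

Even on graphs with a few hundred nodes, the four nested loops dominate, which motivates the decomposition pursued in Section 1.2.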
1.2 Our contribution
In this paper we present a kernel which also compares shortest paths between node pairs from the two
graphs, but with a different path kernel. Instead of comparing paths via products of kernels on their
lengths and endpoints, we compare paths through kernels on the nodes encountered while ?hopping?
along shortest paths. This particular path kernel allows us to decompose the graph kernel as a
weighted sum of node kernels, initially suggesting a potential runtime as low as O(n2 d). The graph
structure is encoded in the node kernel weights, and the main algorithmic challenge becomes to
efficiently compute these weights. This is a combinatorial problem, which we solve with complexity
O(n2 (m + log n + ? 2 )). Note, moreover, that the GraphHopper kernel is parameter-free except for
the choice of node kernels.
The paper is organized as follows. In Section 2 we give short formal definitions and proceed to
defining our kernel and investigating its computational properties. Section 3 presents experimental
classification results on different datasets in comparison to state-of-the-art kernels as well as empirical runtime studies, before we conclude with a discussion of our findings in Section 4.
2 Graphs, paths and GraphHoppers
We shall compare undirected graphs G = (V, E) with edge lengths l : E → R+ and node attributes A : V → X from a set X, which can be any set with a kernel k_n; in our data X = R^d. Denote
n = |V| and m = |E|. A subtree T ⊆ G is a subgraph of G which is a tree. Such subtrees inherit node attributes and edge lengths from G by restricting the attribute and length maps A and l to the new node and edge sets, respectively. For a tree T = (V, E, r) with a root node r, let p(v) and c(v) denote the parent and the children of any v ∈ V.

Given nodes v_a, v_b ∈ V, a path π from v_a to v_b in G is defined as a sequence of nodes

π = [v_1, v_2, v_3, ..., v_n],

where v_1 = v_a, v_n = v_b and [v_i, v_{i+1}] ∈ E for all i = 1, ..., n − 1. Let π(i) = v_i denote the ith node encountered when "hopping" along the path. Given paths π and π′ from v to w and from w to u, respectively, let [π, π′] denote their composition, which is a path from v to u. Denote by l(π) the weighted length of π, given by the sum of lengths l(v_i, v_{i+1}) of edges traversed along the path, and denote by |π| the discrete length of π, defined as the number of nodes in π. The shortest path π_{ab} from v_a to v_b is defined in terms of weighted length; if no edge length function is given, set l(e) = 1 for all e ∈ E as default. The diameter δ(G) of G is the maximal number of nodes in a shortest path in G, with respect to weighted path length.
In the next few lemmas we shall prove that for a fixed source node v ∈ V, the directed edges along shortest paths from v to other nodes of G form a well-defined directed acyclic graph (DAG), that is, a directed graph with no cycles.

First of all, subpaths of shortest paths π_{vw} with source node v are shortest paths as well:

Lemma 1. [18, Lemma 24.1] If π_{1n} = [v_1, ..., v_n] is a shortest path from v_1 = v to v_n, then the path π_{1n}(1 : i) consisting of the first i nodes of π_{1n} is a shortest path from v_1 = v to v_i.
Given a source node v ∈ G, construct the directed graph G_v = (V_v, E_v) consisting of all nodes V_v from the connected component of v in G and the set E_v of all directed edges found in any shortest path from v to any given node w in G_v. Any directed walk from v in G_v is a shortest path in G:

Lemma 2. If π_{1n} is a shortest path from v_1 = v to v_n and (v_n, v_{n+1}) ∈ E_v, then [π_{1n}, [v_n, v_{n+1}]] is a shortest path from v_1 = v to v_{n+1}.

Proof. Since (v_n, v_{n+1}) ∈ E_v, there is a shortest path π_{1(n+1)} = [v_1, ..., v_n, v_{n+1}] from v_1 = v to v_{n+1}. If this path is shorter than [π_{1n}, [v_n, v_{n+1}]], then π_{1(n+1)}(1 : n) is a shortest path from v_1 = v to v_n by Lemma 1, and it must be shorter than π_{1n}. This is impossible, since π_{1n} is a shortest path.

Proposition 3. The shortest path graph G_v is a DAG.

Proof. Assume, on the contrary, that G_v contains a cycle c = [v_1, ..., v_n] where (v_i, v_{i+1}) ∈ E_v for each i = 1, ..., n − 1 and v_1 = v_n. Let π_{v1} be the shortest path from v to v_1. Using Lemma 2 repeatedly, we see that the path [π_{v1}, c] is a shortest path from v to v_n = v_1, which is impossible since the new path must be longer than the shortest path π_{v1}.
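Proposition 3 suggests a simple construction of the shortest path DAG: run Dijkstra from the source and keep exactly the directed edges that are tight, i.e. that lie on some shortest path. The sketch below is our own illustration; the adjacency-list format and the floating-point tolerance are assumptions made for the example, not taken from the paper.

```python
import heapq

def shortest_path_dag(n, adj, src):
    """Build the DAG G_src of Proposition 3. adj[u] is a list of (w, length)
    pairs; an edge (u, w) is kept iff dist[src][u] + l(u, w) == dist[src][w]."""
    dist = [float("inf")] * n
    dist[src] = 0.0
    pq = [(0.0, src)]
    while pq:  # standard Dijkstra on the undirected graph
        d, u = heapq.heappop(pq)
        if d > dist[u]:
            continue
        for w, l in adj[u]:
            if d + l < dist[w]:
                dist[w] = d + l
                heapq.heappush(pq, (dist[w], w))
    dag = [[] for _ in range(n)]
    for u in range(n):
        if dist[u] < float("inf"):
            for w, l in adj[u]:
                if abs(dist[u] + l - dist[w]) < 1e-12:  # edge on a shortest path
                    dag[u].append(w)
    return dist, dag
```

Because only tight edges are kept, every directed walk from the source in the returned structure is a shortest path, matching Lemma 2.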
2.1 The GraphHopper kernel
We define the GraphHopper kernel as a sum of path kernels k_p over the families P, P′ of shortest paths in G, G′:

k(G, G′) = Σ_{π ∈ P, π′ ∈ P′} k_p(π, π′).

In this paper, the path kernel k_p(π, π′) is a sum of node kernels k_n on nodes simultaneously encountered while hopping along paths π and π′ of equal discrete length, that is:

k_p(π, π′) = Σ_{j=1}^{|π|} k_n(π(j), π′(j)) if |π| = |π′|, and 0 otherwise.   (4)
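Equation (4) translates directly into code; a small illustrative sketch (our own, with paths given as sequences of node attributes):

```python
def path_kernel(pi1, pi2, k_node):
    """Eq. (4): sum of node kernels along two paths of equal discrete length,
    and 0 when the discrete lengths differ."""
    if len(pi1) != len(pi2):
        return 0.0
    return sum(k_node(a, b) for a, b in zip(pi1, pi2))
```

Summing this over all pairs of shortest paths would be expensive; the decomposition that follows avoids ever enumerating the paths.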
It is clear from the definition that k(G, G′) decomposes as a sum of node kernels:

k(G, G′) = Σ_{v ∈ V} Σ_{v′ ∈ V′} w(v, v′) k_n(v, v′),   (5)

where w(v, v′) counts the number of times v and v′ appear at the same hop, or coordinate, i of shortest paths π, π′ of equal discrete length |π| = |π′|. We can decompose the weight w(v, v′) as

w(v, v′) = Σ_{j=1}^{δ} Σ_{i=1}^{δ} #{(π, π′) | π(i) = v, π′(i) = v′, |π| = |π′| = j} = ⟨M(v), M(v′)⟩,
Figure 1: Top: Expansion from the graph G, to the DAG G_v̄, to a larger tree S_v̄. Bottom left: Recursive computation of the o_v̄^v. Bottom middle and right: Recursive computation of the d_r^v in a rooted tree as in Algorithm 2, and of the d_v̄^v on a DAG G_v̄ as in Algorithm 3.
where M(v) is a δ × δ matrix whose entry [M(v)]_{ij} counts how many times v appears at the ith coordinate of a shortest path in G of discrete length j, and δ = max{δ(G), δ(G′)}. More precisely,

[M(v)]_{ij} = number of times v appears as the ith node on a shortest path of discrete length j
            = Σ_{v̄ ∈ V} number of times v appears as ith node on a shortest path from v̄ of discrete length j
            = Σ_{v̄ ∈ V} D_v̄(v, j − i + 1) O_v̄(v, i).   (6)

Here D_v̄ is an n × δ matrix whose (v, i)-coordinate counts the number of directed walks with i nodes starting at v in the shortest path DAG G_v̄. The O_v̄ is an n × δ matrix whose (v, i)-coordinate counts the number of directed walks from v̄ to v in G_v̄ with i nodes. Given the matrices D_v̄ and O_v̄, we compute all M(v) by looping through all choices of source node v̄ ∈ V, adding up the contributions M_v̄ to M(v) from each v̄, as detailed in Algorithm 4.
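Once the matrices M(v) and M(v′) are available, the weights w(v, v′) = ⟨M(v), M(v′)⟩ and the kernel value in equation (5) reduce to matrix operations. The sketch below is our own illustration with numpy; it assumes the M matrices have been zero-padded to a common δ.

```python
import numpy as np

def graphhopper_from_M(M1, M2, K_node):
    """k(G, G') = sum_{v, v'} <M(v), M(v')> * k_n(v, v')  (eq. 5).
    M1, M2: lists of delta x delta count matrices, one per node;
    K_node: precomputed node kernel matrix, K_node[v, v'] = k_n(v, v')."""
    # Flatten each M(v) into a row; all pairwise Frobenius inner products
    # <M(v), M(v')> then come from a single matrix product.
    F1 = np.stack([m.ravel() for m in M1])  # n  x delta^2
    F2 = np.stack([m.ravel() for m in M2])  # n' x delta^2
    W = F1 @ F2.T                           # W[v, v'] = <M(v), M(v')>
    return float(np.sum(W * K_node))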
The vth row of O_v̄, denoted o_v̄^v, is computed recursively by message-passing from the root, as detailed in Figure 1 and Algorithm 1. Here, V_v̄^j consists of the nodes v ∈ V for which the shortest paths π_{v̄v} of highest discrete length have j nodes. Algorithm 1 sends one message of size at most δ per edge, thus has complexity O(mδ).
To compute the vth row of D_v̄, denoted d_v̄^v, we draw inspiration from [19] where the vectors d_r^v are computed easily for trees using a message-passing algorithm as follows. Let T = (V, E, r) be a tree with a designated root node r. The ith coefficient of d_r^v counts the number of paths from v in T of discrete length i, directed from the root. This is just the number of descendants of v at level i below v in T. Let ⊕ denote left-aligned addition of vectors of possibly different length, e.g.

[a, b, c] ⊕ [d, e] = [(a + d), (b + e), c].   (7)

Using ⊕, the d_r^v can be expressed recursively:

d_r^v = [1] ⊕ ⨁_{p(w)=v} [0, d_r^w].
Algorithm 1 Message-passing algorithm for computing o_v̄^v for all v, on G_v̄
1: Initialize: o_v̄^v̄ = [1]; o_v̄^v = [0] for all v ∈ V \ {v̄}.
2: for j = 1 ... δ do
3:   for v ∈ V_v̄^j do
4:     for (v, w) ∈ E_v̄ do
5:       o_v̄^w = o_v̄^w ⊕ [0, o_v̄^v]
6:     end for
7:   end for
8: end for
Algorithm 2 Recursive computation of d_r^v for all v on T = (V, E, r)
1: Initialize: d_r^v = [1] for all v ∈ V.
2: for e = (v, c(v)) ∈ E do
3:   d_r^v = d_r^v ⊕ [0, d_r^{c(v)}]
4: end for

Algorithm 3 Recursive computation of d_v̄^v for all v on G_v̄
1: Initialize: d_v̄^v = [1] for all v ∈ V.
2: for e = (v, c(v)) ∈ E_v̄ do
3:   d_v̄^v = d_v̄^v ⊕ [0, d_v̄^{c(v)}]
4: end for
The d_r^v for all v ∈ V are computed recursively, sending counters along the edges from the leaf nodes towards the root, recording the number of descendants of any node at any level, see Algorithm 2 and Figure 1. The d_r^v for all v ∈ V are computed in O(nh) time, where h is tree height, since each edge passes exactly one message of size ≤ h.

On a DAG, computing d_v̄^v is a little more complex. Note that the DAG G_v̄ generated by all shortest paths from v̄ ∈ V can be expanded into a rooted tree S_v̄ by duplicating any node with several incoming edges, see Figure 1. The tree S_v̄ contains, as a path from the root v̄ to one of the nodes labeled v in S_v̄, any shortest path from v̄ to v in G. However, the number of nodes in S_v̄ could, in theory, be exponential in n, making computation of d_v̄^v by message-passing on S_v̄ intractable. Thus, we shall compute the d_v̄^v on the DAG G_v̄ rather than on S_v̄. As on trees, the d_v̄^v in S_v̄ are given by d_v̄^v = [1] ⊕ ⨁_{(v,w) ∈ E_v̄} [0, d_v̄^w], where ⊕ is defined in (7). This observation leads to an algorithm in which each edge e ∈ E_v̄ passes exactly one vector of size ≤ δ + 1 in the direction of the root v̄, starting at the leaves of the DAG G_v̄ and computing updated descendant vectors for each receiving node. See Algorithm 3 and Figure 1. The complexity of Algorithm 3, which computes d_v̄^v for all v ∈ V, is O(|E_v̄| δ) ⊆ O(mδ).
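The left-aligned addition ⊕ of equation (7) and the descendant-count recursion of Algorithm 3 can be sketched compactly. The following Python illustration is our own; it assumes the DAG is given as child lists together with a topological order from the source.

```python
def oplus(a, b):
    """Left-aligned addition of count vectors, eq. (7):
    [a, b, c] + [d, e] -> [a + d, b + e, c]."""
    if len(a) < len(b):
        a, b = b, a
    return [x + (b[i] if i < len(b) else 0) for i, x in enumerate(a)]

def descendant_counts(dag, topo_order):
    """Algorithm 3 sketch: d[v][i] counts directed walks with i + 1 nodes
    starting at v in the shortest path DAG. dag[v] lists the children of v;
    traversing topo_order in reverse processes children before parents."""
    d = {v: [1] for v in topo_order}
    for v in reversed(topo_order):
        for w in dag[v]:
            d[v] = oplus(d[v], [0] + d[w])
    return d
```

Each edge contributes one ⊕ of a vector of length at most δ + 1, matching the O(mδ) bound stated above.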
2.2 Computational complexity analysis
Given the w(v, v′) and the k_n(v, v′) for all v ∈ V and v′ ∈ V′, the kernel can be computed in O(n^2) time. If we assume that each node kernel k_n(v, v′) can be computed in O(d) time (as is the case with many standard kernels including Gaussian and linear kernels), then all k_n(v, v′) can be precomputed in O(n^2 d) time. Given the matrices M(v) and M(v′) for all v ∈ V, v′ ∈ V′, each w(v, v′) requires O(δ^2) time, giving O(n^2 δ^2) complexity for computing all weights w(v, v′).

Note that Algorithm 4 computes M(v) for all v ∈ G simultaneously. Adding the time complexities of the lines in each iteration of the algorithm as given on the right hand side of the individual lines in Algorithm 4, the total complexity of one iteration of Algorithm 4 is

O(mn + n log n) + mδ + mδ + nδ^2 + nδ^2 = O(n(m + log n + δ^2)),
Algorithm 4 Algorithm simultaneously computing all M(v)
1: Initialize: M(v) = 0 ∈ R^{δ×δ} for each v ∈ V.
2: for all v̄ ∈ V do
3:   compute shortest path DAG G_v̄ rooted at v̄ using Dijkstra   (O(mn + n log n))
4:   compute D_v̄(v) for each v ∈ V   (O(mδ))
5:   compute O_v̄(v) for each v ∈ V   (O(mδ))
6:   for each v ∈ V, compute the δ × δ matrix M_v̄(v) given by [M_v̄(v)]_{ij} = D_v̄(v, j − i + 1) O_v̄(v, i) when i ≤ j, and 0 otherwise   (O(nδ^2))
7:   update M(v) = M(v) + M_v̄(v) for each v ∈ V   (O(nδ^2))
8: end for
giving total complexity O(n^2(m + log n + δ^2)) for computing M(v) for all v ∈ V using Algorithm 4. It follows that the total complexity of computing k(G, G′) is

O(n^2 + n^2 d + n^2 δ^2 + n^2 δ^2 + n^2(m + log n + δ^2)) = O(n^2(m + log n + d + δ^2)).

When computing the kernel matrix K_ij = k(G_i, G_j) for a set {G_i}_{i=1}^N of graphs with N > m + n + δ^2, note that Algorithm 4 only needs to be run once for every graph G_i. Thus, the average complexity of computing one kernel value out of all K_ij becomes

(1/N^2) (N O(n^2(m + log n + δ^2)) + N^2 O(n^2 + n^2 d + δ^2)) ⊆ O(n^2 d).
3 Experiments
Classification experiments were made with the proposed GraphHopper kernel and several alternatives: the propagation kernel PROP [16], the connected subgraph matching kernel CSM [14] and the shortest path kernel SP [17] all use continuous-valued attributes. In addition, we benchmark against the Weisfeiler-Lehman kernel WL [2], which only uses discrete node attributes. All kernels were implemented in Matlab, except for CSM, where a Java implementation was supplied by N. Kriege. For the WL kernel, the Matlab implementation available from [20] was used. For the GraphHopper and SP kernels, shortest paths were computed using the BGL package [21] implemented in C++. The PROP kernel was implemented in two different versions, both using the total variation hash function, as the Hellinger distance is only directly applicable to positive vector-valued attributes. For PROP-diff, labels were propagated with the diffusion scheme, whereas in PROP-WL labels were first discretised via hashing and then the WL kernel [2] update was used. The bin width of the hash function was set to 10^-5 as suggested in [16]. The PROP-diff, PROP-WL and the WL kernel were each run with 10 iterations. In the CSM kernel, the clique size parameter was set to k = 5. Our kernel implementations and datasets (with the exception of AIRWAYS) can be found at http://image.diku.dk/aasa/software.php.
Classification experiments were made on four datasets: ENZYMES, PROTEINS, AIRWAYS and SYNTHETIC. ENZYMES and PROTEINS are sets of proteins from the BRENDA database [22] and the dataset of Dobson and Doig [23], respectively. Proteins are represented by graphs as follows. Nodes represent secondary structure elements (SSEs), which are connected whenever they are neighbors either in the amino acid sequence or in 3D space [24]. Each node has a discrete type attribute (helix, sheet or turn) and an attribute vector containing physical and chemical measurements including length of the SSE in Ångström (Å), distance between the Cα atom of its first and last residue in Å, its hydrophobicity, van der Waals volume, polarity and polarizability. ENZYMES comes with the task of classifying the enzymes to one out of 6 EC top-level classes, whereas PROTEINS comes with the task of classifying into enzymes and non-enzymes. AIRWAYS is a set of airway trees extracted from CT scans of human lungs [25, 26]. Each node represents an airway branch, attributed with its length. Edges represent adjacencies between airway bronchi. AIRWAYS comes with the task of classifying airways into healthy individuals and patients suffering from Chronic Obstructive Pulmonary Disease (COPD). SYNTHETIC is a set of synthetic graphs based on a random graph G with 100 nodes and 196 edges, whose nodes are endowed with normally distributed scalar attributes sampled from N(0, 1). Two classes A and B each with 150 attributed graphs were generated from G by randomly rewiring edges and permuting node attributes. Each graph in A was generated by rewiring 5 edges and permuting 10 node attributes, and each graph in B was generated by rewiring 10 edges and permuting 5 node attributes, after which noise from N(0, 0.45^2) was added to every node attribute in every graph. Detailed metrics of the datasets are found in Table 1.
Both GraphHopper, SP and CSM depend on freely selected node kernels for continuous attributes, giving modeling flexibility. For the ENZYMES, AIRWAYS and SYNTHETIC datasets, a Gaussian node kernel k_n(v, v′) = e^{−γ‖A(v)−A(v′)‖^2} was used on the continuous-valued attribute, with γ = 1/d. For the PROTEINS dataset, the node kernel was a product of a Gaussian kernel with γ = 1/d and a Dirac kernel on the continuous- and discrete-valued node attributes, respectively. For the WL kernel, discrete node labels were used when available (in ENZYMES and PROTEINS); otherwise node degree was used as node label.
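For concreteness, the node kernels described above can be sketched as follows (an illustrative sketch of our own; function names are not from the paper):

```python
import math

def gaussian_node_kernel(a, b, gamma):
    """k_n(v, v') = exp(-gamma * ||A(v) - A(v')||^2); gamma = 1/d in the experiments."""
    sq = sum((x - y) ** 2 for x, y in zip(a, b))
    return math.exp(-gamma * sq)

def protein_node_kernel(a_cont, b_cont, a_type, b_type, gamma):
    """PROTEINS variant: product of a Gaussian kernel on the continuous
    attributes and a Dirac kernel on the discrete SSE type."""
    same_type = 1.0 if a_type == b_type else 0.0
    return gaussian_node_kernel(a_cont, b_cont, gamma) * same_type
```

The Dirac factor zeroes out all node pairs whose discrete types differ, so only SSEs of the same type (helix, sheet or turn) contribute to the kernel sum.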
Classification was done using a support vector machine (SVM) [27]. The SVM slack parameter was
trained using nested cross validation on 90% of the entire dataset, and the classifier was tested on the
                           ENZYMES    PROTEINS   AIRWAYS    SYNTHETIC
Number of nodes            32.6       39.1       221        100
Number of edges            46.7       72.8       220        196
Graph diameter             12.8       11.6       21.1       7
Node attribute dimension   18         1          1          1
Dataset size               600        1113       1966       300
Class size                 6 × 100    663/450    980/986    150/150

Table 1: Data statistics: Average node and edge counts and graph diameter, dataset and class sizes.
Kernel           ENZYMES               PROTEINS              AIRWAYS                SYNTHETIC
GraphHopper      69.6 ± 1.3 (12′10″)   74.1 ± 0.5 (2.8 h)    66.8 ± 0.5 (1 d 7 h)   86.6 ± 1.0 (12′10″)
PROP-diff [16]   37.2 ± 2.2 (13″)      73.3 ± 0.4 (26″)      63.5 ± 0.5 (4′12″)     46.1 ± 1.9 (1′21″)
PROP-WL [16]     48.5 ± 1.3 (1′9″)     73.1 ± 0.8 (2′40″)    61.5 ± 0.6 (8′17″)     44.5 ± 1.2 (1′52″)
SP [17]          71.0 ± 1.3 (3 d)      75.5 ± 0.8 (7.7 d)    OUT OF TIME            85.4 ± 2.1 (3.4 d)
CSM [14]         69.4 ± 0.8            OUT OF MEMORY         OUT OF MEMORY          OUT OF TIME
WL [2]           48.0 ± 0.9 (18″)      75.6 ± 0.5 (2′51″)    62.0 ± 0.6 (7′43″)     43.3 ± 2.3 (2′8″)

Table 2: Mean classification accuracies with standard deviation for all experiments, significantly best accuracies in bold. OUT OF MEMORY means that 100 GB memory was not enough. OUT OF TIME indicates that the kernel computation did not finish within 30 days. Runtimes are given in parentheses; see Section 3.1 for further runtime studies. Above, x′y″ means x minutes, y seconds.
remaining 10%. This experiment was repeated 10 times. Mean accuracies with standard deviations
are reported in Table 2. For each kernel and dataset, runtime is given in parentheses in Table 2.
Runtimes for the CSM kernel are not included, as this implementation was in another language.
3.1 Runtime experiments
An empirical evaluation of the runtime dependence on the parameters n, m and δ is found in Figure 2. In the top left panel, average kernel evaluation runtime was measured on datasets of 10 random graphs with 10, 20, 30, ..., 500 nodes each, and a density of 0.4. Density is defined as m / (n(n−1)/2), i.e. the fraction of edges in the graph compared to the number of edges in the complete graph. In the top right panel, the number of nodes was kept constant at n = 100, while datasets of 10 random graphs were generated with 110, 120, ..., 500 edges each. Development of both average kernel evaluation runtime and graph diameter is shown. In the bottom panels, the relationship between runtime and graph diameter is shown on subsets of 100 and 200 of the real AIRWAYS and PROTEINS datasets, respectively, for each diameter.
3.2 Results and discussion
Our experiments on ENZYMES and AIRWAYS clearly demonstrate that there are real-world classification problems where continuous-valued attributes make a big contribution to classification performance. Our experiments on SYNTHETIC demonstrate how the more discrete types of kernels,
PROP and WL, are unable to classify the graphs. Already on SYNTHETIC, which is a modest-sized
set of modest-sized graphs, CSM and SP are too computationally demanding to be practical, and on
AIRWAYS, which is a larger set of larger trees, they cannot finish in 30 days. The CSM kernel [14] has asymptotic runtime O(kn^{k+1}), where k is a parameter bounding the size of subgraphs considered by the kernel, and thus in order to study subgraphs of relevant size, its runtime will be at least as high as that of the shortest path kernel. Moreover, the CSM kernel requires the computation of a product graph which, for graphs with hundreds of nodes, can cause memory problems, which we also find in our experiments. The PROP kernel is fast; however, the reason for the computational efficiency of PROP is that it is not really a kernel for continuous-valued features: it is a kernel for discrete features combined with a hashing scheme to discretize continuous-valued features. In our experiments, these hashing schemes do not prove powerful enough to compete in classification accuracy with the kernels that really do use the continuous-valued features.
While ENZYMES and AIRWAYS benefit significantly from including continuous attributes, our
experiments on PROTEINS demonstrate that there are also classification problems where the most
important information is just as well summarized in a discrete feature: here our combination of
Figure 2: Dependence of runtime on n, δ and m on synthetic and real graph datasets.
continuous and discrete node features gives the same classification performance as the more efficient WL kernel using only discrete attributes.
We proved in Section 2.2 that the GraphHopper kernel has asymptotic runtime O(n^2(d + m + log n + δ^2)), and that the average runtime for one kernel evaluation in a Gram matrix is O(n^2 d) when the number of graphs exceeds m + n + δ^2. Our experiments in Section 3.1 empirically demonstrate how runtime depends on the parameters n, m and δ. As m and δ are dependent parameters, the runtime
dependence on m and δ is not straightforward. An increase in the number of edges m typically leads to an increased graph diameter δ for small m, but for more densely connected graphs, δ will decrease with increasing m as seen in the top right panel of Figure 2. A consequence of this is that graph diameter rarely becomes very large compared to m. The same plot also shows that the runtime increases slowly with increasing m. Our runtime experiments clearly illustrate that while in the worst case scenario we could have m = n^2 or δ = n, this rarely happens in real-world graphs, which are often sparse and with small diameter. Our experiments also illustrate an average runtime quadratic in n on large datasets, as expected based on complexity analysis.
4 Conclusion
We have defined the GraphHopper kernel for graphs with any type of node attributes, presented
an efficient algorithm for computing it, and demonstrated that it outperforms state-of-the-art graph
kernels on real and synthetic data in terms of classification accuracy and/or speed. The kernels are
able to take advantage of any kind of node attributes, as they can integrate any user-defined node
kernel. Moreover, the kernel is parameter-free except for the node kernels.
This kernel opens the door to new application domains such as computer vision or medical imaging,
in which kernels that work solely on graphs with discrete attributes were too restrictive so far.
Acknowledgements

The authors wish to thank Nils Kriege for sharing his code for computing the CSM kernel, Nino Shervashidze and Chloé-Agathe Azencott for sharing their preprocessed chemoinformatics data, and Asger Dirksen and Jesper Pedersen for sharing the AIRWAYS dataset. This work is supported by the Danish Research Council for Independent Research | Technology and Production, the Knud Højgaard Foundation, AstraZeneca, The Danish Council for Strategic Research, the Netherlands Organisation for Scientific Research, and the DFG project "Kernels for Large, Labeled Graphs (LaLa)". The research of Professor Dr. Karsten Borgwardt was supported by the Alfried Krupp Prize for Young University Teachers of the Alfried Krupp von Bohlen und Halbach-Stiftung.
References

[1] N. Shervashidze, S.V.N. Vishwanathan, T. Petri, K. Mehlhorn, and K.M. Borgwardt. Efficient graphlet kernels for large graph comparison. JMLR, 5:488-495, 2009.
[2] N. Shervashidze, P. Schweitzer, E.J. van Leeuwen, K. Mehlhorn, and K.M. Borgwardt. Weisfeiler-Lehman graph kernels. JMLR, 12:2539-2561, 2011.
[3] D. Haussler. Convolution kernels on discrete structures. Technical report, Department of Computer Science, University of California at Santa Cruz, 1999.
[4] M. Collins and N. Duffy. Convolution kernels for natural language. In NIPS, pages 625-632, 2001.
[5] S.V.N. Vishwanathan and A.J. Smola. Fast kernels for string and tree matching. In NIPS, pages 569-576, 2002.
[6] C. Cortes, P. Haffner, and M. Mohri. Rational kernels: Theory and algorithms. JMLR, 5:1035-1062, 2004.
[7] D. Kimura and H. Kashima. Fast computation of subpath kernel for trees. In ICML, 2012.
[8] P. Mahé and J.-P. Vert. Graph kernels based on tree patterns for molecules. Machine Learning, 75:3-35, 2009.
[9] H. Kashima, K. Tsuda, and A. Inokuchi. Marginalized kernels between labeled graphs. In ICML, pages 321-328, 2003.
[10] T. Gärtner, P. Flach, and S. Wrobel. On graph kernels: Hardness results and efficient alternatives. In Learning Theory and Kernel Machines, volume 2777 of LNCS, pages 129-143, 2003.
[11] S.V.N. Vishwanathan, N.N. Schraudolph, R.I. Kondor, and K.M. Borgwardt. Graph kernels. JMLR, 11:1201-1242, 2010.
[12] F.R. Bach. Graph kernels between point clouds. In ICML, pages 25-32, 2008.
[13] P. Mahé, N. Ueda, T. Akutsu, J.-L. Perret, and J.-P. Vert. Extensions of marginalized graph kernels. In ICML, 2004.
[14] N. Kriege and P. Mutzel. Subgraph matching kernels for attributed graphs. In ICML, 2012.
[15] B. Gaüzère, L. Brun, and D. Villemin. Two new graphs kernels in chemoinformatics. Pattern Recognition Letters, 15:2038-2047, 2012.
[16] M. Neumann, N. Patricia, R. Garnett, and K. Kersting. Efficient graph kernels by randomization. In ECML/PKDD (1), pages 378-393, 2012.
[17] K.M. Borgwardt and H.-P. Kriegel. Shortest-path kernels on graphs. In ICDM, 2005.
[18] T.H. Cormen, C.E. Leiserson, R.L. Rivest, and C. Stein. Introduction to Algorithms (3rd ed.). MIT Press, 2009.
[19] A. Feragen, J. Petersen, D. Grimm, A. Dirksen, J.H. Pedersen, K. Borgwardt, and M. de Bruijne. Geometric tree kernels: Classification of COPD from airway tree geometry. In IPMI, 2013.
[20] N. Shervashidze. Graph kernels code, http://mlcb.is.tuebingen.mpg.de/Mitarbeiter/Nino/Graphkernels/.
[21] D. Gleich. MatlabBGL, http://dgleich.github.io/matlab-bgl/.
[22] I. Schomburg, A. Chang, C. Ebeling, M. Gremse, C. Heldt, G. Huhn, and D. Schomburg. Brenda, the enzyme database: updates and major new developments. Nucleic Acids Research, 32:431-433, 2004.
[23] P.D. Dobson and A.J. Doig. Distinguishing enzyme structures from non-enzymes without alignments. Journal of Molecular Biology, 330(4):771-783, 2003.
[24] K.M. Borgwardt, C.S. Ong, S. Schönauer, S.V.N. Vishwanathan, A.J. Smola, and H.-P. Kriegel. Protein function prediction via graph kernels. Bioinformatics, 21(suppl 1):i47-i56, 2005.
[25] J. Pedersen, H. Ashraf, A. Dirksen, K. Bach, H. Hansen, P. Toennesen, H. Thorsen, J. Brodersen, B. Skov, M. Døssing, J. Mortensen, K. Richter, P. Clementsen, and N. Seersholm. The Danish randomized lung cancer CT screening trial - overall design and results of the prevalence round. J Thorac Oncol, 4(5):608-614, May 2009.
[26] J. Petersen, M. Nielsen, P. Lo, Z. Saghir, A. Dirksen, and M. de Bruijne. Optimal graph based segmentation using flow lines with application to airway wall segmentation. In IPMI, LNCS, pages 49-60, 2011.
[27] C.-C. Chang and C.-J. Lin. LIBSVM: A library for support vector machines. ACM Trans. Int. Syst. and Tech., 2:27:1-27:27, 2011. Software available at http://www.csie.ntu.edu.tw/~cjlin/libsvm.
Near-optimal Anomaly Detection in Graphs
using Lovász Extended Scan Statistic
Akshay Krishnamurthy
Computer Science Department
Carnegie Mellon University
Pittsburgh, PA 15213
[email protected]
James Sharpnack
Machine Learning Department
Carnegie Mellon University
Pittsburgh, PA 15213
[email protected]
Aarti Singh
Machine Learning Department
Carnegie Mellon University
Pittsburgh, PA 15213
[email protected]
Abstract
The detection of anomalous activity in graphs is a statistical problem that arises in
many applications, such as network surveillance, disease outbreak detection, and
activity monitoring in social networks. Beyond its wide applicability, graph structured anomaly detection serves as a case study in the difficulty of balancing computational complexity with statistical power. In this work, we develop from first
principles the generalized likelihood ratio test for determining if there is a well
connected region of activation over the vertices in the graph in Gaussian noise.
Because this test is computationally infeasible, we provide a relaxation, called the
Lovász extended scan statistic (LESS) that uses submodularity to approximate the
intractable generalized likelihood ratio. We demonstrate a connection between
LESS and maximum a-posteriori inference in Markov random fields, which provides us with a poly-time algorithm for LESS. Using electrical network theory,
we are able to control type 1 error for LESS and prove conditions under which
LESS is risk consistent. Finally, we consider specific graph models, the torus, k-nearest neighbor graphs, and ε-random graphs. We show that on these graphs our
results provide near-optimal performance by matching our results to known lower
bounds.
1
Introduction
Detecting anomalous activity refers to determining if we are observing merely noise (business as
usual) or if there is some signal in the noise (anomalous activity). Classically, anomaly detection
focused on identifying rare behaviors and aberrant bursts in activity over a single data source or
channel. With the advent of large surveillance projects, social networks, and mobile computing,
data sources often are high-dimensional and have a network structure. With this in mind, statistics
needs to comprehensively address the detection of anomalous activity in graphs. In this paper, we
will study the detection of elevated activity in a graph with Gaussian noise.
In reality, very little is known about the detection of activity in graphs, despite a variety of real-world
applications such as activity detection in social networks, network surveillance, disease outbreak detection, biomedical imaging, sensor network detection, gene network analysis, environmental monitoring and malware detection. Sensor networks might be deployed for detecting nuclear substances,
water contaminants, or activity in video surveillance. By exploiting the sensor network structure
(based on proximity), one can detect activity in networks when the activity is very faint. Recent
theoretical contributions in the statistical literature[1, 2] have detailed the inherent difficulty of such
a testing problem but have positive results only under restrictive conditions on the graph topology.
By combining knowledge from high-dimensional statistics, graph theory and mathematical programming, the characterization of detection algorithms over any graph topology by their statistical
properties is possible.
Aside from the statistical challenges, the computational complexity of any proposed algorithms
must be addressed. Due to the combinatorial nature of graph based methods, problems can easily
shift from having polynomial-time algorithms to having running times exponential in the size of
the graph. The applications of graph structured inference require that any method be scalable to
large graphs. As we will see, the ideal statistical procedure will be intractable, suggesting that
approximation algorithms and relaxations are necessary.
1.1
Problem Setup
Consider a connected, possibly weighted, directed graph G defined by a set of vertices V (|V| = p)
and directed edges E (|E| = m), which are ordered pairs of vertices. Furthermore, the edges may be
assigned weights, {W_e}_{e∈E}, that determine the relative strength of the interactions of the adjacent
vertices. For each vertex, i ∈ V, we assume that there is an observation y_i that has a Normal
distribution with mean x_i and variance 1. This is called the graph-structured normal means problem,
and we observe one realization of the random vector
    y = x + ε,        (1)

where x ∈ ℝ^p and ε ∼ N(0, I_{p×p}). The signal x will reflect the assumption that there is an active
cluster (C ⊆ V) in the graph, by making x_i > 0 if i ∈ C and x_i = 0 otherwise. Furthermore,
the allowable clusters, C, must have a small boundary in the graph. Specifically, we assume that
there are parameters μ, ρ (possibly dependent on p) such that the class of graph-structured activation
patterns x is given as follows:

    X = { x : x = (μ/√|C|) 1_C, C ∈ C },    C = { C ⊆ V : out(C) ≤ ρ }

Here out(C) = Σ_{(u,v)∈E} W_{u,v} I{u ∈ C, v ∉ C} is the total weight of edges leaving the cluster C.
In other words, the set of activated vertices C has a small cut size in the graph G. While we assume
that the noise variance is 1 in (1), this is equivalent to the more general model in which Eε_i² = σ²
with σ known: if we wanted to consider known σ², then we would apply all our algorithms to y/σ
and replace μ with μ/σ in all of our statements. For this reason, we call μ the signal-to-noise ratio
(SNR), and proceed with σ = 1.
In graph-structured activation detection we are concerned with statistically testing the null against
the alternative hypotheses,
    H0 : y ∼ N(0, I)
    H1 : y ∼ N(x, I), x ∈ X        (2)
H0 represents business as usual (such as sensors returning only noise) while H1 encompasses all of
the foreseeable anomalous activity (an elevated group of noisy sensor observations). Let a test be a
mapping T(y) ∈ {0, 1}, where 1 indicates that we reject the null. It is imperative that we control
both the probability of false alarm, and the false acceptance of the null. To this end, we define our
measure of risk to be
    R(T) = E_0[T] + sup_{x∈X} E_x[1 − T]

where E_x denotes the expectation with respect to y ∼ N(x, I). These terms are also known as the
probability of type 1 and type 2 error respectively. This setting should not be confused with the
Bayesian testing setup (e.g. as considered in [2, 3]) where the patterns, x, are drawn at random.
We will say that H0 and H1 are asymptotically distinguished by a test, T , if in the setting of large
graphs, lim_{p→∞} R(T) = 0. If such a test exists then H0 and H1 are asymptotically distinguishable,
otherwise they are asymptotically indistinguishable (which occurs whenever the risk does not tend
to 0). We will be characterizing regimes for μ in which our test asymptotically distinguishes H0
from H1 .
Throughout the study, let the edge-incidence matrix of G be ∇ ∈ ℝ^{m×p} such that for e = (v, w) ∈
E, ∇_{e,v} = −W_e, ∇_{e,w} = W_e, and ∇_{e,u} = 0 elsewhere. For directed graphs, vertex degrees refer to d_v =
out({v}). Let ‖·‖ denote the ℓ2 norm, ‖·‖_1 the ℓ1 norm, and (x)_+ the positive components
of the vector x. Let [p] = {1, . . . , p}, and we will be using the o notation, namely if non-negative
sequences satisfy a_n/b_n → 0 then a_n = o(b_n) and b_n = ω(a_n).
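The notation above can be made concrete with a small sketch (NumPy; the helper name and the toy triangle graph are ours, not from the paper):

```python
import numpy as np

def incidence_matrix(edges, weights, p):
    """Edge-incidence matrix: for e = (v, w), row e holds -W_e at v and
    +W_e at w, so (Dx)_e = W_e * (x_w - x_v)."""
    D = np.zeros((len(edges), p))
    for e, ((v, w), we) in enumerate(zip(edges, weights)):
        D[e, v] = -we
        D[e, w] = we
    return D

# Directed 3-cycle 0 -> 1 -> 2 -> 0 with unit weights (a toy example).
edges = [(0, 1), (1, 2), (2, 0)]
D = incidence_matrix(edges, [1.0, 1.0, 1.0], p=3)

# Constant vectors lie in the kernel of D, and for the indicator x = 1_C of
# C = {0} the l1 norm of (Dx)_+ picks up the single unit of edge weight
# crossing the boundary of C in this symmetric cycle.
x = np.array([1.0, 0.0, 0.0])
boundary = np.clip(D @ x, 0.0, None).sum()
```

The positive-part construction here is the same quantity that will reappear as the relaxed cut constraint ‖(∇x)_+‖_1 in Section 4.2.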
1.2
Contributions
Section 3 highlights what is known about the hypothesis testing problem 2, particularly we provide
a regime for μ in which H0 and H1 are asymptotically indistinguishable. In section 4.1, we derive
the graph scan statistic from the generalized likelihood ratio principle which we show to be a computationally intractable procedure. In section 4.2, we provide a relaxation of the graph scan statistic
(GSS), the Lovász extended scan statistic (LESS), and we show that it can be computed with successive minimum s–t cut programs (a graph cut that separates a source vertex from a sink vertex).
In section 5, we give our main result, Theorem 5, that provides a type 1 error control for both test
statistics, relating their performance to electrical network theory. In section 6, we show that GSS
and LESS can asymptotically distinguish H0 and H1 in signal-to-noise ratios close to the lowest
possible for some important graph models. All proofs are in the Appendix.
2
Related Work
Graph structured signal processing. There have been several approaches to signal processing over
graphs. Markov random fields (MRF) provide a succinct framework in which the underlying signal
is modeled as a draw from an Ising or Potts model [4, 5]. We will return to MRFs in a later section,
as it will relate to our scan statistic. A similar line of research is the use of kernels over graphs. The
study of kernels over graphs began with the development of diffusion kernels [6], and was extended
through Green's functions on graphs [7]. While these methods are used to estimate binary signals
(where x_i ∈ {0, 1}) over graphs, little is known about their statistical properties and their use in
signal detection. To the best of our knowledge, this paper is the first connection made between
anomaly detection and MRFs.
Normal means testing. Normal means testing in high-dimensions is a well established and fundamental problem in statistics. Much is known when H1 derives from a smooth function space such as
Besov spaces or Sobolev spaces[8, 9]. Only recently have combinatorial structures such as graphs
been proposed as the underlying structure of H1 . A significant portion of the recent work in this area
[10, 3, 1, 2] has focused on incorporating structural assumptions on the signal, as a way to mitigate
the effect of high-dimensionality and also because many real-life problems can be represented as
instances of the normal means problem with graph-structured signals (see, for an example, [11]).
Graph scan statistics. In spatial statistics, it is common, when searching for anomalous activity
to scan over regions in the spatial domain, testing for elevated activity[12, 13]. There have been
scan statistics proposed for graphs, most notably the work of [14] in which the authors scan over
neighborhoods of the graphs defined by the graph distance. Other work has been done on the theory
and algorithms for scan statistics over specific graph models, but these are not easily generalizable to
arbitrary graphs [15, 1]. More recently, it has been found that scanning over all well connected
regions of a graph can be computationally intractable, and so approximations to the intractable
likelihood-based procedure have been studied [16, 17]. We follow in this line of work, with a
relaxation to the intractable generalized likelihood ratio test.
3
A Lower Bound and Known Results
In this section we highlight the previously known results about the hypothesis testing problem (2).
This problem was studied in [17], in which the authors demonstrated the following lower bound,
which derives from techniques developed in [3].
Theorem 1. [17] Hypotheses H0 and H1 defined in Eq. (2) are asymptotically indistinguishable if
    μ = o( √( min{ ρ/d_max, √p } · log( p d_max² / ρ² ) ) )
where dmax is the maximum degree of graph G.
Now that a regime of asymptotic indistinguishability has been established, it is instructive to consider
test statistics that do not take the graph into account (viz. the statistics are unaffected by a change
in the graph structure). Certainly, if we are in a situation where a naive procedure performs near-optimally, then our study is not warranted. As it turns out, there is a gap between the performance
of the natural unstructured tests and the lower bound in Theorem 1.
Proposition 2. [17] (1) The thresholding test statistic, max_{v∈[p]} |y_v|, asymptotically distinguishes
H0 from H1 if μ = ω(√(|C| log(p/|C|))).
(2) The sum test statistic, Σ_{v∈[p]} y_v, asymptotically distinguishes H0 from H1 if μ = ω(√(p/|C|)).
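Both unstructured baselines are one-liners; a minimal sketch (the function names are ours), with a Monte Carlo null quantile standing in for an exact threshold:

```python
import numpy as np

def threshold_stat(y):
    # Baseline (1): largest absolute observation, ignoring the graph.
    return float(np.max(np.abs(y)))

def sum_stat(y):
    # Baseline (2): aggregate activity over all vertices.
    return float(np.sum(y))

# Either statistic is compared against a null quantile, e.g. by Monte Carlo:
rng = np.random.default_rng(0)
null_draws = [threshold_stat(rng.standard_normal(100)) for _ in range(500)]
crit = float(np.quantile(null_draws, 0.95))  # reject H0 if threshold_stat(y) > crit
```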
As opposed to these naive tests one can scan over all clusters in C performing individual likelihood
ratio tests. This is called the scan statistic, and it is known to be a computationally intractable
combinatorial optimization. Previously, two alternatives to the scan statistic have been developed:
the spectral scan statistic [16], and one based on the uniform spanning tree wavelet basis [17]. The
former is indeed a relaxation of the ideal, computationally intractable, scan statistic, but in many
important graph topologies, such as the lattice, provides sub-optimal statistical performance. The
uniform spanning tree wavelets in effect allow one to scan over a subclass of the class, C, but tend
to provide worse performance (as we will see in section 6) than that presented in this work. The
theoretical results in [17] are similar to ours, but they suffer additional log-factors.
4
Method
As we have noted the fundamental difficulty of the hypothesis testing problem is the composite
nature of the alternative hypothesis. Because the alternative is indexed by sets, C ∈ C(ρ), with a
low cut size, it is reasonable that the test statistic that we will derive results from a combinatorial
optimization program. In fact, we will show we can express the generalized likelihood ratio (GLR)
statistic in terms of a modular program with submodular constraints. This will turn out to be a
possibly NP-hard program, as a special case of such programs is the well known knapsack problem
[18]. With this in mind, we provide a convex relaxation, using the Lovász extension, to the ideal
GLR statistic. This relaxation conveniently has a dual objective that can be evaluated with a binary
Markov random field energy minimization, which is a well understood program. We will reserve
the theoretical statistical analysis for the following section.
Submodularity. Before we proceed, we will introduce the reader to submodularity and the Lovász
extension. (A very nice introduction to submodularity can be found in [19].) For any set, which we
may as well take to be the vertex set [p], we say that a function F : {0, 1}^p → ℝ is submodular
if for any A, B ⊆ [p], F(A) + F(B) ≥ F(A ∪ B) + F(A ∩ B). (We will interchangeably use
the bijection between 2^{[p]} and {0, 1}^p defined by C ↔ 1_C.) In this way, a submodular function
experiences diminishing returns, as additions to large sets tend to be less dramatic than additions to
small sets. But while this diminishing returns phenomenon is akin to concave functions, for optimization purposes submodularity acts like convexity, as it admits efficient minimization procedures.
Moreover, for every submodular function there is a Lovász extension f : [0, 1]^p → ℝ defined in the
following way: for x ∈ [0, 1]^p let x_{j_i} denote the i-th largest element of x; then

    f(x) = x_{j_1} F({j_1}) + Σ_{i=2}^p ( F({j_1, . . . , j_i}) − F({j_1, . . . , j_{i−1}}) ) x_{j_i}
Submodular functions as a class are similar to convex functions in that they are closed under addition and
non-negative scalar multiplication. The following facts about Lovász extensions will be important.
Proposition 3. [19] Let F be submodular and f be its Lovász extension. Then f is convex, f(x) =
F(x) if x ∈ {0, 1}^p, and

    min{ F(x) : x ∈ {0, 1}^p } = min{ f(x) : x ∈ [0, 1]^p }
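The definition above transcribes directly into code; a sketch of our own (F is supplied as a Python callable on frozensets):

```python
import numpy as np

def lovasz_extension(F, x):
    """Evaluate the Lovasz extension of a set function F at x by sorting the
    coordinates of x in decreasing order and telescoping F along the chain."""
    x = np.asarray(x, dtype=float)
    order = np.argsort(-x)
    value, prev, S = 0.0, 0.0, set()
    for j in order:
        S.add(int(j))
        cur = F(frozenset(S))
        value += (cur - prev) * x[j]
        prev = cur
    return value

# Modular example F(S) = |S|: its extension is the linear map sum(x), and on
# binary inputs the extension agrees with F itself (Proposition 3).
card = lambda S: float(len(S))
```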
We are now sufficiently prepared to develop the test statistics that will be the focus of this paper.
4.1
Graph Scan Statistic
It is instructive, when faced with a class of probability distributions, indexed by subsets C ∈ 2^{[p]},
to think about what techniques we would use if we knew the correct set C ∈ C (which is often
called oracle information). One would in this case be only testing the null hypothesis H0 : x = 0
against the simple alternative H1 : x ∝ 1_C. In this situation, we would employ the likelihood
ratio test because, by the Neyman–Pearson lemma, it is the uniformly most powerful test statistic.
The maximum likelihood estimator for x is 1_C 1_C^⊤ y/|C| (the MLE of μ is 1_C^⊤ y/√|C|) and the
likelihood ratio turns out to be

    exp{ −(1/2) ‖y − 1_C 1_C^⊤ y/|C|‖² } / exp{ −(1/2) ‖y‖² } = exp{ (1_C^⊤ y)² / (2|C|) }

Hence, the log-likelihood ratio is proportional to (1_C^⊤ y)²/|C|, and thresholding this at z²_{1−α/2} gives
us a size α test.
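For a known C, the resulting oracle statistic is one line (a sketch with our own naming):

```python
import numpy as np

def oracle_stat(y, C):
    """Oracle statistic for a known cluster C: (1_C^T y)^2 / |C|, which is
    proportional to the log-likelihood ratio derived above."""
    s = float(np.sum(np.asarray(y)[list(C)]))
    return s * s / len(C)

# Under H0 the statistic is the square of a standard normal, so comparing it
# to the squared normal quantile yields a size-alpha test.
```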
This reasoning has been subject to the assumption that we had oracle knowledge of C. A
natural statistic, when C is unknown, is the generalized log-likelihood ratio (GLR) defined by
max_C (1_C^⊤ y)²/|C| s.t. C ∈ C. We will work with the graph scan statistic (GSS),

    ŝ = max_C  (1_C^⊤ y)/√|C|  s.t.  C ∈ C(ρ) = {C : out(C) ≤ ρ}        (3)

which is nearly equivalent to the GLR. (We can in fact evaluate ŝ for y and −y, taking a maximum
and obtain the GLR, but statistically this is nearly the same.) Notice that there is no guarantee that
the program above is computationally feasible. In fact, it belongs to a class of programs, specifically
modular programs with submodular constraints that is known to contain NP-hard instantiations,
such as the ratio cut program and the knapsack program [18]. Hence, we are compelled to form a
relaxation of the above program, that will with luck provide a feasible algorithm.
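Before relaxing, it helps to see why (3) is expensive: a brute-force implementation must enumerate every vertex subset. The sketch below is our own naive code (not an algorithm from this paper), runnable only on toy graphs:

```python
import itertools
import numpy as np

def gss_bruteforce(y, edges, rho):
    """Graph scan statistic by exhaustive search over every nonempty vertex
    subset with out(C) <= rho. Exponential in p, so usable only on tiny
    graphs -- which is exactly why a tractable relaxation is needed."""
    y = np.asarray(y, dtype=float)
    p = len(y)
    best = -np.inf
    for r in range(1, p + 1):
        for C in itertools.combinations(range(p), r):
            Cset = set(C)
            out = sum(1.0 for (u, v) in edges if u in Cset and v not in Cset)
            if out <= rho:
                best = max(best, y[list(C)].sum() / np.sqrt(r))
    return best

# Path 0 - 1 - 2 encoded with both edge directions, unit weights, rho = 2.
y = np.array([1.0, 2.0, -1.0])
edges = [(0, 1), (1, 0), (1, 2), (2, 1)]
s_hat = gss_bruteforce(y, edges, rho=2)  # maximized by C = {0, 1}
```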
4.2
Lov?asz Extended Scan Statistic
It is common, when faced with combinatorial optimization programs that are computationally infeasible, to relax the domain from the discrete {0, 1}^p to a continuous domain, such as [0, 1]^p.
Generally, the hope is that optimizing the relaxation will approximate the combinatorial program
well. First we require that we can relax the constraint out(C) ≤ ρ to the hypercube [0, 1]^p. This
will be accomplished by replacing it with its Lovász extension ‖(∇x)_+‖_1 ≤ ρ. We then form the
relaxed program, which we will call the Lovász extended scan statistic (LESS),

    l̂ = max_{t∈[p]} max_x  (x^⊤ y)/√t  s.t.  x ∈ X(ρ, t) = {x ∈ [0, 1]^p : ‖(∇x)_+‖_1 ≤ ρ, 1^⊤x ≤ t}        (4)
We will find that not only can this be solved with a convex program, but the dual objective is a
minimum binary Markov random field energy program. To this end, we will briefly go over binary
Markov random fields, which we will find can be used to solve our relaxation.
Binary Markov Random Fields. Much of the previous work on graph structured statistical procedures assumes a Markov random field (MRF) model, in which there are discrete labels assigned to
each vertex in [p], and the observed variables {y_v}_{v∈[p]} are conditionally independent given these
labels. Furthermore, the prior distribution on the labels is drawn according to an Ising model (if
the labels are binary) or a Potts model otherwise. The task is to then compute a Bayes rule from
the posterior of the MRF. The majority of the previous work assumes that we are interested in the
maximum a-posteriori (MAP) estimator, which is the Bayes rule for the 0/1-loss. This can generally
be written in the form,
    min_{x∈{0,1}^p}  Σ_{v∈[p]} −l_v(x_v | y_v) + Σ_{v≠u∈[p]} W_{v,u} I{x_v ≠ x_u}
where l_v is a data-dependent log-likelihood. Such programs are called graph-representable in [20],
and are known to be solvable in the binary case with s-t graph cuts. Thus, by the min-cut max-flow
theorem the value of the MAP objective can be obtained by computing a maximum flow. More
recently, a dual-decomposition algorithm has been developed in order to parallelize the computation
of the MAP estimator for binary MRFs [21, 22].
We are now ready to state our result regarding the dual form of the LESS program, (4).
Proposition 4. Let λ0, λ1 ≥ 0, and define the dual function of the LESS,

    g(λ0, λ1) = max_{x∈{0,1}^p}  y^⊤x − λ0 1^⊤x − λ1 ‖∇x‖_0

The LESS estimator is equal to the following minimum of convex optimizations:

    l̂ = max_{t∈[p]} (1/√t) min_{λ0,λ1≥0}  g(λ0, λ1) + λ0 t + λ1 ρ

g(λ0, λ1) is the objective of an MRF MAP problem, which is poly-time solvable with s–t graph cuts.
5
Theoretical Analysis
So far we have developed a lower bound to the hypothesis testing problem, shown that some common detectors do not meet this guarantee, and developed the Lovász extended scan statistic from
first principles. We will now provide a thorough statistical analysis of the performance of LESS.
Previously, electrical network theory, specifically the effective resistances of edges in the graph,
has been useful in describing the theoretical performance of a detector derived from uniform spanning tree wavelets [17]. As it turns out the performance of LESS is also dictated by the effective
resistances of edges in the graph.
Effective Resistance. Effective resistances have been extensively studied in electrical network theory [23]. We define the combinatorial Laplacian of G to be L = D − W (D_{v,v} = out({v}) is the
diagonal degree matrix). A potential difference is any z ∈ ℝ^{|E|} such that it satisfies Kirchoff's potential law: the total potential difference around any cycle is 0. Algebraically, this means that ∃x ∈ ℝ^p
such that ∇x = z. The Dirichlet principle states that any solution to the following program gives
an absolute potential x that satisfies Kirchoff's law:

    min_x  x^⊤ L x  s.t.  x_S = v_S

for sources/sinks S ⊆ [p] and some voltage constraints v_S ∈ ℝ^{|S|}. By Lagrangian calculus, the
solution to the above program is given by x = L^† v, where v is 0 over S^C and v_S over S, and †
indicates the Moore–Penrose pseudoinverse. The effective resistance between a source v ∈ V and
a sink w ∈ V is the potential difference required to create a unit flow between them. Hence, the
effective resistance between v and w is r_{v,w} = (δ_v − δ_w)^⊤ L^† (δ_v − δ_w), where δ_v is the Dirac delta
function. There is a close connection between effective resistances and random spanning trees. The
uniform spanning tree (UST) is a random spanning tree, chosen uniformly at random from the set of
all distinct spanning trees. The foundational Matrix-Tree theorem [24, 23] states that the probability
of an edge, e, being included in the UST is equal to the edge weight times the effective resistance
W_e r_e. The UST is an essential component of the proof of our main theorem, in that it provides a
mechanism for unravelling the graph while still preserving the connectivity of the graph.
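The quantities above are straightforward to compute numerically; a sketch using the Laplacian pseudoinverse (NumPy, with a unit-weight triangle as our own toy example):

```python
import numpy as np

def effective_resistances(W):
    """All-pairs effective resistances of a symmetric weighted adjacency
    matrix W, via the Moore-Penrose pseudoinverse of the Laplacian."""
    L = np.diag(W.sum(axis=1)) - W
    Lp = np.linalg.pinv(L)
    d = np.diag(Lp)
    # r_{vw} = (delta_v - delta_w)^T L^+ (delta_v - delta_w)
    return d[:, None] + d[None, :] - 2.0 * Lp

# Unit-weight triangle: each pair sees a unit edge in parallel with a
# two-edge path, so r = (1 * 2) / (1 + 2) = 2/3.
W = np.ones((3, 3)) - np.eye(3)
R = effective_resistances(W)
```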
We are now in a position to state the main theorem, which will allow us to control the type 1 error
(the probability of false alarm) of both the GSS and its relaxation the LESS.
Theorem 5. Let r_C = max{ Σ_{(u,v)∈E : u∈C, v∉C} W_{u,v} r_{(u,v)} : C ∈ C } be the maximum effective resistance of the boundary of a cluster C. The following statements hold under the null hypothesis
H0 : x = 0:
1. The graph scan statistic, with probability at least 1 − δ, is smaller than

    ŝ ≤ √( (r_C + 1/2) log p ) + √(2 log(p − 1)) + √(2 log 2) + √(2 log(1/δ))        (5)
2. The Lovász extended scan statistic, with probability at least 1 − δ, is smaller than

    l̂ ≤ √( [ 1 + √( log(2p) / (2 (r_C + (1/2) log p) log p) ) ]² · 2 (r_C + (1/2) log p) log p ) + √(2 log p) + √(2 log(1/δ))        (6)
The implication of Theorem 5 is that the size of the test may be controlled at level δ by selecting
thresholds given by (5) and (6) for GSS and LESS respectively. Notice that the control provided
for the LESS is not significantly different from that of the GSS. This is highlighted by the following
Corollary, which combines Theorem 5 with a type 2 error bound to produce an information theoretic
guarantee for the asymptotic performance of the GSS and LESS.
Corollary 6. Both the GSS and the LESS asymptotically distinguish H0 from H1 if

    μ = ω( max{ √(r_C log p), log p } )
To summarize we have established that the performance of the GSS and the LESS are dictated by
the effective resistances of cuts in the graph. While the condition in Cor. 6 may seem mysterious,
the guarantee in fact nearly matches the lower bound for many graph models as we now show.
6
Specific Graph Models
Theorem 5 shows that the effective resistance of the boundary plays a critical role in characterizing
the distinguishability region of both the GSS and LESS. On specific graph families, we can
compute the effective resistances precisely, leading to concrete detection guarantees that we will see
nearly match the lower bound in many cases. Throughout this section, we will only be working
with undirected, unweighted graphs.
Recall that Corollary 6 shows that an SNR of √(r_C log p) is sufficient, while Theorem 1 shows
that √((ρ/d_max) log p) is necessary for detection. Thus if we can show that r_C ≈ ρ/d_max, we
would establish the near-optimality of both the GSS and LESS. Foster's theorem lends evidence to
the fact that the effective resistances should be much smaller than the cut size:
Theorem 7. (Foster's Theorem [25, 26])

    Σ_{e∈E} r_e = p − 1
Roughly speaking, the effective resistance of an edge selected uniformly at random is ≈ (p − 1)/m =
d_ave^{−1}, so the effective resistance of a cut is ≈ ρ/d_ave. This intuition can be formalized for specific
models, and this improvement by the average degree brings us much closer to the lower bound.
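Foster's theorem is easy to verify numerically; the 8-vertex graph below is our own toy example:

```python
import numpy as np

# Numerical check of Foster's theorem: the (weighted) sum of effective
# resistances over the edges of a connected graph equals p - 1.
p = 8
W = np.zeros((p, p))
for i in range(p):                       # unit-weight cycle edges
    W[i, (i + 1) % p] = W[(i + 1) % p, i] = 1.0
W[0, 4] = W[4, 0] = 1.0                  # two chords
W[2, 6] = W[6, 2] = 1.0

L = np.diag(W.sum(axis=1)) - W
Lp = np.linalg.pinv(L)
foster_sum = sum(
    W[u, v] * (Lp[u, u] + Lp[v, v] - 2.0 * Lp[u, v])
    for u in range(p) for v in range(u + 1, p) if W[u, v] > 0
)
# foster_sum equals p - 1 = 7 up to numerical error.
```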
6.1 Edge Transitive Graphs
An edge transitive graph, G, is one for which there is a graph automorphism mapping e_0 to e_1 for
any pair of edges e_0, e_1. Examples include the l-dimensional torus, the cycle, and the complete
graph K_p. The existence of these automorphisms implies that every edge has the same effective
resistance, and by Foster's theorem, we know that these resistances are exactly (p − 1)/m. Moreover,
since edge transitive graphs must be d-regular, we know that m = Θ(pd), so that r_e = Θ(1/d). Thus,
as a corollary to Theorem 5, we have that both the GSS and LESS are near-optimal (optimal modulo
logarithmic factors whenever ρ/d ≲ p) on edge transitive graphs:
Corollary 8. Let G be an edge-transitive graph with common degree d. Then both the GSS and
LESS distinguish H0 from H1 provided that:
μ/σ = Ω(max{√((ρ/d) log p), log p}).
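As a quick numerical check of the edge-transitive case (an illustrative sketch of ours, not part of the paper), the cycle C_p has m = p edges and common degree d = 2, and every edge indeed has effective resistance (p − 1)/m:

```python
import numpy as np

p = 9
W = np.zeros((p, p))
for i in range(p):                        # cycle C_p: d = 2, m = p edges
    W[i, (i + 1) % p] = W[(i + 1) % p, i] = 1.0

L = np.diag(W.sum(axis=1)) - W
Lplus = np.linalg.pinv(L)

# Each cycle edge is 1 ohm in parallel with a (p-1)-ohm path around the
# other side of the cycle, giving r_e = (p - 1) / p for every edge.
for i in range(p):
    j = (i + 1) % p
    r_e = Lplus[i, i] + Lplus[j, j] - 2 * Lplus[i, j]
    assert abs(r_e - (p - 1) / p) < 1e-10
```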
6.2 Random Geometric Graphs
Another popular family of graphs are those constructed from a set of points in ℝ^D drawn according
to some density. These graphs have inherent randomness stemming from sampling of the density,
and thus earn the name random geometric graphs. The two most popular such graphs are symmetric
k-nearest neighbor graphs and ε-graphs. We characterize the distinguishability region for both.
In both cases, a set of points z_1, . . . , z_p are drawn i.i.d. from a density f supported over ℝ^D, or a
subset of ℝ^D. Our results require mild regularity conditions on f, which, roughly speaking, require
that supp(f) is topologically equivalent to the cube and has density bounded away from zero (see
[27] for a precise definition). To form a k-nearest neighbor graph G_k, we associate each vertex i
with a point z_i and we connect vertices i, j if z_i is amongst the k nearest neighbors, in ℓ_2, of z_j
or vice versa. In the ε-graph, G_ε, we connect vertices i, j if ‖z_i − z_j‖ ≤ ε for some metric ‖·‖.
The relationship r_e ≈ 1/d, which we used for edge-transitive graphs, was derived in Corollaries 8
and 9 in [27]. The precise concentration arguments, which have been done before [17], lead to the
following corollary regarding the performance of the GSS and LESS on random geometric graphs:
Figure 1: A comparison of detection procedures: spectral scan statistic (SSS), UST wavelet detector
(Wavelet), and LESS. The graphs used are the square 2D torus, kNN graph (k ≈ p^{1/4}), and ε-graph
(with ε ≈ p^{−1/3}); with μ = 4, 4, 3 respectively, p = 225, and |C| ≈ p^{1/2}.
Corollary 9. Let G_k be a k-NN graph with k/p → 0, k(k/p)^{2/D} → ∞, and suppose the density
f meets the regularity conditions in [27]. Then both the GSS and LESS distinguish H0 from H1
provided that:
μ/σ = Ω(max{√((ρ/k) log p), log p}).
If G_ε is an ε-graph with ε → 0, pε^{D+2} → ∞, then both distinguish H0 from H1 provided that:
μ/σ = Ω(max{√((ρ/(pε^D)) log p), log p}).
The corollary follows immediately from Corollary 6 and the proofs in [17]. Since under the regularity
conditions the maximum degree is Θ(k) and Θ(pε^D) in k-NN and ε-graphs respectively, the
corollary establishes the near-optimality (again provided that ρ/d ≲ p) of both test statistics.
We performed some experiments using the MRF-based algorithm outlined in Prop. 4. Each experiment
uses graphs with 225 vertices, and we report the true positive rate versus the false positive
rate as the threshold varies (also known as the ROC). For each graph model, LESS provides gains
over the spectral scan statistic [16] and the UST wavelet detector [17]; the gains are significant
except for the ε-graph, where they are more modest.
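The ROC reported here can be computed directly from samples of a test statistic under each hypothesis. The sketch below is a generic illustration with synthetic Gaussian scores standing in for the actual GSS/LESS values; the function name and the score distributions are our own assumptions:

```python
import numpy as np

def roc(null_scores, alt_scores):
    """FPR/TPR pairs as the rejection threshold tau sweeps the score range."""
    taus = np.sort(np.concatenate([null_scores, alt_scores]))[::-1]
    fpr = np.array([(null_scores > t).mean() for t in taus])
    tpr = np.array([(alt_scores > t).mean() for t in taus])
    return fpr, tpr

rng = np.random.default_rng(0)
null_s = rng.normal(0.0, 1.0, 500)   # statistic under H0
alt_s = rng.normal(2.0, 1.0, 500)    # statistic under H1 (elevated mean)
fpr, tpr = roc(null_s, alt_s)
# Trapezoidal area under the curve; near 1 means the test separates well.
auc = np.sum(np.diff(fpr) * (tpr[1:] + tpr[:-1]) / 2.0)
```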
7 Conclusions
To summarize, while Corollary 6 characterizes the performance of the GSS and LESS in terms of
effective resistances, in many specific graph models this can be translated into near-optimal detection
guarantees for these test statistics. We have demonstrated that the LESS provides guarantees similar
to those of the computationally intractable generalized likelihood ratio test (GSS). Furthermore, the
LESS can be solved through successive graph cuts by relating it to MAP estimation in an MRF.
Future work includes using these concepts for localizing the activation, making the program robust
to missing data, and extending the analysis to non-Gaussian errors.
Acknowledgments
This research is supported in part by AFOSR under grant FA9550-10-1-0382 and NSF under grant
IIS-1116458. AK is supported in part by a NSF Graduate Research Fellowship. We would like to
thank Sivaraman Balakrishnan for his valuable input in the theoretical development of the paper.
References
[1] E. Arias-Castro, E. J. Candes, and A. Durand. Detection of an anomalous cluster in a network. The Annals
of Statistics, 39(1):278–304, 2011.
[2] L. Addario-Berry, N. Broutin, L. Devroye, and G. Lugosi. On combinatorial testing problems. The Annals
of Statistics, 38(5):3063–3092, 2010.
[3] E. Arias-Castro, E. J. Candes, H. Helgason, and O. Zeitouni. Searching for a trail of evidence in a maze.
The Annals of Statistics, 36(4):1726–1757, 2008.
[4] V. Cevher, C. Hegde, M. F. Duarte, and R. G. Baraniuk. Sparse signal recovery using Markov random
fields. Technical report, DTIC Document, 2009.
[5] P. Ravikumar and J. D. Lafferty. Quadratic programming relaxations for metric labeling and Markov
random field MAP estimation. 2006.
[6] R. I. Kondor and J. Lafferty. Diffusion kernels on graphs and other discrete input spaces. In Proceedings
of the Nineteenth International Conference on Machine Learning, pages 315–322. Citeseer, 2002.
[7] A. Smola and R. Kondor. Kernels and regularization on graphs. Learning Theory and Kernel Machines,
pages 144–158, 2003.
[8] Y. I. Ingster. Minimax testing of nonparametric hypotheses on a distribution density in the Lp metrics.
Theory of Probability and its Applications, 31:333, 1987.
[9] Y. I. Ingster and I. A. Suslina. Nonparametric Goodness-of-Fit Testing under Gaussian Models, volume 169.
Springer Verlag, 2003.
[10] E. Arias-Castro, D. Donoho, and X. Huo. Near-optimal detection of geometric objects by fast multiscale
methods. IEEE Trans. Inform. Theory, 51(7):2402–2425, 2005.
[11] L. Jacob, P. Neuvial, and S. Dudoit. Gains in power from structured two-sample tests of means on graphs.
Arxiv preprint arXiv:1009.5173, 2010.
[12] Daniel B. Neill and Andrew W. Moore. Rapid detection of significant spatial clusters. In Proceedings
of the tenth ACM SIGKDD international conference on Knowledge discovery and data mining, pages
256–265. ACM, 2004.
[13] Deepak Agarwal, Andrew McGregor, Jeff M. Phillips, Suresh Venkatasubramanian, and Zhengyuan Zhu.
Spatial scan statistics: approximations and performance study. In Proceedings of the 12th ACM SIGKDD
international conference on Knowledge discovery and data mining, pages 24–33. ACM, 2006.
[14] Carey E. Priebe, John M. Conroy, David J. Marchette, and Youngser Park. Scan statistics on enron graphs.
Computational & Mathematical Organization Theory, 11(3):229–247, 2005.
[15] Chih-Wei Yi. A unified analytic framework based on minimum scan statistics for wireless ad hoc and
sensor networks. Parallel and Distributed Systems, IEEE Transactions on, 20(9):1233–1245, 2009.
[16] J. Sharpnack, A. Rinaldo, and A. Singh. Changepoint detection over graphs with the spectral scan statistic.
Arxiv preprint arXiv:1206.0773, 2012.
[17] James Sharpnack, Akshay Krishnamurthy, and Aarti Singh. Detecting activations over graphs using
spanning tree wavelet bases. arXiv preprint arXiv:1206.0937, 2012.
[18] Christos H. Papadimitriou and Kenneth Steiglitz. Combinatorial Optimization: Algorithms and Complexity.
Courier Dover Publications, 1998.
[19] Francis Bach. Convex analysis and optimization with submodular functions: a tutorial. arXiv preprint
arXiv:1010.4207, 2010.
[20] Vladimir Kolmogorov and Ramin Zabin. What energy functions can be minimized via graph cuts? Pattern
Analysis and Machine Intelligence, IEEE Transactions on, 26(2):147–159, 2004.
[21] Petter Strandmark and Fredrik Kahl. Parallel and distributed graph cuts by dual decomposition. In
Computer Vision and Pattern Recognition (CVPR), 2010 IEEE Conference on, pages 2085–2092. IEEE,
2010.
[22] David Sontag, Amir Globerson, and Tommi Jaakkola. Introduction to dual decomposition for inference.
Optimization for Machine Learning, 1, 2011.
[23] R. Lyons and Y. Peres. Probability on Trees and Networks. Book in preparation, 2000.
[24] G. Kirchhoff. Ueber die Auflösung der Gleichungen, auf welche man bei der Untersuchung der linearen
Vertheilung galvanischer Ströme geführt wird. Annalen der Physik, 148(12):497–508, 1847.
[25] R. M. Foster. The average impedance of an electrical network. Contributions to Applied Mechanics
(Reissner Anniversary Volume), pages 333–340, 1949.
[26] P. Tetali. Random walks and the effective resistance of networks. Journal of Theoretical Probability,
4(1):101–109, 1991.
[27] Ulrike Von Luxburg, Agnes Radl, and Matthias Hein. Hitting and commute times in large graphs are
often misleading. ReCALL, 2010.
[28] R. Tyrell Rockafellar. Convex Analysis, volume 28. Princeton University Press, 1997.
[29] Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest, and Clifford Stein. Introduction to Algorithms.
MIT Press, 2001.
[30] Wai Shing Fung and Nicholas J. A. Harvey. Graph sparsification by edge-connectivity and random spanning
trees. arXiv preprint arXiv:1005.0265, 2010.
[31] Michel Ledoux. The Concentration of Measure Phenomenon, volume 89. American Mathematical Soc.,
2001.
Analyzing the Harmonic Structure
in Graph-Based Learning
Xiao-Ming Wu1, Zhenguo Li3, and Shih-Fu Chang1,2
1 Department of Electrical Engineering, Columbia University
2 Department of Computer Science, Columbia University
3 Huawei Noah's Ark Lab, Hong Kong
{xmwu, sfchang}@ee.columbia.edu, [email protected]
Abstract
We find that various well-known graph-based models exhibit a common important
harmonic structure in its target function: the value of a vertex is approximately the weighted
average of the values of its adjacent neighbors. Understanding this structure and analyzing the
loss defined over it help reveal important properties of the target function over a graph. In this paper, we show that
the variation of the target function across a cut can be upper and lower bounded by
the ratio of its harmonic loss and the cut cost. We use this to develop an analytical
tool and analyze five popular graph-based models: absorbing random walks, partially absorbing random walks, hitting times, pseudo-inverse of the graph Laplacian, and eigenvectors of the Laplacian matrices. Our analysis sheds new insights
into several open questions related to these models, and provides theoretical justifications and guidelines for their practical use. Simulations on synthetic and real
datasets confirm the potential of the proposed theory and tool.
1 Introduction
Various graph-based models, regardless of application, aim to learn a target function on graphs that
well respects the graph topology. This has been done under different motivations such as Laplacian
regularization [4, 5, 6, 14, 24, 25, 26], random walks [17, 19, 23, 26], hitting and commute times
[10], p-resistance distances [1], pseudo-inverse of the graph Laplacian [10], eigenvectors of the
Laplacian matrices [18, 20], diffusion maps [8], to name a few. Whether these models can capture
the graph structure faithfully, or whether their target functions possess desirable properties over
the graph, remain unclear. Understanding of such issues can be of great value in practice and has
attracted much attention recently [16, 22, 23].
Several important observations about learning on graphs have been reported. Nadler et al. [16]
showed that the target functions of Laplacian regularized methods become flat as the number of
unlabeled points increases, but they also observed that a good classification can still be obtained
if an appropriate threshold is used. An explanation to this would be interesting. Von Luxburg
et al. [22] proved that commute and hitting times are dominated by the local structures in large
graphs, ignoring the global patterns. Does this mean these metrics are flawed? Interestingly, despite
this finding, the pseudo-inverse of graph Laplacian, known as the kernel matrix of commute times,
consistently performs superior in collaborative filtering [10]. In spectral clustering, the eigenvectors
of the normalized graph Laplacian are more desired than those of the un-normalized one [20, 21].
Also for the recently proposed partially absorbing random walks [23], certain setting of absorption
rates seems better than others. While these issues arise from seemingly unrelated contexts, we will
show in this paper that they can be addressed in a single framework.
Our starting point is the discovery of a common structure hidden in the target functions of various
graph models. That is, the value of a vertex is approximately the weighted average of the values
of its adjacent neighbors. We call this structure the harmonic structure for its resemblance to the
harmonic function [9, 26]. It naturally arises from the first step analysis of random walk models,
and, as will be shown in this paper, implicitly exists in other methods such as pseudo-inverse of the
graph Laplacian and eigenvectors of the Laplacian matrices. The target functions of these models
are characterized by their harmonic loss, a quantitative notion introduced in this paper to measure
the discrepancy of a target function f on cuts of graphs. The variations of f across cuts can then be
upper and lower bounded by the ratio of its harmonic loss and the cut cost. As long as the harmonic
loss varies slowly, the graph conductance dominates the variations of f ? it will remain smooth in
a dense area but vary sharply otherwise. Models possessing such properties successfully capture
the cluster structures, and as shown in Sec. 4, lead to superior performance in practical applications
including classification and retrieval.
This novel perspective allows us to give a unified treatment of graph-based models. We use this tool
to study five popular models: absorbing random walks, partially absorbing random walks, hitting
times, pseudo-inverse of the graph Laplacian, and eigenvectors of the Laplacian matrices. Our
analysis provides new theoretical understandings into these models, answers related open questions,
and helps to correct and justify their practical use. The key message conveyed in our results is that
various existing models enjoying the harmonic structure are actually capable of capturing the global
graph topology, and understanding of this structure can guide us in applying them properly.
2 Analysis
Let us first define some notation. In this paper, we consider graphs which are connected, undirected,
weighted, and without self-loops. Denote by G = (V, W) a graph with n vertices V and a symmetric
non-negative affinity matrix W = [w_ij] ∈ ℝ^{n×n} (w_ii = 0). Denote by d_i = ∑_j w_ij the degree of
vertex i, by D = diag(d_1, d_2, . . . , d_n) the degree matrix, and by L = D − W the graph Laplacian
[7]. The conductance of a subset S ⊂ V of vertices is defined as Φ(S) = w(S, S̄) / min(d(S), d(S̄)),
where w(S, S̄) = ∑_{i∈S, j∈S̄} w_ij is the cut cost between S and its complement S̄, and
d(S) = ∑_{i∈S} d_i is the volume of S. For any i ∉ S, we write i ∼ S if there is an edge between
vertex i and the set S.
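These definitions translate directly into code. The helper below (an illustrative sketch of ours, assuming a dense affinity matrix W stored as a NumPy array) computes the conductance Φ(S):

```python
import numpy as np

def conductance(W, S):
    """Phi(S) = w(S, Sbar) / min(d(S), d(Sbar)) for a vertex subset S."""
    n = W.shape[0]
    S = list(S)
    Sbar = [j for j in range(n) if j not in S]
    cut = W[np.ix_(S, Sbar)].sum()           # cut cost w(S, Sbar)
    d = W.sum(axis=1)                        # vertex degrees
    return cut / min(d[S].sum(), d[Sbar].sum())

# 4-cycle: cutting it in half crosses 2 unit edges; each side has volume 4.
W = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
print(conductance(W, [0, 1]))   # -> 0.5
```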
Definition 2.1 (Harmonic loss). The harmonic loss of f : V → ℝ on any S ⊆ V is defined as:
L_f(S) := ∑_{i∈S} d_i ( f(i) − ∑_{j∼i} (w_ij/d_i) f(j) ) = ∑_{i∈S} ( d_i f(i) − ∑_{j∼i} w_ij f(j) ).   (1)
Note that L_f(S) = ∑_{i∈S} (Lf)(i). By definition, the harmonic loss can be negative. However, as
we shall see below, it is always non-negative on superlevel sets.
The following lemma shows that the harmonic loss couples the cut cost and the discrepancy of the
function across the cut. This observation will serve as the foundation of our analysis in this paper.
Lemma 2.2. L_f(S) = ∑_{i∈S, j∈S̄} w_ij (f(i) − f(j)). In particular, L_f(V) = 0.
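Lemma 2.2 is easy to check numerically. The sketch below (our own illustration on a random weighted graph) computes the harmonic loss both as ∑_{i∈S} (Lf)(i) from Definition 2.1 and in the cut form of the lemma, and verifies that they agree and that L_f(V) = 0:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8
W = rng.random((n, n))
W = (W + W.T) / 2.0
np.fill_diagonal(W, 0.0)                 # symmetric affinities, no self-loops
L = np.diag(W.sum(axis=1)) - W           # graph Laplacian L = D - W
f = rng.random(n)                        # arbitrary function on the vertices

S = [0, 1, 2]
Sbar = [i for i in range(n) if i not in S]

loss_def = (L @ f)[S].sum()              # Definition 2.1: sum_{i in S} (Lf)(i)
loss_cut = sum(W[i, j] * (f[i] - f[j]) for i in S for j in Sbar)

assert np.isclose(loss_def, loss_cut)    # Lemma 2.2
assert np.isclose((L @ f).sum(), 0.0)    # L_f(V) = 0
```

The agreement holds because the terms w_ij (f(i) − f(j)) with both endpoints inside S cancel in pairs by symmetry of W.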
In practice, to examine the variation of f on a graph, one need not examine every subset of vertices,
of which there are exponentially many. Instead, it suffices to consider its variation on the superlevel
sets defined as follows.
Definition 2.3 (Superlevel set). For any function f : V → ℝ on a graph and a scalar c ∈ ℝ, the
set {i | f(i) ≥ c} is called a superlevel set of f with level c.
W.l.o.g., we assume the vertices are sorted such that f(1) ≥ f(2) ≥ · · · ≥ f(n−1) ≥ f(n). The
subset S_i := {1, . . . , i} is the superlevel set with level f(i) if f(i) > f(i+1). For convenience, we
still call S_i a superlevel set of f even if f(i) = f(i+1). In this paper, we will mainly examine the
variation of f on its n superlevel sets S_1, . . . , S_n. Our first observation is that the harmonic loss on
each superlevel set is non-negative, stated as follows.
Lemma 2.4. L_f(S_i) ≥ 0, i = 1, . . . , n.
Based on the notion of superlevel sets, it becomes legitimate to talk about the continuity of a function
on graphs, which we formally define as follows.
Definition 2.5 (Continuity). For any function f : V → ℝ, we call it left-continuous if i ∼ S_{i−1},
i = 2, . . . , n; we call it right-continuous if i ∼ S̄_i, i = 1, . . . , n−1; we call it continuous if
i ∼ S_{i−1} and i ∼ S̄_i, i = 2, . . . , n−1. Particularly, f is called left-continuous, right-continuous,
or continuous at vertex i if i ∼ S_{i−1}, i ∼ S̄_i, or both, respectively.
Proposition 2.6. For any function f : V → ℝ and any vertex 1 < i < n: 1) if (Lf)(i) < 0, then
i ∼ S_{i−1}, i.e., f is left-continuous at i; 2) if (Lf)(i) > 0, then i ∼ S̄_i, i.e., f is right-continuous at i;
3) if (Lf)(i) = 0 and f(i−1) > f(i) > f(i+1), then i ∼ S_{i−1} and i ∼ S̄_i, i.e., f is continuous at i.
The variation of f can be characterized by the following upper and lower bounds.
Theorem 2.7 (Dropping upper bound). For i = 1, . . . , n − 1,
f(i) − f(i+1) ≤ L_f(S_i) / w(S_i, S̄_i) = L_f(S_i) / (Φ(S_i) min(d(S_i), d(S̄_i))).   (2)
Theorem 2.8 (Dropping lower bound). For i = 1, . . . , n − 1,
f(u) − f(v) ≥ L_f(S_i) / w(S_i, S̄_i) = L_f(S_i) / (Φ(S_i) min(d(S_i), d(S̄_i))),   (3)
where u := argmax_{j∈S_i, j∼S̄_i} f(j) and v := argmin_{j∈S̄_i, j∼S_i} f(j).
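Both statements, together with Lemma 2.4, can be verified mechanically on any graph: sort the vertices by f and, for each superlevel set S_i, compare the drop f(i) − f(i+1) against L_f(S_i)/w(S_i, S̄_i). A sketch on a random weighted graph (our own illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10
W = rng.random((n, n))
W = (W + W.T) / 2.0
np.fill_diagonal(W, 0.0)
L = np.diag(W.sum(axis=1)) - W
f = rng.random(n)

order = np.argsort(-f)                   # sort so f(1) >= ... >= f(n)
fs = f[order]
for i in range(1, n):
    S, Sbar = order[:i], order[i:]
    cut = W[np.ix_(S, Sbar)].sum()       # w(S_i, Sbar_i)
    loss = (L @ f)[S].sum()              # harmonic loss on superlevel set S_i
    assert loss >= -1e-12                          # Lemma 2.4
    assert fs[i - 1] - fs[i] <= loss / cut + 1e-9  # Theorem 2.7 upper bound
```

Both assertions hold for any f on a connected weighted graph: every crossing term w_jk (f(j) − f(k)) is non-negative and at least w_jk (f(i) − f(i+1)).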
The key observations are two-fold. First, for any function f on a graph, as long as its harmonic
loss L_f(S_i) varies slowly on the superlevel sets, i.e., f is harmonic almost everywhere, the graph
conductance Φ(S_i) will dominate the variation of f. In particular, by Theorem 2.7, f(i+1) drops
little if Φ(S_i) is large, whereas by Theorem 2.8, a big gap exists across the cut if Φ(S_i) is small (see
Sec. 3.1 for illustration). Second, the continuity (either left, right, or both) of f ensures that its
variations conform with the graph connectivity, i.e., points with similar values of f tend to be connected.
This is a desired property because a "discontinuous" function that changes alternately among
different clusters can hardly describe the graph. These observations can guide us in identifying "good"
functions that encode the global structure of graphs, as will be shown in the next section.
3 Examples
With the tool developed in Sec. 2, in this section, we study five popular graph models arising from
different contexts including SSL, retrieval, recommendation, and clustering. For each model, we
show its target function in harmonic forms, quantify its harmonic loss, analyze its dropping bounds,
and provide corrections or justifications for its use.
3.1 Absorbing Random Walks
The first model we examine is the seminal Laplacian regularization method [26] proposed for SSL.
While it has a nice interpretation in terms of absorbing random walks, with the labeled points being
absorbing states, it was argued in [16] that this method might be ill-posed for large unlabeled data
in high dimension (≥ 2) because the target function is extremely flat and thus seems problematic
for classification. [1] further connected this argument with the resistance distance on graphs, pointing
out that the classification is biased toward the labeled points with larger degrees. Here we show that
scheme would resolve the raised issue.
For simplicity, we consider the binary classification setting with one label in each class. Denote by
f : V → ℝ the absorption probability vector from every point to the positive labeled point. Assume
the vertices are sorted such that 1 = f(1) > f(2) ≥ · · · ≥ f(n−1) > f(n) = 0 (vertex 1 is labeled
positive and vertex n is labeled negative). By the first step analysis of the random walk,
f(i) = ∑_{k∼i} (w_ik / d_i) f(k), for i = 2, . . . , n − 1.   (4)
Our first observation is that the harmonic loss of f is constant w.r.t. Si , as shown below.
Figure 1: Absorbing random walks on a 6-point graph. [Figure: two 3-vertex clusters, {1, 2, 3} and
{4, 5, 6}, each internally connected by unit-weight edges and joined by a single edge (3, 4) of weight
0.1; vertices 1 and 6 are labeled. The absorption probabilities to vertex 1 are f(1) = 1, f(2) = 0.97,
f(3) = 0.94, f(4) = 0.06, f(5) = 0.03, f(6) = 0.]
Corollary 3.1. L_f(S_i) = ∑_{k∼1} w_1k (1 − f(k)), i = 1, . . . , n − 1.
The following statement shows that f changes continuously on graphs under general conditions.
Corollary 3.2. Suppose f takes mutually different values on the unlabeled data. Then f is continuous.
Since the harmonic loss of f is a constant on the superlevel sets S_i (Corollary 3.1), by Theorems
2.7 and 2.8, the variation of f depends solely on the cut value w(S_i, S̄_i), which indicates that it will
drop slowly when the cut is dense but drastically when the cut is sparse. Also by Corollary 3.2, f is
continuous. Therefore, we conclude that f is a good function on graphs.
This can be illustrated by a toy example in Fig. 1, where the graph consists of 6 points in 2 classes
denoted by different colors, with 3 points in each. The edge weights are all 1 except for the edge
between the two clusters, which is 0.1. Vertices 1 and 6 (black edged) are labeled. The absorption
probabilities from all the vertices to vertex 1 are computed and shown. We can see that since the
cut w(S_2, S̄_2) = 2 is quite dense, the drop between f(2) and f(3) is upper bounded by a small
number (Theorem 2.7), so f(3) must be very close to f(2), as observed. In contrast, since the cut
w(S_3, S̄_3) = 0.1 is very weak, Theorem 2.8 guarantees that there will be a huge gap between f(3)
and f(4), as also verified. The bound in Theorem 2.8 is now tight as there is only 1 edge in the cut.
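The toy example can be reproduced in a few lines. The sketch below (our transcription of the 6-point graph, with vertices 0-indexed) solves the absorbing random walk with the two labeled vertices absorbing, recovers the probabilities shown in Fig. 1, and also confirms Corollary 3.1's claim that the harmonic loss is the same constant on every superlevel set:

```python
import numpy as np

# Toy graph of Fig. 1 (0-indexed): triangles {0,1,2} and {3,4,5} with unit
# weights, joined by the weak edge (2,3) of weight 0.1; vertices 0 and 5
# (1 and 6 in the paper's numbering) are the labeled/absorbing states.
W = np.zeros((6, 6))
for i, j in [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5)]:
    W[i, j] = W[j, i] = 1.0
W[2, 3] = W[3, 2] = 0.1

d = W.sum(axis=1)
P = W / d[:, None]                       # random-walk transition matrix

trans = [1, 2, 3, 4]                     # unlabeled (transient) vertices
Q = P[np.ix_(trans, trans)]
b = P[trans, 0]                          # one-step probability of hitting vertex 0
f = np.zeros(6)
f[0] = 1.0
f[trans] = np.linalg.solve(np.eye(4) - Q, b)   # first-step analysis, Eq. (4)

assert np.allclose(np.round(f, 2), [1.0, 0.97, 0.94, 0.06, 0.03, 0.0])

# Corollary 3.1: f is already sorted descending, so S_i = {first i vertices};
# the harmonic loss is the same constant on each S_i, i < n.
L = np.diag(d) - W
losses = [(L @ f)[:i].sum() for i in range(1, 6)]
assert np.allclose(losses, losses[0])
```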
Now let f_1 and f_2 denote the absorption probability vectors to the two labeled points respectively.
To classify an unlabeled point i, the usual way is to compare f_1(i) and f_2(i), which is equivalent to
setting the threshold at 0 in f_0 = f_1 − f_2. It was observed in [16] that although f_0 can be extremely
flat in the presence of large unlabeled data in high dimension, setting the "right" threshold can
produce sensible results. Our analysis explains this: both f_1 and f_2 are informative of
the cluster structures. Our key argument is that Laplacian regularization actually carries sufficient
information about the graph structure, but how to exploit it can really make a difference.
Figure 2: (a) Two 20-dimensional Gaussians with the first two dimensions plotted. The magenta
triangle and the green circle denote labeled data. The blue cross denotes a starting vertex indexed
by i for later use. (b) Absorption probabilities to the two labeled points. (c) Classification by
comparing the absorption probabilities. (d) Normalized absorption probabilities. (e) Classification
by comparing the normalized absorption probabilities.
We illustrate this point using a mixture of two 20-dimensional Gaussians with 600 points, with one
label in each Gaussian (Fig. 2(a)). The absorption probabilities to the two labeled points are shown in
Fig. 2(b), in magenta and green respectively. The green vector is well above the magenta vector,
indicating that every unlabeled point has a larger absorption probability to the green labeled point.
Comparing them classifies all the unlabeled points to the green Gaussian (Fig. 2(c)). Since the green
labeled point has a larger degree than the magenta one¹, this result is expected from the analysis in
[1]. However, the probability vectors are informative, with a clear gap between the clusters in each
¹The degrees are 1.4405 and 0.1435. We use a weighted 20-NN graph (see Supplement).
vector. To use this information, we propose to normalize each vector by its probability mass, i.e.,
f*(i) = f(i)/Σ_j f(j) (Fig. 2(d)). Comparing the normalized vectors leads to a perfect classification (Fig. 2(e)).
This idea is based on two observations from our analysis: 1) the variance of the probabilities within
each cluster is small; 2) there is a gap between the clusters. The small variance indicates that
comparing the probabilities is essentially the same as comparing their means within clusters. The
gap between the clusters ensures that the normalization makes the vectors align well (this point is
made precise in the Supplement). Our analysis above applies to multi-class problems and allows more
than one labeled point per class. In this general case, the classification rule is as follows: 1)
compute the absorption probability vector f_i : U → R for each labeled point i by taking all other
labeled points as negative, where U denotes the set of unlabeled points; 2) normalize f_i by its mass,
and denote the result by f_i*; 3) assign each unlabeled point j to the class of j* := arg max_i {f_i*(j)}. We denote
this algorithm as ARW-N-1NN.
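As a concrete illustration, the rule above can be sketched in a few lines of NumPy. The function name, the toy graph of Fig. 1, and the dense linear solve are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def arw_n_1nn(W, labeled, labels):
    """Sketch of the ARW-N-1NN rule above.  W is a symmetric weight matrix,
    `labeled` the indices of labeled vertices, `labels` their classes."""
    n = W.shape[0]
    U = [v for v in range(n) if v not in set(labeled)]
    P = W / W.sum(axis=1, keepdims=True)        # row-stochastic transitions
    # F[u, i]: probability that a walk from unlabeled vertex u is absorbed
    # at labeled vertex labeled[i] (all labeled points made absorbing).
    F = np.linalg.solve(np.eye(len(U)) - P[np.ix_(U, U)], P[np.ix_(U, labeled)])
    Fn = F / F.sum(axis=0, keepdims=True)       # step 2: normalize by mass
    # Step 3: assign each unlabeled point to the class of the argmax column.
    return F, {u: labels[int(np.argmax(Fn[r]))] for r, u in enumerate(U)}

# Toy graph of Fig. 1: two triangles joined by a weak (0.1) edge,
# vertices 0 and 5 (1 and 6 in the 1-indexed figure) labeled.
W = np.zeros((6, 6))
for a, b in [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5)]:
    W[a, b] = W[b, a] = 1.0
W[2, 3] = W[3, 2] = 0.1
F, pred = arw_n_1nn(W, [0, 5], ["A", "B"])
print(pred)   # vertices 1, 2 follow the "A" label; vertices 3, 4 follow "B"
```

The absorption probabilities in each row of F sum to one, since every walk on a connected graph is eventually absorbed at some labeled point.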
3.2 Partially Absorbing Random Walks
Here we revisit the recently proposed partially absorbing random walks (PARW) [23], which generalize absorbing random walks by allowing partial absorption at each state. The absorption rate p_ii
at state i is defined as p_ii = αλ_i/(αλ_i + d_i), where α > 0 and λ_i > 0 are regularization parameters. Given the current state i, a PARW in the next step gets absorbed at i with probability p_ii, and with probability
(1 − p_ii) · w_ij/d_i it moves to state j. Let a_ij be the probability that a PARW starting from state i gets
absorbed at state j within finitely many steps, and denote by A = [a_ij] ∈ R^{n×n} the absorption probability
matrix. Then A = (αΛ + L)^{-1}αΛ, where Λ = diag(λ_1, ..., λ_n) is the regularization matrix.
PARW is a unified framework with several popular SSL methods and PageRank [17] as its special
cases, corresponding to different choices of Λ. In particular, the case Λ = I has been justified as capturing the
cluster structures [23]. In what follows, we extend this result to show that the columns of A obtained
by PARW with almost arbitrary Λ (not just Λ = I) actually exhibit strong harmonic structures and
should be expected to work equally well.
Our first observation is that while A is not symmetric for arbitrary Λ, AΛ^{-1} = (αΛ + L)^{-1}α is.
Lemma 3.3. a_ij = (λ_j/λ_i) a_ji.
Lemma 3.4. a_ii is the unique largest entry in the i-th column of A, for i = 1, ..., n.
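These properties are easy to check numerically. The sketch below builds A on a small random weighted graph and verifies the row-stochasticity of A, Lemma 3.3, and Lemma 3.4; the graph and the value of alpha are arbitrary illustrative choices:

```python
import numpy as np

# Numerical check of A = (alpha*Lam + L)^{-1} alpha*Lam and Lemmas 3.3-3.4.
rng = np.random.default_rng(0)
n = 8
W = rng.random((n, n))
W = (W + W.T) / 2.0
np.fill_diagonal(W, 0.0)                    # dense random weighted graph
L = np.diag(W.sum(axis=1)) - W              # unnormalized graph Laplacian
alpha = 0.1
lam = rng.uniform(0.1, 1.0, n)              # random regularization matrix Lam
A = np.linalg.solve(alpha * np.diag(lam) + L, alpha * np.diag(lam))
# Every walk is eventually absorbed somewhere, so rows of A sum to 1.
print(np.allclose(A.sum(axis=1), 1.0))
# Lemma 3.3: A Lam^{-1} = (alpha*Lam + L)^{-1} alpha is symmetric,
# i.e. a_ij = (lam_j / lam_i) a_ji.
S = A @ np.diag(1.0 / lam)
print(np.allclose(S, S.T))
# Lemma 3.4: the diagonal entry is the largest in each column.
print((np.argmax(A, axis=0) == np.arange(n)).all())
```

Row-stochasticity follows because L1 = 0, so (αΛ + L)1 = αΛ1, and Lemma 3.3 follows because αΛ + L is symmetric.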
Our second observation is that the harmonic structure exists in the probabilities of a PARW from every
vertex getting absorbed at a particular vertex, i.e., in the columns of A. W.l.o.g., consider the first
column of A and denote it by p. Assume that the vertices are sorted such that p(1) > p(2) ≥ ··· ≥
p(n−1) ≥ p(n), where p(1) > p(2) is due to Lemma 3.4. By the first-step analysis of PARW, we
can write p in a recursive form:

  p(1) = αλ_1/(d_1 + αλ_1) + Σ_{k∼1} [w_{1k}/(d_1 + αλ_1)] p(k),
  p(i) = Σ_{k∼i} [w_{ik}/(d_i + αλ_i)] p(k),  i = 2, ..., n,   (5)

which is equivalent to the following harmonic form:

  p(1) = (αλ_1/d_1)(1 − p(1)) + Σ_{k∼1} (w_{1k}/d_1) p(k),
  p(i) = −(αλ_i/d_i) p(i) + Σ_{k∼i} (w_{ik}/d_i) p(k),  i = 2, ..., n.   (6)
The harmonic loss of p can be computed from Eq. (6).
Corollary 3.5. L_p(S_i) = αλ_1 (1 − Σ_{k∈S_i} a_{1k}) = αλ_1 Σ_{k∈S̄_i} a_{1k},  i = 1, ..., n−1.
Corollary 3.6. p is left-continuous.
P
P
Now we are ready to examine the variation of p. Note that k a1k = 1 and a1k ? ?k / i ?i
as ? ? 0 [23]. By Theorem 2.7, the drop of p(i) is upper bounded by ??1 /w(Si , S?i ), which is
small when the cut w(Si , S?i )Pis dense and ?Pis small. Now let k be the largest number such that
d(Sk ) ? 12 d(V), and assume i?S?k ?i ? 21 i ?i . By Theorem 2.8, for 1 ? i ? k, the drop of p(i)
across the cut {Si , S?i } is lower bounded by 13 ??1 /w(Si , S?i ), if ? is sufficiently small. This shows
that p(i) will drop a lot when the cut w(Si , S?i ) is weak. The comparison between the corresponding
row and column of A is shown in Figs. 3(a)-(b)², which confirms our analysis.
²The λ_i's are sampled from the uniform distribution on the interval [0, 1] and α = 1e−6, as used in Sec. 4.
Figure 3: (a) Absorption probabilities that a PARW gets absorbed at other points when starting from
i (see Fig. 2). (b) Absorption probabilities that a PARW gets absorbed at i when starting from other
points. (c) The i-th row of L†. (d) Hitting times from i to hit other points. (e) Hitting times from
other points to hit i. (f) and (g) Eigenvectors of L (min_i{d_i} = 0.0173), with eigenvalues λ_u = 0.0144
and λ_u = 0.0172. (h) An eigenvector of L_sym, with eigenvalue 0.0304. (i) and (j) Eigenvectors of
L_rw, with eigenvalues λ_v = 0.0304 and λ_v = 0.3845.
It is worth mentioning that our analysis substantially extends the results in [23] by showing that the
particular setting of Λ is not really necessary: a random Λ can perform equally well if one uses the columns
instead of the rows of A. In addition, our result includes the seminal local clustering model [2] as a
special case, which corresponds to Λ = D in our analysis.
3.3 Pseudo-inverse of the Graph Laplacian
The pseudo-inverse L† of the graph Laplacian is a valid kernel corresponding to commute times
[10, 12]. While commute times may fail to capture the global topology in large graphs [22], L†, if
used directly as a similarity measure, gives superior performance in practice [10]. Here we provide
a formal analysis and justification for L† by revealing the strong harmonic structure hidden in it.
Lemma 3.7. (L†L)_{ij} = −1/n for i ≠ j, and (L†L)_{ii} = 1 − 1/n.
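For a connected graph this identity says L†L = I − (1/n)11ᵀ, which is easy to verify numerically; the small path graph below is an arbitrary illustrative example:

```python
import numpy as np

# Numerical check of Lemma 3.7 on an unweighted path graph.
n = 6
W = np.zeros((n, n))
for i in range(n - 1):
    W[i, i + 1] = W[i + 1, i] = 1.0
L = np.diag(W.sum(axis=1)) - W           # graph Laplacian of the path
Ldag = np.linalg.pinv(L)                 # pseudo-inverse of the Laplacian
# L^dag L is the projection onto the complement of the all-ones vector:
# off-diagonal entries -1/n, diagonal entries 1 - 1/n.
print(np.allclose(Ldag @ L, np.eye(n) - np.ones((n, n)) / n))
```

The identity holds because, for a connected graph, the null space of L is spanned by the all-ones vector.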
Note that L† is symmetric since L is symmetric. W.l.o.g., we consider the first row of L† and denote
it by ℓ. The following lemma shows the harmonic form of ℓ.
Lemma 3.8. ℓ has the following harmonic form:

  ℓ(1) = (1 − 1/n)/d_1 + Σ_{k∼1} (w_{1k}/d_1) ℓ(k),
  ℓ(i) = −1/(n d_i) + Σ_{k∼i} (w_{ik}/d_i) ℓ(k),  i = 2, ..., n.   (7)
W.l.o.g., assume the vertices have been sorted such that ℓ(1) > ℓ(2) ≥ ··· ≥ ℓ(n−1) ≥ ℓ(n)³.
Then the harmonic loss of ℓ on the set S_i admits a very simple form, as shown below.
Corollary 3.9. L_ℓ(S_i) = |S̄_i|/n,  i = 1, ..., n−1.
Corollary 3.10. ℓ is left-continuous.
By Corollary 3.9, L_ℓ(S_i) < 1 and decreases very slowly in large graphs, since L_ℓ(S_i) − L_ℓ(S_{i+1}) =
1/n for any i. From the analysis in Sec. 2, we can immediately conclude that the variation of ℓ(i) is
dominated by the cut cost on the superlevel set S_i. Fig. 3(c) illustrates this argument.
3.4 Hitting Times
The hitting time h_ij from vertex i to j is the expected number of steps it takes a random walk starting
from i to reach j for the first time. While it was proven in [22] that hitting times are dominated by
the local structure of the target, we show below that the hitting times from other points to the same
target admit a harmonic structure, and thus are still able to capture the global structure of graphs.
Our result is complementary to the analysis in [22], and provides a justification for using hitting times
in information retrieval, where the query is taken as the target to be hit by others [15].
³ℓ(1) > ℓ(2), since one can show that any diagonal entry of L† is the unique largest entry in the corresponding row.
Let h : V → R be the hitting times from every vertex to a particular target vertex. W.l.o.g., assume the
vertices have been sorted such that h(1) ≥ h(2) ≥ ··· ≥ h(n−1) > h(n) = 0, where vertex n is
the target vertex. Applying the first-step analysis, we obtain the harmonic form of h:

  h(i) = 1 + Σ_{k∼i} (w_{ik}/d_i) h(k),  for i = 1, ..., n−1.   (8)
The harmonic loss on the set S_i turns out to be the volume of the set, as stated below.
Corollary 3.11. L_h(S_i) = Σ_{1≤k≤i} d_k,  i = 1, ..., n−1.
Corollary 3.12. h is right-continuous.
Now let us examine the variation of h across any cut {S_i, S̄_i}. Note that

  L_h(S_i)/w(S_i, S̄_i) = β_i/φ(S_i),  where β_i = d(S_i)/min(d(S_i), d(S̄_i)).   (9)
First, by Theorem 2.8, there could be a significant gap between the target and its neighbors, since
β_{n−1} = d(V)/d_n − 1 could be quite large. As i decreases from the regime d(S_i) > (1/2) d(V), the variation of β_i
becomes slower and slower (β_i = 1 when d(S_i) ≤ (1/2) d(V)), so the variation of h will depend on the
variation of the conductance of S_i, i.e., φ(S_i), according to Theorems 2.7 and 2.8. Fig. 3(e) shows
that h is flat within the clusters, but there is a large gap between them. In contrast, there
are no gaps exhibited in the hitting times from the target to other vertices (Fig. 3(d)).
3.5 Eigenvectors of the Laplacian Matrices
The eigenvectors of the Laplacian matrices play a key role in graph partitioning [20]. In practice, the
eigenvectors with smaller (positive) eigenvalues are more desirable than those with larger eigenvalues,
and the ones from a normalized Laplacian are preferred over those from the un-normalized one.
These choices are usually justified via the relaxations of normalized cuts [18] and ratio cuts
[11]. However, it is known that these relaxations can be arbitrarily loose [20]. It would be more
satisfying to draw conclusions by analyzing the eigenvectors directly. Here we address
these issues by examining the harmonic structures in these eigenvectors.
We follow the notation of [20] and denote the two normalized graph Laplacians by L_rw := D^{-1}L and
L_sym := D^{-1/2} L D^{-1/2}. Denote by u and v two eigenvectors of L and L_rw with eigenvalues λ_u > 0
and λ_v > 0, respectively, i.e., Lu = λ_u u and L_rw v = λ_v v. Then we have

  u(i) = Σ_{k∼i} [w_{ik}/(d_i − λ_u)] u(k),
  v(i) = Σ_{k∼i} [w_{ik}/(d_i(1 − λ_v))] v(k),  for i = 1, ..., n.   (10)
We can see that the smaller λ_u and λ_v are, the stronger the harmonic structures of u and v. This explains
why in practice the eigenvector with the second⁴ smallest eigenvalue gives superior performance.
As long as λ_u ≪ min_i{d_i}, we are safe to say that u will have a significant harmonic structure, and
thus will be informative for clustering. However, if λ_u is close to min_i{d_i}, then no matter how small λ_u
is, the harmonic structure of u will be weaker, and thus u is less useful. In contrast, from Eq. (10),
v will always enjoy a significant harmonic structure as long as λ_v is much smaller than 1. This
explains why the eigenvectors of L_rw are preferred over those of L for clustering. These arguments are
validated in Figs. 3(f)-(j), where we also include an eigenvector of L_sym for comparison.
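Both identities in Eq. (10) can be checked directly from an eigendecomposition; the unweighted path graph below is an arbitrary small example:

```python
import numpy as np

# Check the harmonic forms in Eq. (10) for eigenvectors of L and L_rw.
n = 8
W = np.zeros((n, n))
for i in range(n - 1):
    W[i, i + 1] = W[i + 1, i] = 1.0      # unweighted path graph
d = W.sum(axis=1)
L = np.diag(d) - W
evals, evecs = np.linalg.eigh(L)
lam_u, u = evals[1], evecs[:, 1]         # second smallest eigenvalue of L
ev, V = np.linalg.eig(np.diag(1.0 / d) @ L)   # L_rw = D^{-1} L
order = np.argsort(ev.real)
lam_v, v = ev.real[order[1]], V[:, order[1]].real
# Eq. (10): u(i) = sum_{k~i} w_ik u(k) / (d_i - lam_u),
#           v(i) = sum_{k~i} w_ik v(k) / (d_i (1 - lam_v)).
print(np.allclose(u, (W @ u) / (d - lam_u)))
print(np.allclose(v, (W @ v) / (d * (1 - lam_v))))
```

Both checks are just rearrangements of the eigenvalue equations, so they hold for any eigenvector with λ_u ≠ d_i and λ_v ≠ 1.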
4 Experiments
In the first experiment⁵, we test absorbing random walks (ARW) for SSL with the class mass normalization suggested in [26] (ARW-CMN), with our proposed normalization (ARW-N-1NN, Sec. 3.1),
and without any normalization (ARW-1NN), where each unlabeled instance is assigned the class of
the labeled instance at which it most likely gets absorbed. We also compare with the local and global
⁴Note that the smallest eigenvalue is zero in either L or L_rw.
⁵Please see the Supplement for parameter settings, data description, graph construction, and experimental setup.
Table 1: Classification accuracy on 9 datasets.

Method        | USPS | YaleB | satimage | imageseg | ionosphere | iris | protein | spiral | soybean
ARW-N-1NN     | .879 | .892  | .777     | .673     | .771       | .918 | .589    | .830   | .916
ARW-1NN       | .445 | .733  | .650     | .595     | .699       | .902 | .440    | .754   | .889
ARW-CMN       | .775 | .847  | .741     | .624     | .724       | .894 | .511    | .726   | .856
LGC           | .821 | .884  | .725     | .638     | .731       | .903 | .477    | .729   | .816
PARW (Λ = I)  | .880 | .906  | .781     | .665     | .752       | .928 | .572    | .835   | .905
consistency (LGC) method [24] and the PARW with Λ = I in [23]. The results are summarized in
Table 1. We can see that ARW-N-1NN and PARW (Λ = I) consistently perform the best, which
verifies our analysis in Sec. 3. The results of ARW-1NN are unsatisfactory due to its bias toward the
labeled instance with the largest degree [1]. Although ARW-CMN does improve over ARW-1NN in
many cases, it does not perform as well as ARW-N-1NN, mainly because of the artifacts induced by
estimating the class proportions from limited labeled data. The results of LGC are not comparable to
ARW-N-1NN and PARW (Λ = I), which is probably due to the lack of a harmonic structure.
Table 2: Ranking results (MAP) on USPS.

Digits         | 0    | 1    | 2    | 3    | 4    | 5    | 6    | 7    | 8    | 9    | All
Λ = R (column) | .981 | .988 | .875 | .892 | .647 | .780 | .941 | .918 | .746 | .731 | .850
Λ = R (row)    | .169 | .143 | .114 | .096 | .092 | .076 | .093 | .093 | .075 | .086 | .103
Λ = I          | .981 | .988 | .876 | .893 | .646 | .778 | .940 | .919 | .746 | .730 | .850
In the second experiment, we test PARW on a retrieval task on USPS (see Supplement). We compare
the cases Λ = I and Λ = R, where R is a random diagonal matrix with positive diagonal
entries. For Λ = R, we also compare the use of columns versus rows for retrieval. The results are
shown in Table 2. We observe that the columns for Λ = R give significantly better results than
the rows, implying that the harmonic structure is vital to the performance. Λ = R (column) and
Λ = I perform very similarly. This suggests that it is not the special setting of the absorbing rates but
the harmonic structure that determines the overall performance.
Table 3: Classification accuracy on USPS.

k-NN unweighted graphs | 10    | 20    | 50    | 100   | 200   | 500
HT(L → U)              | .8514 | .8361 | .7822 | .7500 | .7071 | .6429
HT(U → L)              | .1518 | .1454 | .1372 | .1209 | .1131 | .1113
L†                     | .8512 | .8359 | .7816 | .7493 | .7062 | .6426
In the third experiment, we test hitting times and the pseudo-inverse of the graph Laplacian for SSL on
USPS. We compare two different uses of hitting times: the case of starting from the labeled data L
to hit the unlabeled data U (HT(L → U)), and the case of the opposite direction (HT(U → L)).
Each unlabeled instance j is assigned the class of labeled instance j*, where j* = arg min_{i∈L} {h_ij}
in HT(L → U), j* = arg min_{i∈L} {h_ji} in HT(U → L), and j* = arg max_{i∈L} {ℓ_ji} in L† = (ℓ_ij).
The results averaged over 100 trials are shown in Table 3, where we see that HT(L → U) performs
much better than HT(U → L), which is expected since the former admits the desired harmonic structure.
Note that HT(L → U) is not lost as the number of neighbors increases (i.e., as the graph becomes
more connected); the slight performance drop is due to the inclusion of more noisy edges. In
contrast, HT(U → L) is completely lost [20]. We also observe that L† produces very competitive
performance, which again supports our analysis.
5 Conclusion
In this paper, we explore the harmonic structure that widely exists in graph models. Different
from previous research [3, 13] on harmonic analysis on graphs, where the selection of canonical
bases on graphs and the asymptotic convergence on manifolds are studied, here we examine how
functions on graphs deviate from being harmonic and develop bounds to analyze their theoretical
behavior. The proposed harmonic loss quantifies the discrepancy of a function across cuts, allows a
unified treatment of various models from different contexts, and makes them easy to analyze. Given
its resemblance to standard mathematical concepts such as divergence and total variation, an
interesting line of future work is to make these connections precise. Other future work includes deriving
more rigorous bounds for certain functions and extending our analysis to more graph models.
References
[1] M. Alamgir and U. von Luxburg. Phase transition in the family of p-resistances. In NIPS, 2011.
[2] R. Andersen, F. Chung, and K. Lang. Local graph partitioning using pagerank vectors. In FOCS, pages 475–486, 2006.
[3] M. Belkin. Problems of Learning on Manifolds. PhD thesis, The University of Chicago, 2003.
[4] M. Belkin, I. Matveeva, and P. Niyogi. Regularization and semi-supervised learning on large graphs. In COLT, pages 624–638, 2004.
[5] M. Belkin, Q. Que, Y. Wang, and X. Zhou. Toward understanding complex spaces: Graph Laplacians on manifolds with singularities and boundaries. In COLT, 2012.
[6] O. Bousquet, O. Chapelle, and M. Hein. Measure based regularization. In NIPS, 2003.
[7] F. Chung. Spectral Graph Theory. American Mathematical Society, 1997.
[8] R. Coifman and S. Lafon. Diffusion maps. Applied and Computational Harmonic Analysis, 21(1):5–30, 2006.
[9] P. G. Doyle and J. L. Snell. Random Walks and Electric Networks. Mathematical Association of America, 1984.
[10] F. Fouss, A. Pirotte, J.-M. Renders, and M. Saerens. Random-walk computation of similarities between nodes of a graph with application to collaborative recommendation. IEEE Transactions on Knowledge and Data Engineering, 19(3):355–369, 2007.
[11] L. Hagen and A. B. Kahng. New spectral methods for ratio cut partitioning and clustering. IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, 11(9):1074–1085, 1992.
[12] D. J. Klein and M. Randić. Resistance distance. Journal of Mathematical Chemistry, 12(1):81–95, 1993.
[13] S. S. Lafon. Diffusion Maps and Geometric Harmonics. PhD thesis, Yale University, 2004.
[14] M. H. G. Lever and M. Herbster. Predicting the labelling of a graph via minimum p-seminorm interpolation. In COLT, 2009.
[15] Q. Mei, D. Zhou, and K. Church. Query suggestion using hitting time. In CIKM, pages 469–478, 2008.
[16] B. Nadler, N. Srebro, and X. Zhou. Statistical analysis of semi-supervised learning: The limit of infinite unlabelled data. In NIPS, pages 1330–1338, 2009.
[17] L. Page, S. Brin, R. Motwani, and T. Winograd. The pagerank citation ranking: Bringing order to the web. 1999.
[18] J. Shi and J. Malik. Normalized cuts and image segmentation. IEEE Trans. PAMI, 22(8):888–905, 2000.
[19] M. Szummer and T. Jaakkola. Partially labeled classification with Markov random walks. In NIPS, pages 945–952, 2002.
[20] U. von Luxburg. A tutorial on spectral clustering. Statistics and Computing, 17(4):395–416, 2007.
[21] U. von Luxburg, M. Belkin, and O. Bousquet. Consistency of spectral clustering. The Annals of Statistics, pages 555–586, 2008.
[22] U. von Luxburg, A. Radl, and M. Hein. Hitting and commute times in large graphs are often misleading. arXiv preprint arXiv:1003.1266, 2010.
[23] X.-M. Wu, Z. Li, A. M.-C. So, J. Wright, and S.-F. Chang. Learning with partially absorbing random walks. In NIPS, 2012.
[24] D. Zhou, O. Bousquet, T. Lal, J. Weston, and B. Schölkopf. Learning with local and global consistency. In NIPS, 2004.
[25] X. Zhou and M. Belkin. Semi-supervised learning by higher order regularization. In AISTATS, 2011.
[26] X. Zhu, Z. Ghahramani, and J. Lafferty. Semi-supervised learning using Gaussian fields and harmonic functions. In ICML, 2003.
Learning Gaussian Graphical Models with Observed
or Latent FVSs
Alan S. Willsky
Department of EECS
Massachusetts Institute of Technology
[email protected]
Ying Liu
Department of EECS
Massachusetts Institute of Technology
[email protected]
Abstract
Gaussian Graphical Models (GGMs) or Gauss Markov random fields are widely
used in many applications, and the trade-off between the modeling capacity and
the efficiency of learning and inference has been an important research problem. In this paper, we study the family of GGMs with small feedback vertex
sets (FVSs), where an FVS is a set of nodes whose removal breaks all the cycles.
Exact inference such as computing the marginal distributions and the partition
function has complexity O(k²n) using message-passing algorithms, where k is
the size of the FVS, and n is the total number of nodes. We propose efficient
structure learning algorithms for two cases: 1) All nodes are observed, which is
useful in modeling social or flight networks where the FVS nodes often correspond to a small number of highly influential nodes, or hubs, while the rest of
the networks is modeled by a tree. Regardless of the maximum degree, without
knowing the full graph structure, we can exactly compute the maximum likelihood
estimate with complexity O(kn2 + n2 log n) if the FVS is known or in polynomial time if the FVS is unknown but has bounded size. 2) The FVS nodes are
latent variables, where structure learning is equivalent to decomposing an inverse
covariance matrix (exactly or approximately) into the sum of a tree-structured matrix and a low-rank matrix. By incorporating efficient inference into the learning
steps, we can obtain a learning algorithm using alternating low-rank corrections
with complexity O(kn² + n² log n) per iteration. We perform experiments using
both synthetic data as well as real data of flight delays to demonstrate the modeling
capacity with FVSs of various sizes.
1 Introduction
In undirected graphical models or Markov random fields, each node represents a random variable
while the set of edges specifies the conditional independencies of the underlying distribution. When
the random variables are jointly Gaussian, the models are called Gaussian graphical models (GGMs)
or Gauss Markov random fields. GGMs, such as linear state space models, Bayesian linear regression models, and thin-membrane/thin-plate models, have been widely used in communication, image processing, medical diagnostics, and gene regulatory networks. In general, a larger family of
graphs represent a larger collection of distributions and thus can better approximate arbitrary empirical distributions. However, many graphs lead to computationally expensive inference and learning
algorithms. Hence, it is important to study the trade-off between modeling capacity and efficiency.
Both inference and learning are efficient for tree-structured graphs (graphs without cycles): inference can be computed exactly in linear time (with respect to the size of the graph) using belief
propagation (BP) [1] while the learning problem can be solved exactly in quadratic time using the
Chow-Liu algorithm [2]. Since trees have limited modeling capacity, many models beyond trees
have been proposed [3, 4, 5, 6]. Thin junction trees (graphs with low tree-width) are extensions of
trees, where inference can be solved efficiently using the junction algorithm [7]. However, learning
junction trees with tree-width greater than one is NP-complete [6] and tractable learning algorithms
(e.g. [8]) often have constraints on both the tree-width and the maximum degree. Since graphs with
large-degree nodes are important in modeling applications such as social networks, flight networks,
and robotic localization, we are interested in finding a family of models that allow arbitrarily large
degrees while being tractable for learning.
Beyond thin-junction trees, the family of sparse GGMs is also widely studied [9, 10]. These models
are often estimated using methods such as graphical lasso (or ℓ1-regularization) [11, 12]. However,
a sparse GGM (e.g. a grid) does not automatically lead to efficient algorithms for exact inference.
Hence, we are interested in finding a family of models that are not only sparse but also have guaranteed efficient inference algorithms.
In this paper, we study the family of GGMs with small feedback vertex sets (FVSs), where an FVS
is a set of nodes whose removal breaks all cycles [13]. The authors of [14] have demonstrated
that the computation of exact means and variances for such a GGM can be accomplished, using
message-passing algorithms with complexity O(k²n), where k is the size of the FVS and n is the
total number of nodes. They have also presented results showing that for models with larger FVSs,
approximate inference (obtained by replacing a full FVS by a pseudo-FVS) can work very well,
with empirical evidence indicating that a pseudo-FVS of size O(log n) gives excellent results. In
Appendix A we will provide some additional analysis of inference for such models (including the
computation of the partition function), but the main focus is maximum likelihood (ML) learning of
models with FVSs of modest size, including identifying the nodes to include in the FVS.
In particular, we investigate two cases. In the first, all of the variables, including any to be included
in the FVS, are observed. We provide an algorithm for exact ML estimation that, regardless of the
maximum degree, has complexity O(kn² + n² log n) if the FVS nodes are identified in advance and
polynomial complexity if the FVS is to be learned and of bounded size. Moreover, we provide an
approximate and much faster greedy algorithm when the FVS is unknown and large. In the second
case, the FVS nodes are taken to be latent variables. In this case, the structure learning problem
corresponds to the (exact or approximate) decomposition of an inverse covariance matrix into the
sum of a tree-structured matrix and a low-rank matrix. We propose an algorithm that iterates between
two projections, which can also be interpreted as alternating low-rank corrections. We prove that
even though the second projection is onto a highly non-convex set, it is carried out exactly, thanks
to the properties of GGMs of this family. By carefully incorporating efficient inference into the
learning steps, we can further reduce the complexity to O(kn² + n² log n) per iteration. We also
perform experiments using both synthetic data and real data of flight delays to demonstrate the
modeling capacity with FVSs of various sizes. We show that empirically the family of GGMs with
FVSs of size O(log n) strikes a good balance between the modeling capacity and efficiency.
Related Work In the context of classification, the authors of [15] have proposed the tree augmented naive Bayesian model, where the class label variable itself can be viewed as a size-one
observed FVS; however, this model does not naturally extend to include a larger FVS. In [16], a
convex optimization framework is proposed to learn GGMs with latent variables, where conditioned
on a small number of latent variables, the remaining nodes induce a sparse graph. In our setting with
latent FVSs, we further require the sparse subgraph to have tree structure.
2 Preliminaries
Each undirected graphical model has an underlying graph G = (V, E), where V denotes the set of
vertices (nodes) and E the set of edges. Each node s ? V corresponds to a random variable xs .
When the random vector xV is jointly Gaussian, the model is a GGM with density function given
by p(x) = Z1 exp{? 12 xT Jx + hT x}, where J is the information matrix or precision matrix, h is
the potential vector, and Z is the partition function. The parameters J and h are related to the mean
? and covariance matrix ? by ? = J ?1 h and ? = J ?1 . The structure of the underlying graph is
revealed by the sparsity pattern of J: there is an edge between i and j if and only if Jij 6= 0.
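As a concrete illustration of the two parametrizations, the following pure-Python sketch (a hypothetical 2-node model, not taken from the paper) converts the information form (J, h) into the moment form (μ, Σ); the off-diagonal entry of J being non-zero means the two nodes share an edge:

```python
def inv2(M):
    # Inverse of a 2x2 matrix [[a, b], [c, d]] via the adjugate formula.
    (a, b), (c, d) = M
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

# Information parameters of a 2-node GGM (hypothetical numbers).
J = [[2.0, -0.5], [-0.5, 1.0]]   # J_12 != 0, so nodes 1 and 2 are connected
h = [1.0, 0.0]

Sigma = inv2(J)                  # covariance: Sigma = J^{-1}
mu = [sum(Sigma[i][k] * h[k] for k in range(2)) for i in range(2)]  # mu = J^{-1} h
```

The same conversion in the other direction (J = Σ^{-1}, h = Jμ) recovers the information form.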
Given samples {x_i}_{i=1}^s independently generated from an unknown distribution q in the family Q,
the ML estimate is defined as $q_{\mathrm{ML}} = \arg\max_{q \in Q} \sum_{i=1}^{s} \log q(x_i)$. For Gaussian distributions, the
empirical distribution is $\hat p(x) = N(x; \hat\mu, \hat\Sigma)$, where the empirical mean $\hat\mu = \frac{1}{s}\sum_{i=1}^{s} x_i$ and the
empirical covariance matrix $\hat\Sigma = \frac{1}{s}\sum_{i=1}^{s} x_i x_i^T - \hat\mu\hat\mu^T$. The Kullback-Leibler (K-L) divergence
between two distributions p and q is defined as $D_{\mathrm{KL}}(p\|q) = \int p(x) \log \frac{p(x)}{q(x)}\, dx$. Without loss of
generality, we assume in this paper the means are zero.
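For zero-mean Gaussians this divergence has the closed form D_KL(N(0, Σ₁) || N(0, Σ₂)) = ½(tr(Σ₂^{-1}Σ₁) − n + log(det Σ₂ / det Σ₁)), which is the quantity all the learning objectives below evaluate. A minimal pure-Python sketch for the 2×2 case (the covariance values are hypothetical):

```python
import math

def kl_gauss_2d(S1, S2):
    # D_KL(N(0, S1) || N(0, S2)) for 2x2 covariance matrices.
    def det(M):
        return M[0][0] * M[1][1] - M[0][1] * M[1][0]
    def inv(M):
        d = det(M)
        return [[M[1][1] / d, -M[0][1] / d], [-M[1][0] / d, M[0][0] / d]]
    P = inv(S2)
    trace = sum(P[i][k] * S1[k][i] for i in range(2) for k in range(2))
    return 0.5 * (trace - 2 + math.log(det(S2) / det(S1)))

S1 = [[1.0, 0.5], [0.5, 1.0]]
S2 = [[1.0, 0.0], [0.0, 1.0]]
```

The divergence is zero exactly when the two covariances coincide, and strictly positive otherwise.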
2
Tree-structured models are models whose underlying graphs do not have cycles. The ML estimate
of a tree-structured model can be computed exactly using the Chow-Liu algorithm [2]. We use
$\Sigma_{\mathrm{CL}} = \mathrm{CL}(\hat\Sigma)$ and $E_{\mathrm{CL}} = \mathrm{CL}_E(\hat\Sigma)$ to denote respectively the covariance matrix and the set of edges
learned using the Chow-Liu algorithm where the samples have empirical covariance matrix $\hat\Sigma$.
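For Gaussians, the Chow-Liu algorithm reduces to a maximum-weight spanning tree with edge weights given by the pairwise mutual informations I(x_i; x_j) = −½ log(1 − ρ_ij²). A minimal pure-Python sketch using Kruskal's algorithm (the correlation matrix is hypothetical):

```python
import math

def chow_liu_gaussian(corr):
    # Max-weight spanning tree under Gaussian pairwise mutual information,
    # I(x_i; x_j) = -0.5 * log(1 - rho_ij^2), via Kruskal's algorithm.
    n = len(corr)
    edges = sorted(
        ((-0.5 * math.log(1.0 - corr[i][j] ** 2), i, j)
         for i in range(n) for j in range(i + 1, n)),
        reverse=True,
    )
    parent = list(range(n))

    def find(x):
        # Union-find with path compression.
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    tree = []
    for _, i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:            # adding (i, j) creates no cycle
            parent[ri] = rj
            tree.append((i, j))
    return tree

# Hypothetical correlation matrix for a 3-node chain 0 - 1 - 2.
corr = [[1.0, 0.8, 0.64],
        [0.8, 1.0, 0.8],
        [0.64, 0.8, 1.0]]
tree = sorted(chow_liu_gaussian(corr))
```

On this example the learned tree is the chain 0-1-2, since the direct correlations (0.8) carry more mutual information than the indirect one (0.64).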
3 Gaussian Graphical Models with Known FVSs
In this section we briefly discuss some of the ideas related to GGMs with FVSs of size k, where we
will also refer to the nodes in the FVS as feedback nodes. An example of a graph and its FVS is
given in Figure 1, where the full graph (Figure 1a) becomes a cycle-free graph (Figure 1b) if nodes
1 and 2 are removed, and thus the set {1, 2} is an FVS.
Figure 1: A graph with an FVS of size 2. (a) Full graph; (b) Tree-structured subgraph after removing nodes 1 and 2.
Graphs with small FVSs have been studied in various contexts. The authors of [17] have characterized the family of graphs with small FVSs and their obstruction sets (sets of forbidden minors).
FVSs are also related to the 'stable sets' in the study of tournaments [18].
Given a GGM with an FVS of size k (where the FVS may or may not be given), the marginal
means and variances $\mu_i = (J^{-1}h)_i$ and $\Sigma_{ii} = (J^{-1})_{ii}$, for all $i \in V$, can be computed exactly
with complexity O(k²n) using the feedback message passing (FMP) algorithm proposed in [14],
where standard BP is employed two times on the cycle-free subgraph among the non-feedback nodes
while a special message-passing protocol is used for the FVS nodes. We provide a new algorithm,
described and proved in Appendix A, to compute det J, the determinant of J, and hence the partition
function of such a model, with complexity O(k²n).
An important point to note is that the complexity of these algorithms depends simply on the size k
and the number of nodes n. There is no loss in generality in assuming that the size-k FVS F is fully
connected and each of the feedback nodes has edges to every non-feedback node. In particular, after
re-ordering the nodes so that the elements of F are the first k nodes (T = V \ F is the set of
non-feedback nodes of size n − k), we have that

$$J = \begin{bmatrix} J_F & J_M^T \\ J_M & J_T \end{bmatrix} \succ 0,$$

where $J_T \succ 0$ corresponds to a tree-structured subgraph among the non-feedback nodes, $J_F \succ 0$ corresponds to a complete graph
among the feedback nodes, and all entries of $J_M$ may be non-zero as long as $J_T - J_M J_F^{-1} J_M^T \succ 0$
(while

$$\Sigma = \begin{bmatrix} \Sigma_F & \Sigma_M^T \\ \Sigma_M & \Sigma_T \end{bmatrix} = J^{-1} \succ 0).$$

We will refer to the family of such models with a given
FVS F as Q_F, and the class of models with some FVS of size at most k as Q_k.¹ If we are not
explicitly given an FVS, though the problem of finding an FVS of minimal size is NP-complete, the
authors of [19] have proposed an efficient algorithm with complexity O(min{m log n, n²}), where
m is the number of edges, that yields an FVS at most twice the minimum size (thus the inference
complexity is increased only by a constant factor). However, the main focus of this paper, explored
in the next section, is on learning models with small FVSs (so that when learned, the FVS is known).
As we will see, the complexity of such algorithms is manageable. Moreover, as our experiments will
demonstrate, for many problems, quite modestly sized FVSs suffice.
4 Learning GGMs with Observed or Latent FVS of Size k
In this section, we study the problem of recovering a GGM from i.i.d. samples, where the feedback
nodes are either observed or latent variables. If all nodes are observed, the empirical distribution
$\hat p(x_F, x_T)$ is parametrized by the empirical covariance matrix

$$\hat\Sigma = \begin{bmatrix} \hat\Sigma_F & \hat\Sigma_M^T \\ \hat\Sigma_M & \hat\Sigma_T \end{bmatrix}.$$

If the feedback nodes are latent variables, the empirical distribution $\hat p(x_T)$ has empirical covariance matrix $\hat\Sigma_T$.
With a slight abuse of notation, for a set $A \subseteq V$, we use $q(x_A)$ to denote the marginal distribution
of $x_A$ under a distribution $q(x_V)$.

¹In general a graph does not have a unique FVS. The family of graphs with FVSs of size k includes all
graphs where there exists an FVS of size k.
4.1 When All Nodes Are Observed
When all nodes are observed, we have two cases: 1) When an FVS of size k is given, we propose
the conditioned Chow-Liu algorithm, which computes the exact ML estimate efficiently; 2) When
no FVS is given a priori, we propose both an exact algorithm and a greedy approximate algorithm
for computing the ML estimate.
4.1.1 Case 1: An FVS of Size k Is Given.
When a size-k FVS F is given, the learning problem becomes solving

$$p_{\mathrm{ML}}(x_F, x_T) = \arg\min_{q(x_F, x_T) \in Q_F} D_{\mathrm{KL}}(\hat p(x_F, x_T) \,\|\, q(x_F, x_T)). \tag{1}$$
This optimization problem is defined on a highly non-convex set Q_F with combinatorial structure:
indeed, there are $(n-k)^{n-k-2}$ possible spanning trees among the subgraph induced by the non-feedback nodes. However, we are able to solve Problem (1) exactly using the conditioned Chow-Liu
algorithm described in Algorithm 1.² The intuition behind this algorithm is that even though the
entire graph is not a tree, the subgraph induced by the non-feedback nodes (which corresponds to
the distribution of the non-feedback nodes conditioned on the feedback nodes) has tree structure,
and thus we can find the best tree among the non-feedback nodes using the Chow-Liu algorithm
applied on the conditional distribution. To obtain a concise expression, we also exploit a property of
Gaussian distributions: the conditional information matrix (the information matrix of the conditional
distribution) is simply a submatrix of the whole information matrix. In Step 1 of Algorithm 1, we
compute the conditional covariance matrix using the Schur complement, and then in Step 2 we
use the Chow-Liu algorithm to obtain the best approximation $\Sigma_{\mathrm{CL}}$ (whose inverse is tree-structured).
In Step 3, we match exactly the covariance matrix among the feedback nodes and the covariance
matrix between the feedback nodes and the non-feedback nodes. For the covariance matrix among
the non-feedback nodes, we add the matrix subtracted in Step 1 back to $\Sigma_{\mathrm{CL}}$. Proposition 1 states
the correctness and the complexity of Algorithm 1; its proof is included in Appendix B. We denote the
output covariance matrix of this algorithm as $\mathrm{CCL}(\hat\Sigma)$.
Algorithm 1 The conditioned Chow-Liu algorithm
Input: $\hat\Sigma \succ 0$ and an FVS F
Output: $E_{\mathrm{ML}}$ and $\Sigma_{\mathrm{ML}}$

1. Compute the conditional covariance matrix $\hat\Sigma_{T|F} = \hat\Sigma_T - \hat\Sigma_M \hat\Sigma_F^{-1} \hat\Sigma_M^T$.
2. Let $\Sigma_{\mathrm{CL}} = \mathrm{CL}(\hat\Sigma_{T|F})$ and $E_{\mathrm{CL}} = \mathrm{CL}_E(\hat\Sigma_{T|F})$.
3. $E_{\mathrm{ML}} = E_{\mathrm{CL}}$ and $\Sigma_{\mathrm{ML}} = \begin{bmatrix} \hat\Sigma_F & \hat\Sigma_M^T \\ \hat\Sigma_M & \Sigma_{\mathrm{CL}} + \hat\Sigma_M \hat\Sigma_F^{-1} \hat\Sigma_M^T \end{bmatrix}$.
Proposition 1. Algorithm 1 computes the ML estimate $\Sigma_{\mathrm{ML}}$ and $E_{\mathrm{ML}}$ exactly with complexity
O(kn² + n² log n). In addition, all the non-zero entries of $J_{\mathrm{ML}} = \Sigma_{\mathrm{ML}}^{-1}$ can be computed with
extra complexity O(k²n).
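Step 1 of Algorithm 1 is an ordinary Schur complement. A minimal pure-Python sketch for a single feedback node (k = 1; a general k would require a k×k inverse; the covariance values are hypothetical):

```python
def conditional_covariance(Sigma, f):
    # Sigma_{T|F} = Sigma_T - Sigma_M Sigma_F^{-1} Sigma_M^T
    # for a single feedback node f (so Sigma_F is the scalar Sigma[f][f]).
    n = len(Sigma)
    T = [i for i in range(n) if i != f]
    sF = Sigma[f][f]
    return [[Sigma[a][b] - Sigma[a][f] * Sigma[f][b] / sF for b in T] for a in T]

# Hypothetical 3-node covariance; node 0 plays the role of the FVS.
Sigma = [[1.0, 0.5, 0.5],
         [0.5, 1.0, 0.25],
         [0.5, 0.25, 1.0]]
C = conditional_covariance(Sigma, 0)
```

In this example the two non-feedback nodes become uncorrelated once the feedback node is conditioned on, so the conditional Chow-Liu step has a trivial tree to find.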
4.1.2 Case 2: The FVS Is to Be Learned
Structure learning becomes more computationally involved when the FVS is unknown. In this subsection, we present both exact and approximate algorithms for learning models with FVS of size
no larger than k (i.e., in Q_k). For a fixed empirical distribution $\hat p(x_F, x_T)$, we define d(F), a set
function of the FVS F, as the minimum value of (1), i.e.,
²Note that the conditioned Chow-Liu algorithm here is different from other variations of the Chow-Liu
algorithm, such as in [20], where the extensions are to enforce the inclusion or exclusion of a set of edges.
$$d(F) = \min_{q(x_F, x_T) \in Q_F} D_{\mathrm{KL}}(\hat p(x_F, x_T) \,\|\, q(x_F, x_T)). \tag{2}$$
When the FVS is unknown, the ML estimate can be computed exactly by enumerating all $\binom{n}{k}$ possible
FVSs of size k to find the F that minimizes d(F). Hence, the exact solution can be obtained
with complexity $O(n^{k+2} k)$, which is polynomial in n for fixed k. However, as our empirical results
suggest, choosing k = O(log n) works well, leading to quasi-polynomial complexity even for this
exact algorithm. That observation notwithstanding, the following greedy algorithm (Algorithm 2),
which, at each iteration, selects the single best node to add to the current set of feedback nodes, has
polynomial complexity for arbitrarily large FVSs. As we will demonstrate, this greedy algorithm
works extremely well in practice.
Algorithm 2 Selecting an FVS by a greedy approach
Initialization: $F_0 = \emptyset$
For t = 1 to k:
    $k_t^* = \arg\min_{k \in V \setminus F_{t-1}} d(F_{t-1} \cup \{k\}), \quad F_t = F_{t-1} \cup \{k_t^*\}$
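The greedy loop itself is simple; all the cost lies in evaluating d(F), which requires one run of the conditioned Chow-Liu algorithm per candidate node. The sketch below abstracts that evaluation into a callable `score`, which here is a toy stand-in (fixed per-node costs) rather than the paper's d(F):

```python
def greedy_fvs(nodes, score, k):
    # Grow the FVS one node at a time, at each step adding the node
    # that minimizes score(F + [v]); score stands in for d(F) in Eq. (2).
    F = []
    for _ in range(k):
        best = min((v for v in nodes if v not in F), key=lambda v: score(F + [v]))
        F.append(best)
    return F

# Toy stand-in for d(F): each node carries a fixed cost (hypothetical numbers).
cost = {0: 3.0, 1: 1.0, 2: 2.0}
selected = greedy_fvs([0, 1, 2], lambda F: sum(cost[v] for v in F), 2)
```

With the real d(F), each call to `score` would run Algorithm 1 on the candidate FVS and return the resulting K-L divergence, giving the polynomial overall complexity stated above.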
4.2 When the FVS Nodes Are Latent Variables
When the feedback nodes are latent variables, the marginal distribution of the observed variables (the
non-feedback nodes in the true model) has information matrix $\tilde J_T = \tilde\Sigma_T^{-1} = J_T - J_M J_F^{-1} J_M^T$. If the
exact $\tilde J_T$ is known, the learning problem is equivalent to decomposing a given inverse covariance
matrix $\tilde J_T$ into the sum of a tree-structured matrix $J_T$ and a rank-k matrix $-J_M J_F^{-1} J_M^T$.³ In general,
we use the ML criterion

$$q_{\mathrm{ML}}(x_F, x_T) = \arg\min_{q(x_F, x_T) \in Q_F} D_{\mathrm{KL}}(\hat p(x_T) \,\|\, q(x_T)), \tag{3}$$
where the optimization is over all nodes (latent and observed) while the K-L divergence in the
objective function is defined on the marginal distribution of the observed nodes only.
We propose the latent Chow-Liu algorithm, an alternating projection algorithm that is a variation
of the EM algorithm and can be viewed as an instance of the majorization-minimization algorithm.
The general form of the algorithm is as follows:
1. Project onto the empirical distribution: $\hat p^{(t)}(x_F, x_T) = \hat p(x_T)\, q^{(t)}(x_F \mid x_T)$.
2. Project onto the best fitting structure on all variables: $q^{(t+1)}(x_F, x_T) = \arg\min_{q(x_F, x_T) \in Q_F} D_{\mathrm{KL}}(\hat p^{(t)}(x_F, x_T) \,\|\, q(x_F, x_T))$.
In the first projection, we obtain a distribution (on both observed and latent variables) whose
marginal (on the observed variables) matches exactly the empirical distribution while maintaining
the conditional distribution (of the latent variables given the observed ones). In the second projection we compute a distribution (on all variables) in the family considered that is the closest to the
distribution obtained in the first projection. We found that among various EM type algorithms, this
formulation is the most revealing for our problems because it clearly relates the second projection
to the scenario where an FVS F is both observed and known (Section 4.1.1). Therefore, we are able
to compute the second projection exactly even though the graph structure is unknown (which allows
any tree structure among the observed nodes). Note that when the feedback nodes are latent, we do
not need to select the FVS since it is simply the set of latent nodes. This is the source of the simplification when we use latent nodes for the FVS: we have no search over sets of observed variables to
include in the FVS.

³It is easy to see that different models having the same $J_M J_F^{-1} J_M^T$ cannot be distinguished using the samples, and thus without loss of generality we can assume $J_F$ is normalized to be the identity matrix in the final
solution.
Algorithm 3 The latent Chow-Liu algorithm
Input: the empirical covariance matrix $\hat\Sigma_T$
Output: information matrix $J = \begin{bmatrix} J_F & J_M^T \\ J_M & J_T \end{bmatrix}$, where $J_T$ is tree-structured

1. Initialization: $J^{(0)} = \begin{bmatrix} J_F^{(0)} & (J_M^{(0)})^T \\ J_M^{(0)} & J_T^{(0)} \end{bmatrix}$.
2. Repeat for t = 1, 2, 3, ...:
   (a) P1: Project to the empirical distribution:
   $$\hat J^{(t)} = \begin{bmatrix} J_F^{(t)} & (J_M^{(t)})^T \\ J_M^{(t)} & \hat\Sigma_T^{-1} + J_M^{(t)} (J_F^{(t)})^{-1} (J_M^{(t)})^T \end{bmatrix}. \quad \text{Define } \hat\Sigma^{(t)} = (\hat J^{(t)})^{-1}.$$
   (b) P2: Project to the best fitting structure:
   $$\Sigma^{(t+1)} = \begin{bmatrix} \hat\Sigma_F^{(t)} & (\hat\Sigma_M^{(t)})^T \\ \hat\Sigma_M^{(t)} & \mathrm{CL}(\hat\Sigma_{T|F}^{(t)}) + \hat\Sigma_M^{(t)} (\hat\Sigma_F^{(t)})^{-1} (\hat\Sigma_M^{(t)})^T \end{bmatrix} = \mathrm{CCL}(\hat\Sigma^{(t)}),$$
   where $\hat\Sigma_{T|F}^{(t)} = \hat\Sigma_T^{(t)} - \hat\Sigma_M^{(t)} (\hat\Sigma_F^{(t)})^{-1} (\hat\Sigma_M^{(t)})^T$. Define $J^{(t+1)} = (\Sigma^{(t+1)})^{-1}$.
In Algorithm 3 we summarize the latent Chow-Liu algorithm specialized for our family of GGMs,
where both projections have exact closed-form solutions and exhibit complementary structure: one
uses the covariance parametrization and the other the information parametrization. In projection P1, three
blocks of the information matrix remain the same; in projection P2, three blocks of the covariance
matrix remain the same.
The two projections in Algorithm 3 can also be interpreted as alternating low-rank corrections:
indeed, in P1

$$\hat J^{(t)} = \begin{bmatrix} 0 & 0 \\ 0 & \hat\Sigma_T^{-1} \end{bmatrix} + \begin{bmatrix} J_F^{(t)} \\ J_M^{(t)} \end{bmatrix} \left(J_F^{(t)}\right)^{-1} \begin{bmatrix} J_F^{(t)} & (J_M^{(t)})^T \end{bmatrix},$$

and in P2

$$\Sigma^{(t+1)} = \begin{bmatrix} 0 & 0 \\ 0 & \mathrm{CL}(\hat\Sigma_{T|F}^{(t)}) \end{bmatrix} + \begin{bmatrix} \hat\Sigma_F^{(t)} \\ \hat\Sigma_M^{(t)} \end{bmatrix} \left(\hat\Sigma_F^{(t)}\right)^{-1} \begin{bmatrix} \hat\Sigma_F^{(t)} & (\hat\Sigma_M^{(t)})^T \end{bmatrix},$$

where the second terms of both expressions are of low rank when the size of the latent FVS is small.
This formulation is the most intuitive and simple, but a naive implementation of Algorithm 3 has
complexity O(n³) per iteration, where the bottleneck is inverting the full matrices $\hat J^{(t)}$ and $\Sigma^{(t+1)}$.
By carefully incorporating the inference algorithms into the projection steps, we are able to further
exploit the power of the models and reduce the per-iteration complexity to O(kn² + n² log n), which
is the same as the complexity of the conditioned Chow-Liu algorithm alone. We have the following
proposition.
Proposition 2. Using Algorithm 3, the objective function of (3) decreases with the number of iterations, i.e., $D_{\mathrm{KL}}(N(0, \hat\Sigma_T) \,\|\, N(0, \Sigma_T^{(t+1)})) \le D_{\mathrm{KL}}(N(0, \hat\Sigma_T) \,\|\, N(0, \Sigma_T^{(t)}))$. Using an accelerated version
of Algorithm 3, the complexity per iteration is O(kn² + n² log n).
Due to the page limit, we defer the description of the accelerated version (the accelerated latent
Chow-Liu algorithm) and the proof of Proposition 2 to Appendix C. In fact, we never need to explicitly invert the empirical covariance matrix $\hat\Sigma_T$ in the accelerated version.
As a rule of thumb, we often use the spanning tree obtained by the standard Chow-Liu algorithm as
an initial tree among the observed nodes. But note that P2 involves solving a combinatorial problem
exactly, so the algorithm is able to jump among different graph structures which reduces the chance
[Figure 2 panels, left to right: fBM true model (KL=0); best spanning tree (KL=4.055); CLRG (KL=4.007); NJ (KL=8.974); 1-FVS (KL=1.881).]
Figure 2: From left to right: 1) The true model (fBM with 64 time samples); 2) The best spanning
tree; 3) The latent tree learned using the CLRG algorithm in [21]; 4) The latent tree learned using
the NJ algorithm in [21]; 5) The model with a size-one latent FVS learned using Algorithm 3. The
gray scale is normalized for visual clarity.
[Figure 3: four panels, (a) 32 nodes, (b) 64 nodes, (c) 128 nodes, (d) 256 nodes, each plotting K-L divergence against the size of the latent FVS.]
Figure 3: The relationship between the K-L divergence and the latent FVS size. All models are
learned using Algorithm 3 with 40 iterations.
of getting stuck at a bad local minimum and gives us much more flexibility in initializing graph
structures. In the experiments, we will demonstrate that Algorithm 3 is not sensitive to the initial
graph structure.
5 Experiments
In this section, we present experimental results for learning GGMs with small FVSs, observed or
latent, using both synthetic data and real data of flight delays.
Fractional Brownian Motion: Latent FVS We consider a fractional Brownian motion (fBM)
with Hurst parameter H = 0.2 defined on the time interval (0, 1]. The covariance function is
$\gamma(t_1, t_2) = \frac{1}{2}(|t_1|^{2H} + |t_2|^{2H} - |t_1 - t_2|^{2H})$. Figure 2 shows the covariance matrices of approximate models using spanning trees (learned by the Chow-Liu algorithm), latent trees (learned by
the CLRG and NJ algorithms in [21]) and our latent FVS model (learned by Algorithm 3) using 64
time samples (nodes). We can see that in the spanning tree the correlation decays quickly (in fact
exponentially) with distance, which models the fBM poorly. The latent trees that are learned exhibit
blocky artifacts and have little or no improvement over the spanning tree measured in the K-L divergence. In Figure 3, we plot the K-L divergence (between the true model and the learned models
using Algorithm 3) versus the size of the latent FVSs for models with 32, 64, 128, and 256 time
samples respectively. For these models, we need about 1, 3, 5, and 7 feedback nodes respectively
to reduce the K-L divergence to 25% of that achieved by the best spanning tree model. Hence, we
speculate that empirically k = O(log n) is a proper choice of the size of the latent FVS. We also
study the sensitivity of Algorithm 3 to the initial graph structure. In our experiments, for different
initial structures, Algorithm 3 converges to the same graph structures (that give the K-L divergence
as shown in Figure 3) within three iterations.
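The fBM covariance used in this experiment is straightforward to reproduce; a pure-Python sketch on a small hypothetical grid of 4 time samples:

```python
def fbm_cov(t1, t2, H=0.2):
    # gamma(t1, t2) = 0.5 * (|t1|^{2H} + |t2|^{2H} - |t1 - t2|^{2H})
    return 0.5 * (abs(t1) ** (2 * H) + abs(t2) ** (2 * H) - abs(t1 - t2) ** (2 * H))

n = 4
ts = [(i + 1) / n for i in range(n)]          # grid on the interval (0, 1]
Sigma = [[fbm_cov(a, b) for b in ts] for a in ts]
```

The diagonal entries equal $|t|^{2H}$, and the strong long-range dependence of this covariance is exactly what a spanning tree, with its exponentially decaying correlations, fails to capture.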
Performance of the Greedy Algorithm: Observed FVS In this experiment, we examine the
performance of the greedy algorithm (Algorithm 2) when the FVS nodes are observed. For each run,
we construct a GGM that has 20 nodes and an FVS of size three as the true model. We first generate
a random spanning tree among the non-feedback nodes. Then the corresponding information matrix
J is also randomly generated: non-zero entries of J are drawn i.i.d. from the uniform distribution
$U[-1, 1]$ with a multiple of the identity matrix added to ensure $J \succ 0$. From each generated
GGM, we draw 1000 samples and use Algorithm 2 to learn the model. For 100 runs that we have
performed, we recover the true graph structures successfully. Figure 4 shows the graphs (and the
K-L divergence) obtained using the greedy algorithm for a typical run. We can see that we have the
most divergence reduction (from 12.7651 to 1.3832) when the first feedback node is selected. When
the size of the FVS increases to three (Figure 4e), the graph structure is recovered correctly.
[Figure 4 panels: (a) True Model; (b) KL=12.7651; (c) KL=1.3832; (d) KL=0.6074; (e) KL=0.0048.]
Figure 4: Learning a GGM using Algorithm 2. The thicker blue lines represent the edges among
the non-feedback nodes and the thinner red lines represent other edges. (a) True model; (b) Tree-structured model (0-FVS) learned from samples; (c) 1-FVS model; (d) 2-FVS model; (e) 3-FVS
model.
[Figure 5 panels: (a) Spanning Tree; (b) 1-FVS GGM; (c) 3-FVS GGM; (d) 10-FVS GGM.]
Figure 5: GGMs for modeling flight delays. The red dots denote selected feedback nodes and the
blue lines represent edges among the non-feedback nodes (other edges involving the feedback nodes
are omitted for clarity).
Flight Delay Model: Observed FVS In this experiment, we model the relationships among airports for flight delays. The raw dataset comes from RITA of the Bureau of Transportation Statistics.
It contains flight information in the U.S. from 1987 to 2008 including information such as scheduled
departure time, scheduled arrival time, departure delay, arrival delay, cancellation, and reasons for
cancellation for all domestic flights in the U.S. We want to model how the flight delays at different
airports are related to each other using GGMs. First, we compute the average departure delay for
each day and each airport (of the top 200 busiest airports) using data from the year 2008. Note that
the average departure delays do not directly indicate whether an airport is one of the major airports
that has heavy traffic. It is interesting to see whether major airports (especially those notorious for
delays) correspond to feedback nodes in the learned models. Figure 5a shows the best tree-structured
graph obtained by the Chow-Liu algorithms (with input being the covariance matrix of the average
delay). Figures 5b-5d show the GGMs learned using Algorithm 2. It is interesting that the first node
selected is Nashville (BNA), which is not one of the top 'hubs' of the air system. The reason is
that much of the statistical relationships related to those hubs are approximated well enough, when
we consider a 1-FVS approximation, by a spanning tree (excluding BNA) and it is the breaking of
the cycles involving BNA that provide the most reduction in K-L divergence over a spanning tree.
Starting with the next node selected in our greedy algorithm, we begin to see hubs being chosen.
In particular, the first ten airports selected in order are: BNA, Chicago, Atlanta, Oakland, Newark,
Dallas, San Francisco, Seattle, Washington DC, Salt Lake City. Several major airports on the coasts
(e.g., Los Angeles and JFK) are not selected, as their influence on delays at other domestic airports
is well-captured with a tree structure.
6 Future Directions
Our experimental results demonstrate the potential of these algorithms and, as in the work [14],
suggest that choosing FVSs of size O(log n) works well, leading to algorithms which can be scaled
to large problems. Providing theoretical guarantees for this scaling (e.g., by specifying classes of
models for which such a size FVS provides asymptotically accurate models) is thus a compelling
open problem. In addition, incorporating complexity into the FVS-order problem (e.g., as in AIC
or BIC) is another direction we are pursuing. Moreover, we are also working towards extending our
results to the non-Gaussian settings.
Acknowledgments
This research was supported in part by AFOSR under Grant FA9550-12-1-0287.
References
[1] J. Pearl, "A constraint propagation approach to probabilistic reasoning," Proc. Uncertainty in Artificial Intell. (UAI), 1986.
[2] C. Chow and C. Liu, "Approximating discrete probability distributions with dependence trees," IEEE Trans. Inform. Theory, vol. 14, no. 3, pp. 462-467, 1968.
[3] M. Choi, V. Chandrasekaran, and A. Willsky, "Exploiting sparse Markov and covariance structure in multiresolution models," in Proc. 26th Annu. Int. Conf. on Machine Learning. ACM, 2009, pp. 177-184.
[4] M. Comer and E. Delp, "Segmentation of textured images using a multiresolution Gaussian autoregressive model," IEEE Trans. Image Process., vol. 8, no. 3, pp. 408-420, 1999.
[5] C. Bouman and M. Shapiro, "A multiscale random field model for Bayesian image segmentation," IEEE Trans. Image Process., vol. 3, no. 2, pp. 162-177, 1994.
[6] D. Karger and N. Srebro, "Learning Markov networks: Maximum bounded tree-width graphs," in Proc. 12th Annu. ACM-SIAM Symp. on Discrete Algorithms, 2001, pp. 392-401.
[7] M. Jordan, "Graphical models," Statistical Sci., pp. 140-155, 2004.
[8] P. Abbeel, D. Koller, and A. Ng, "Learning factor graphs in polynomial time and sample complexity," J. Machine Learning Research, vol. 7, pp. 1743-1788, 2006.
[9] A. Dobra, C. Hans, B. Jones, J. Nevins, G. Yao, and M. West, "Sparse graphical models for exploring gene expression data," J. Multivariate Anal., vol. 90, no. 1, pp. 196-212, 2004.
[10] M. Tipping, "Sparse Bayesian learning and the relevance vector machine," J. Machine Learning Research, vol. 1, pp. 211-244, 2001.
[11] J. Friedman, T. Hastie, and R. Tibshirani, "Sparse inverse covariance estimation with the graphical lasso," Biostatistics, vol. 9, no. 3, pp. 432-441, 2008.
[12] P. Ravikumar, G. Raskutti, M. Wainwright, and B. Yu, "Model selection in Gaussian graphical models: High-dimensional consistency of l1-regularized MLE," Advances in Neural Information Processing Systems (NIPS), vol. 21, 2008.
[13] V. Vazirani, Approximation Algorithms. New York: Springer, 2004.
[14] Y. Liu, V. Chandrasekaran, A. Anandkumar, and A. Willsky, "Feedback message passing for inference in Gaussian graphical models," IEEE Trans. Signal Process., vol. 60, no. 8, pp. 4135-4150, 2012.
[15] N. Friedman, D. Geiger, and M. Goldszmidt, "Bayesian network classifiers," Machine Learning, vol. 29, no. 2, pp. 131-163, 1997.
[16] V. Chandrasekaran, P. A. Parrilo, and A. S. Willsky, "Latent variable graphical model selection via convex optimization," in Communication, Control, and Computing (Allerton), 2010 48th Annual Allerton Conference on. IEEE, 2010, pp. 1610-1613.
[17] M. Dinneen, K. Cattell, and M. Fellows, "Forbidden minors to graphs with small feedback sets," Discrete Mathematics, vol. 230, no. 1, pp. 215-252, 2001.
[18] F. Brandt, "Minimal stable sets in tournaments," J. Econ. Theory, vol. 146, no. 4, pp. 1481-1499, 2011.
[19] V. Bafna, P. Berman, and T. Fujito, "A 2-approximation algorithm for the undirected feedback vertex set problem," SIAM J. Discrete Mathematics, vol. 12, p. 289, 1999.
[20] S. Kirshner, P. Smyth, and A. W. Robertson, "Conditional Chow-Liu tree structures for modeling discrete-valued vector time series," in Proceedings of the 20th Conference on Uncertainty in Artificial Intelligence. AUAI Press, 2004, pp. 317-324.
[21] M. J. Choi, V. Y. Tan, A. Anandkumar, and A. S. Willsky, "Learning latent tree graphical models," Journal of Machine Learning Research, vol. 12, pp. 1729-1770, 2011.
Global MAP-Optimality by Shrinking the
Combinatorial Search Area with Convex Relaxation
Bogdan Savchynskyy1
Jörg Kappes2
Paul Swoboda2
Christoph Schnörr1,2
1
Heidelberg Collaboratory for Image Processing, Heidelberg University, Germany
[email protected]
2
Image and Pattern Analysis Group, Heidelberg University, Germany
{kappes,swoboda,schnoerr}@math.uni-heidelberg.de
Abstract
We consider energy minimization for undirected graphical models, also known as
the MAP-inference problem for Markov random fields. Although combinatorial
methods, which return a provably optimal integral solution of the problem, made a
significant progress in the past decade, they are still typically unable to cope with
large-scale datasets. On the other hand, large-scale datasets are often defined on
sparse graphs, and convex relaxation methods, such as linear programming relaxations, then provide good approximations to integral solutions.
We propose a novel method of combining combinatorial and convex programming techniques to obtain a global solution of the initial combinatorial problem.
Based on the information obtained from the solution of the convex relaxation, our
method confines application of the combinatorial solver to a small fraction of the
initial graphical model, which allows us to solve much larger problems optimally.
We demonstrate the efficacy of our approach on a computer vision energy minimization benchmark.
1 Introduction
The focus of this paper is energy minimization for Markov random fields. In the most common
pairwise case this problem reads
$$\min_{x\in X_G} E_{G,\theta}(x) := \min_{x\in X_G}\Big[\sum_{v\in V_G}\theta_v(x_v) \;+\; \sum_{uv\in E_G}\theta_{uv}(x_u,x_v)\Big], \qquad (1)$$
where G = (V_G, E_G) denotes an undirected graph with the set of nodes V_G ∋ v and the set of
edges E_G ∋ uv; variables x_v belong to the finite label sets X_v, v ∈ V_G; potentials θ_v : X_v → ℝ,
θ_uv : X_u × X_v → ℝ, v ∈ V_G, uv ∈ E_G, are associated with the nodes and the edges of G respectively.
We denote by X_G the Cartesian product ∏_{v ∈ V_G} X_v.
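To make the objects in (1) concrete, here is a minimal pure-Python sketch (all numbers are illustrative toy values, not taken from the paper) that evaluates E_{G,θ}(x) for a 3-node chain and finds the MAP labeling by exhaustive enumeration:

```python
import itertools

# Hypothetical toy instance of problem (1): a 3-node chain with 2 labels per node.
V = [0, 1, 2]
E = [(0, 1), (1, 2)]
X = {v: [0, 1] for v in V}                     # label sets X_v
theta_v = {0: [0.0, 2.0], 1: [1.0, 0.5], 2: [0.3, 1.0]}
theta_uv = {(0, 1): [[0.0, 1.0], [1.0, 0.0]],  # Potts-like pairwise terms
            (1, 2): [[0.0, 1.0], [1.0, 0.0]]}

def energy(x):
    """E_{G,theta}(x) = sum_v theta_v(x_v) + sum_uv theta_uv(x_u, x_v)."""
    return (sum(theta_v[v][x[v]] for v in V)
            + sum(theta_uv[(u, v)][x[u]][x[v]] for (u, v) in E))

def brute_force_map():
    """Exhaustive minimization over X_G = prod_v X_v (exponential, toy sizes only)."""
    return min((dict(zip(V, xs)) for xs in itertools.product(*(X[v] for v in V))),
               key=energy)

x_star = brute_force_map()
```

Exhaustive enumeration is of course exactly what the NP-hardness of (1) rules out at scale; it only serves here to define the objective the paper's method optimizes.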
Problem (1) is known to be NP-hard in general, hence existing methods either consider its convex
relaxations or/and apply combinatorial techniques such as branch-and-bound, combinatorial search,
cutting plane etc. on top of convex relaxations. The main contribution of this paper is a novel
method to combine convex and combinatorial approaches to compute a provably optimal solution.
The method is very general in the sense that it is not restricted to a specific convex programming
or combinatorial algorithm, although some algorithms are more preferable than others. The main
restriction of the method is the neighborhood structure of the graph G: it has to be sparse. Basic grid
graphs of image data provide examples satisfying this requirement. The method is applicable also to
higher-order problems, defined on so-called factor graphs [1]; however, we will concentrate mainly
on the pairwise case to keep our exposition simple.
Underlying idea. Fig. 1 demonstrates the main idea of our method. Let A and B be two subgraphs
covering G. Select them so that the only common nodes of these subgraphs lie on their mutual border
[Figure 1 schematic: subgraphs A\∂A and B\∂A with shared border ∂A; solve A and B separately → check consistency on ∂A → on label mismatch, increase B]
Figure 1: Underlying idea of the proposed method: the initial graph is split into two subgraphs A
(blue+yellow) and B (red+yellow), assigned to a convex and a combinatorial solver respectively. If
the integral solutions provided by both solvers do not coincide on the common border ∂A (yellow)
of the two subgraphs, the subgraph B is increased by appending mismatching nodes (green) and the
border is adjusted respectively.
∂A (= ∂B) defined in terms of the master graph G. Let x*_A and x*_B be optimal labelings computed
independently on A and B. If these labelings coincide on the border ∂A, then under some additional
conditions the concatenation of x*_A and x*_B is an optimal labeling for the initial problem (1), as we
show in Section 3 (see Theorem 1).
We select the subgraph A such that it contains a "simple" part of the problem, for which the convex
relaxation is tight. This part is assigned to the respective convex program solver. The subgraph
B contains, in contrast, the difficult combinatorial subproblem and is assigned to a combinatorial
solver. If the labelings x*_A and x*_B do not coincide on some border node v ∈ ∂A, we (i) increase the
subgraph B by appending the node v and the edges from v to B, (ii) correspondingly decrease A, and
(iii) recompute x*_A and x*_B. This process is repeated until either the labelings coincide on
the border or B equals G. The sparsity of G is required to avoid fast growth of the subgraph B.
We refer to Section 3 for a detailed description of the algorithm, where we in particular specify the
initial selection of the subgraphs A and B and the methods for (i) encouraging consistency of x*_A
and x*_B on the boundary ∂A and (ii) providing equivalent results with just a single run of the convex
relaxation solver. These techniques will be described for the local polytope relaxation, known also
as a linear programming relaxation of (1) [2, 3].
Related work. The literature on problem (1) is very broad, both regarding convex programming and
combinatorial methods. Here we will concentrate on the local polytope relaxation, that is essential
to our approach.
The local polytope relaxation (LP) of (1) was proposed and analyzed in [4] (see also the recent
review [2]). An alternative view on the same relaxation was proposed in [5]. This view appeared to
be very close to the idea of the Lagrangian or dual decomposition technique (see [6] for applications
to (1)). This idea stimulated development of efficient solvers for convex relaxations of (1). Scalable
solvers for the LP relaxation became a hot topic in recent years [7–14]. However, the algorithms
which guarantee attainment of the optimum of the convex relaxation, at least theoretically, are quite
slow in practice, see e.g. comparisons in [11, 15]. Remarkably, the fastest scalable algorithms
for convex relaxations are based on coordinate descent: the diffusion algorithm [2] known from
the seventies and especially its dual decomposition based variant TRW-S [16]. There are other
closely related methods [17, 18] based on the same principle. Although these algorithms do not
guarantee attainment of the optimum, they converge [19] to points fulfilling a condition known as
arc consistency [2] or weak tree agreement [16]. We show in Section 3 that this condition plays a
significant role for our approach. It is a common observation that in the case of sparse graphs and/or
strong evidence of the unary terms θ_v, v ∈ V_G, the approximate solutions delivered by such solvers
are quite good from the practical viewpoint. The belief, that these solutions are close to optimal
ones is evidenced by numerical bounds, which these solvers provide as a byproduct.
The techniques used in combinatorial solvers specialized to problem (1) include most of the classical tools: cutting plane, combinatorial search and branch-and-bound methods were adapted to the
problem (1). The ideas of the cutting plane method form the basis for tightening the LP relaxation
within the dual decomposition framework (see the recent review [20] and references therein) and
for finding an exact solution for Potts models [21], which is a special class of problem (1). Combinatorial search methods with dynamic programming based heuristics were successfully applied
to problems defined on dense and fully connected but small graphs [22]. The specialized branch-and-bound solvers [23, 24] also use convex (mostly LP) relaxations and/or a dynamic programming
technique to produce bounds in the course of the combinatorial search [25]. However the reported
applicability of most combinatorial solvers nowadays is limited to small graphs. Specialized solvers
like [21] scale much better, but are focused on a certain narrow class of problems.
The goal of this work is to employ the fact, that local polytope solvers provide good approximate
solutions and to restrict computational efforts of combinatorial solvers to a relatively small, and
hence tractable part of the initial problem.
Contribution. We propose a novel method for obtaining a globally optimal solution of the energy
minimization problem (1) for sparse graphs and demonstrate its performance on a series of large-scale benchmark datasets. We were able to
• solve previously unsolved large-scale problems of several different types, and
• attain optimal solutions of hard instances of Potts models an order of magnitude faster than
specialized state-of-the-art algorithms [21].
For an evaluation of our method we use datasets from the very recent benchmark [15].
Paper structure. In Section 2 we provide the definitions for the local polytope relaxation and arc
consistency. Section 3 is devoted to the specification of our algorithm. In Sections 4 and 5 we
provide results of the experimental evaluation and conclusions.
2 Preliminaries
Notation. A vector x with coordinates x_v, v ∈ V_G, will be called a labeling and its coordinates
x_v ∈ X_v labels. The notation x|_W, W ⊆ V_G, stands for the restriction of x to the subset W, i.e.
for the subvector (x_v, v ∈ W). To shorten notation we will sometimes write x_uv ∈ X_uv in place
of (x_v, x_u) ∈ X_v × X_u for (v, u) ∈ E_G. Let also nb(v), v ∈ V_G, denote the set of neighbors of
node v, that is the set {u ∈ V_G : uv ∈ E_G}.
LP relaxation. The local polytope relaxation of (1) reads (see e.g. [2])
$$\min_{\mu \ge 0} \;\; \sum_{v \in V_G} \sum_{x_v \in X_v} \theta_v(x_v)\,\mu_v(x_v) \;+\; \sum_{uv \in E_G} \sum_{(x_u,x_v) \in X_{uv}} \theta_{uv}(x_u,x_v)\,\mu_{uv}(x_u,x_v) \qquad (2)$$
$$\text{s.t.}\quad \sum_{x_v \in X_v} \mu_v(x_v) = 1, \;\; v \in V_G;$$
$$\qquad\;\; \sum_{x_v \in X_v} \mu_{uv}(x_u,x_v) = \mu_u(x_u), \;\; x_u \in X_u,\; uv \in E_G;$$
$$\qquad\;\; \sum_{x_u \in X_u} \mu_{uv}(x_u,x_v) = \mu_v(x_v), \;\; x_v \in X_v,\; uv \in E_G.$$
This formulation is based on the overcomplete representation of indicator vectors μ constrained
to the local polytope commonly used for discrete graphical models [3]. It is well-known that the
local polytope constitutes an outer bound (relaxation) of the convex hull of all indicator vectors of
labelings (marginal polytope; cf. [3]).
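As a concrete illustration of this overcomplete representation, the following sketch (pure Python, illustrative toy numbers) builds the indicator vectors μ of a labeling x and checks that the linear objective of (2) evaluated at μ reproduces the energy E_{G,θ}(x), and that indicator vectors satisfy the marginalization constraints:

```python
# Toy instance (hypothetical numbers): indicator vectors mu of a labeling x
# are feasible for (2), and the LP objective at mu equals E_{G,theta}(x).
V = [0, 1]
E = [(0, 1)]
L = [0, 1]
theta_v = {0: [0.0, 2.0], 1: [1.0, 0.5]}
theta_uv = {(0, 1): [[0.0, 1.0], [1.0, 0.0]]}

def indicators(x):
    """Overcomplete 0/1 representation of a labeling x."""
    mu_v = {v: [1.0 if x[v] == s else 0.0 for s in L] for v in V}
    mu_uv = {(u, v): [[1.0 if (x[u], x[v]) == (su, sv) else 0.0 for sv in L]
                      for su in L] for (u, v) in E}
    return mu_v, mu_uv

def lp_objective(mu_v, mu_uv):
    """Linear objective of the relaxation (2)."""
    return (sum(theta_v[v][s] * mu_v[v][s] for v in V for s in L)
            + sum(theta_uv[e][su][sv] * mu_uv[e][su][sv]
                  for e in E for su in L for sv in L))

x = {0: 0, 1: 1}
mu_v, mu_uv = indicators(x)
# one family of the marginalization constraints of (2), checked numerically:
consistent = all(abs(sum(mu_uv[(u, v)][su][sv] for sv in L) - mu_v[u][su]) < 1e-12
                 for (u, v) in E for su in L)
```

The local polytope relaxes the integrality of these indicator vectors to μ ≥ 0 while keeping the linear constraints above, which is exactly what an off-the-shelf LP solver would be given.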
The Lagrange dual of (2) reads

$$\max_{\xi,\phi} \;\; \sum_{v\in V_G} \xi_v \;+\; \sum_{uv\in E_G} \xi_{uv} \qquad (3)$$
$$\text{s.t.}\quad \xi_v \le \hat\theta^{\phi}_v(x_v) := \theta_v(x_v) - \sum_{u\in \mathrm{nb}(v)} \phi_{v,u}(x_v), \qquad v\in V_G,\; x_v\in X_v,$$
$$\qquad\;\; \xi_{uv} \le \hat\theta^{\phi}_{uv}(x_u,x_v) := \theta_{uv}(x_u,x_v) + \phi_{v,u}(x_v) + \phi_{u,v}(x_u), \qquad uv\in E_G,\; (x_u,x_v)\in X_{uv}.$$
In the constraints of (3) we introduced the reparametrized potentials θ̂^φ. One can see that for any
values of the dual variables φ the reparametrized energy E_{θ̂^φ,G}(x) is equal to the non-reparametrized
one E_{θ,G}(x) for any labeling x ∈ X_G. The objective function of the dual problem is equal to

$$D(\phi) := \sum_{v\in V_G} \hat\theta^{\phi}_v(x'_v) + \sum_{uv\in E_G} \hat\theta^{\phi}_{uv}(x'_{uv}), \qquad \text{where } x'_w \in \arg\min_{x_w} \hat\theta^{\phi}_w(x_w)$$

for w a node or an edge. A reparametrization, that is, the reparametrized potentials θ̂^φ, will be called
optimal if the corresponding φ is a solution of the dual problem (3). In general, neither the optimal φ
nor the optimal reparametrization is unique.
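The invariance of the energy under reparametrization can be checked numerically. The sketch below (toy instance, illustrative numbers; names such as `phi` and `hat_v` are my own) builds θ̂^φ from random messages φ and verifies E_{G,θ̂^φ}(x) = E_{G,θ}(x) for every labeling:

```python
import itertools
import random

# Toy 2-node model with one edge; labels are {0, 1}.
V = [0, 1]
E = [(0, 1)]
L = [0, 1]
theta_v = {0: [0.0, 2.0], 1: [1.0, 0.5]}
theta_uv = {(0, 1): [[0.0, 1.0], [1.0, 0.0]]}

random.seed(0)
# One message phi_{v,u}(x_v) per directed node-edge pair (v, u).
phi = {(v, u): [random.uniform(-1, 1) for _ in L]
       for (a, b) in E for (v, u) in [(a, b), (b, a)]}

nb = {0: [1], 1: [0]}  # neighbor lists (hard-coded for the toy graph)
# hat_theta_v(x_v) = theta_v(x_v) - sum_{u in nb(v)} phi_{v,u}(x_v)
hat_v = {v: [theta_v[v][x] - sum(phi[(v, u)][x] for u in nb[v]) for x in L]
         for v in V}
# hat_theta_uv(x_u, x_v) = theta_uv(x_u, x_v) + phi_{v,u}(x_v) + phi_{u,v}(x_u)
hat_uv = {(u, v): [[theta_uv[(u, v)][xu][xv] + phi[(v, u)][xv] + phi[(u, v)][xu]
                    for xv in L] for xu in L] for (u, v) in E}

def energy(th_v, th_uv, x):
    return (sum(th_v[v][x[v]] for v in V)
            + sum(th_uv[(u, v)][x[u]][x[v]] for (u, v) in E))

# every message is subtracted once from a unary and added once to the incident
# pairwise term, so the total energy is unchanged for all labelings:
invariant = all(abs(energy(theta_v, theta_uv, x) - energy(hat_v, hat_uv, x)) < 1e-9
                for x in (dict(zip(V, xs)) for xs in itertools.product(L, repeat=2)))
```

Although the individual terms θ̂^φ differ from θ, the energies agree labeling by labeling, which is the property the paper exploits throughout.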
Definition 1 (Strict arc consistency). We will call the node v ∈ V_G strictly arc consistent w.r.t.
potentials θ if there exist labels x⁰_v ∈ X_v and x⁰_u ∈ X_u for all u ∈ nb(v) such that θ_v(x⁰_v) < θ_v(x_v)
for all x_v ∈ X_v \ {x⁰_v} and θ_vu(x⁰_v, x⁰_u) < θ_vu(x_v, x_u) for all (x_v, x_u) ∈ X_vu \ {(x⁰_v, x⁰_u)}. The label
x⁰_v will be called locally optimal.
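Definition 1 transcribes directly into a small checker. This is an illustrative sketch (my own function and argument names; `theta_v` is a list over labels and `theta_uv[u]` a matrix indexed `[x_v][x_u]` for each neighbor u of v):

```python
def strictly_arc_consistent(nb_v, theta_v, theta_uv, labels):
    """Return the locally optimal label x0_v if the node is strictly arc
    consistent w.r.t. the given potentials (Definition 1), else None."""
    x0_v = min(labels, key=lambda x: theta_v[x])
    if any(theta_v[x] <= theta_v[x0_v] for x in labels if x != x0_v):
        return None  # unary minimum is not strict
    for u in nb_v:
        pair = theta_uv[u]
        x0_u = min(labels, key=lambda xu: pair[x0_v][xu])
        if any(pair[xv][xu] <= pair[x0_v][x0_u]
               for xv in labels for xu in labels if (xv, xu) != (x0_v, x0_u)):
            return None  # pairwise minimum is not strict or not at (x0_v, x0_u)
    return x0_v
```

Running it over all nodes of a reparametrized model yields exactly the initial split into the "easy" subgraph A (consistent nodes) and the combinatorial remainder B used in step (2) of the algorithm below.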
If all nodes v ∈ V_G are strictly arc consistent w.r.t. the potentials θ̂^φ, the dual objective value D(φ)
becomes equal to the energy

$$D(\phi) = E_{G,\hat\theta^{\phi}}(x^0) = E_{G,\theta}(x^0) \qquad (4)$$

of the labeling x⁰ constructed from the corresponding locally optimal labels. From duality it follows
that D(φ) is a lower bound on the energies of all labelings E_{G,θ}(x), x ∈ X_G. Hence attainment of
equality (4) shows that (i) φ is a solution of the dual problem (3) and (ii) x⁰ is a solution of both
the energy minimization problem (1) and its relaxation (2).
Strict arc consistency of all nodes is sufficient, but not necessary for attaining the optimum of the
dual objective (3). Its fulfillment means that our LP relaxation is tight, which is not always the
case. However, in many practical cases the optimal reparametrization φ yields strict arc
consistency for a significant portion of, but not all, graph nodes. The remaining non-consistent part is
often much smaller and consists of many separate "islands". The strict arc consistency of a certain
node v, even for the optimally reparametrized potentials θ̂^φ, does not guarantee global optimality
of the corresponding locally optimal label xv (unless it holds for all nodes), though it is a good and
widely used heuristic to obtain an approximate solution of the non-relaxed problem (1). In this work
we provide an algorithm which is able to either prove or disprove this optimality. The algorithm applies
combinatorial optimization techniques only to the arc inconsistent part of the model, which is often
much smaller than the whole model in applications.
Remark 1. Efficient dual decomposition based algorithms optimize dual functions which differ
from (4) (see e.g. [6, 13, 16]), but are equivalent to it in the sense of equal optimal values. Obtaining
the reparametrizations θ̂^φ is less straightforward in these cases, but they can be computed efficiently
(see e.g. [16, Sec. 2.2]).
3 Algorithm description
The graph A = (V_A, E_A) will be called an (induced) subgraph of the graph G = (V_G, E_G) if
V_A ⊆ V_G and E_A = {uv ∈ E_G : u, v ∈ V_A}. The graph G will be called a supergraph of A. The
subgraph ∂A, induced by the set of nodes V_∂A of the graph A which are connected to V_G \ V_A, is
called its boundary w.r.t. G, i.e. V_∂A = {v ∈ V_A : ∃ uv ∈ E_G with u ∈ V_G \ V_A}. The complement B
to A \ ∂A, given by V_B = {v ∈ V_G : v ∈ V_∂A ∪ (V_G \ V_A)}, E_B = {uv ∈ E_G : u, v ∈ V_B}, is called
the boundary complement to A w.r.t. the graph G. Let A be a subgraph of G and let potentials θ_v, v ∈ V_G,
and θ_uv, uv ∈ E_G, be associated with the nodes and edges of G respectively. We assume that θ_v, v ∈ V_A,
and θ_uv, uv ∈ E_A, are associated with the subgraph A. Hence we consider the energy function E_{A,θ} to
be defined on A, together with an optimal labeling on A, which is one that minimizes E_{A,θ}.
The following theorem formulates conditions sufficient to produce an optimal labeling x* on the
graph G from the optimal labelings on its mutually boundary-complement subgraphs A and B.

Theorem 1. Let A be a subgraph of G and B be its boundary complement w.r.t. G. Let x*_A and
x*_B be labelings minimizing E_{A,θ} and E_{B,θ} respectively, and let all nodes v ∈ V_A be strictly arc
consistent w.r.t. potentials θ. Then from

$$x^*_{A,v} = x^*_{B,v} \;\text{ for all } v \in V_{\partial A} \qquad (5)$$

it follows that the labeling x* with coordinates

$$x^*_v = \begin{cases} x^*_{A,v}, & v \in V_A,\\ x^*_{B,v}, & v \in V_B \setminus V_A, \end{cases} \qquad v \in V_G,$$

is optimal on G.
Proof. Let θ denote the potentials of the problem. Let us define other potentials θ' by

$$\theta'_w(x_w) := \begin{cases} 0, & w \in V_{\partial A} \cup E_{\partial A},\\ \theta_w(x_w), & w \notin V_{\partial A} \cup E_{\partial A}. \end{cases}$$

Then E_{G,θ}(x) = E_{A,θ'}(x|_A) + E_{B,θ}(x|_B). From strict arc consistency of θ over A it directly follows
that E_{A,θ'}(x*_A) = min_{x_A} E_{A,θ'}(x_A). From this follows

$$\min_x E_{G,\theta}(x) = \Big\{\min_{x_A, x_B} E_{A,\theta'}(x_A) + E_{B,\theta}(x_B) \;\text{ s.t. } x_A|_{\partial A} = x_B|_{\partial A}\Big\}$$
$$= \min_{x^0_{\partial A}} \Big[\min_{x_A:\, x_A|_{\partial A} = x^0_{\partial A}} E_{A,\theta'}(x_A) \;+ \min_{x_B:\, x_B|_{\partial A} = x^0_{\partial A}} E_{B,\theta}(x_B)\Big]$$
$$\ge \min_{x_A} E_{A,\theta'}(x_A) + \min_{x_B} E_{B,\theta}(x_B) = E_{A,\theta'}(x^*_A) + E_{B,\theta}(x^*_B) = E_{G,\theta}(x^*).$$

Since trivially min_x E_{G,θ}(x) ≤ E_{G,θ}(x*), equality holds throughout and x* is optimal. ∎

Algorithm 1
(1) Solve the LP and reparametrize (G, θ) → (G, θ̂^φ).
(2) Initialize: (A, θ̂^φ) and x*_{A,v} from the arc consistent nodes.
(3) repeat
      Set B as the boundary complement to A.
      Compute an optimal labeling x*_B on B.
      If x*_A|_{∂A} = x*_B|_{∂A}, return.
      Else set C := {v ∈ V_{∂A} : x*_{A,v} ≠ x*_{B,v}}, A := A \ C.
    until C = ∅
Now we are ready to transform the idea described in the introduction into Algorithm 1.
Step (1). As a first step of the algorithm we run an LP solver for the dual problem (3) on the
whole graph G. The output of the algorithm is the reparametrization θ̂^φ of the initial problem.
Since well-scalable algorithms for the dual problem (3) attain the optimum only in the limit after a
potentially infinite number of iterations, we cannot afford to solve it exactly. Fortunately, it is not
needed to do so and it is enough to get only a sufficiently good approximation. We will return to
this point at the end of this section.
Step (2). We assign to the set V_A the nodes of the graph G which satisfy the strict arc consistency
condition. The optimal labeling on A can be trivially computed from the reparametrized unary
potentials θ̂^φ_v as x*_{A,v} := arg min_{x_v ∈ X_v} θ̂^φ_v(x_v), v ∈ V_A.
Step (3). We define B as the boundary complement to A w.r.t. the master graph G and find an
optimal labeling x*_B on the subgraph B with a combinatorial solver. If the boundary condition (5)
holds, we have found the optimal labeling according to Theorem 1. Otherwise we remove the nodes
where this condition fails from A and repeat the whole step until either (5) holds or B = G.
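The repeat-loop of step (3) can be sketched in a few lines of Python. This is an illustrative stand-in, not the authors' implementation: brute-force enumeration replaces the combinatorial (ILP) solver, and the arc-consistent set A with its labeling x*_A is assumed to be handed over by steps (1)-(2):

```python
import itertools

def solve_exact(nodes, edges, theta_v, theta_uv, labels):
    """Brute-force MAP on the subgraph induced by `nodes` (stand-in for the
    combinatorial/ILP solver; exponential, toy sizes only)."""
    nodes = sorted(nodes)
    node_set = set(nodes)
    sub_edges = [(u, v) for (u, v) in edges if u in node_set and v in node_set]
    best, best_e = None, float("inf")
    for xs in itertools.product(labels, repeat=len(nodes)):
        x = dict(zip(nodes, xs))
        e = (sum(theta_v[v][x[v]] for v in nodes)
             + sum(theta_uv[(u, v)][x[u]][x[v]] for (u, v) in sub_edges))
        if e < best_e:
            best, best_e = x, e
    return best

def algorithm1_loop(V, E, theta_v, theta_uv, labels, A, xA):
    """Step (3) of Algorithm 1. A and xA stand in for the arc-consistent set
    and its labeling produced by the LP presolve of steps (1)-(2)."""
    A = set(A)
    while True:
        boundary = {v for v in A for (u, w) in E
                    if (u == v and w not in A) or (w == v and u not in A)}
        B = (set(V) - A) | boundary           # boundary complement to A
        xB = solve_exact(B, E, theta_v, theta_uv, labels)
        mismatch = {v for v in boundary if xA[v] != xB[v]}
        if not mismatch:                      # condition (5): stitch and return
            return {v: (xA[v] if v in A else xB[v]) for v in V}
        A -= mismatch                         # shrink A, i.e. grow B
```

If A shrinks to the empty set, B becomes the whole graph, the border check holds vacuously, and the loop returns the exact solution of the full problem, mirroring the B = G fallback of the paper.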
3.1 Remarks on Algorithm 1
Encouraging boundary consistency condition. It is quite unlikely that the optimal boundary
labeling x*_A|_{∂A}, obtained based only on the subgraph A, coincides with the boundary labeling x*_B|_{∂A}
obtained for the subgraph B. To satisfy this condition the unary potentials should be quite strong on
the border. In other words, they should be at least strictly arc consistent. Indeed they are so, since
we consider the reparametrized potentials θ̂^φ obtained at the LP presolve step of the algorithm.
Single run of LP solver. Reparametrization also allows us to perform only a single run of the LP
solver, keeping the results as if the subproblem over A had been solved at each iteration. The
following theorem states this property formally.
Theorem 2. Let all nodes of a graph A be strictly arc consistent w.r.t. potentials θ̂^φ, let x be the
optimum of E_{A,θ̂^φ}, and let A′ be a subgraph of A. Then x|_{A′} optimizes E_{A′,θ̂^φ}.
Proof. The proof follows directly from Definition 1. Equation (4) holds with the labeling x|_{A′}
plugged in place of x⁰ and the graph A′ in place of G. Hence x|_{A′} attains the minimum of E_{A′,θ̂^φ}.
Presolving B for combinatorial solver. Many combinatorial solvers use linear programming relaxations as a presolving step. Reparametrization of the subproblem over the subgraph B plays the
role of such a presolver, since the optimal reparametrization corresponds to the solution of the dual
problem and makes solving the primal one easier.
Connected components analysis. It is often the case that the subgraph B consists of several connected components. We apply the combinatorial solver to each of them independently.
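The connected-components split mentioned above is a standard graph traversal; a minimal pure-Python BFS sketch (my own helper, not the authors' code):

```python
from collections import deque

def connected_components(nodes, edges):
    """Split the combinatorial subgraph B into its connected components,
    each of which can be handed to the combinatorial solver independently."""
    nodes = set(nodes)
    adj = {v: set() for v in nodes}
    for (u, v) in edges:
        if u in nodes and v in nodes:
            adj[u].add(v)
            adj[v].add(u)
    seen, comps = set(), []
    for s in nodes:
        if s in seen:
            continue
        comp, q = set(), deque([s])
        seen.add(s)
        while q:
            v = q.popleft()
            comp.add(v)
            for w in adj[v] - seen:
                seen.add(w)
                q.append(w)
        comps.append(comp)
    return comps
```

Because the hard part of the model typically decomposes into many small islands, solving each component separately keeps the individual combinatorial subproblems small.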
Dataset                     Step (1) LP (TRW-S)            Step (3) ILP (CPLEX)         |B|
name       |V_G|   |X_v|    # it    time,s    E            # it   time,s   E            min     max
tsukuba   110592     16       250      186     369537        24      36      369218      130     656
venus     166222     20      2000     3083    3048296        10      69     3048043       66     233
teddy     168750     60     10000    14763    1345214         1       ?           ?     2062       ?
family    425632      5     10000    20156     184825         2      11      184813        ?     109
pano      514080      7     10000    34092     169224         ?       ?           ?    24474       ?
Table 1: Results on Middlebury datasets. The column Dataset contains the dataset name, numbers
|VG | of nodes and |Xv | of labels. Columns Step (1) and Step (3) contain number of iterations, time
and attained energy at steps (1) and (3) of Algorithm 1, corresponding to solving the LP relaxation
and use of a combinatorial solver respectively. The column |B| presents starting and final sizes
of the "combinatorial" subgraph B. A dash or "?" stands for failure of CPLEX, due to the size of the
combinatorial subproblem.
Subgraph B growing strategy. One can consider different strategies for increasing the subgraph B,
if the boundary condition (5) does not hold. Our greedy strategy is just one possible option.
Optimality of reparametrization. As one can see, the reparametrization plays a significant role
for our algorithm: it (i) is required for Theorem 1 to hold; (ii) serves as a criterion for the initial
splitting of G into A and B; (iii) makes the local potentials on the border ∂A stronger; (iv) allows
us to avoid multiple runs of the LP solver when the subgraph A shrinks; and (v) can speed up some
combinatorial solvers by serving as a presolve result. However, there is no real reason to search for an
optimal reparametrization: all of the mentioned functionality remains valid even if it is non-optimal. Of
course, one pays a certain price for the non-optimality: (i) the initial subgraph B becomes larger;
(ii) the local potentials become weaker; (iii) the presolve results for the combinatorial solver become less
precise. Note that even for non-optimal reparametrizations Theorem 2 holds, and we need to run the
LP solver only once.
4 Experimental evaluation
We tested our approach on problems from the Middlebury energy minimization benchmark [26] and
the recently published discrete energy minimization benchmark [15], which includes the datasets
from the first one. We have selected computer vision benchmarks intentionally, because many problems in this area fulfill our requirements: the underlying graph is sparse (typically it has a grid
structure) and the LP relaxation delivers good practical results.
Since our experiments serve mainly as proof of concept we used general, though not always the
most efficient solvers: TRW-S [16] as the LP-solver and CPLEX [27] as the combinatorial one
within the OpenGM framework [28]. Unfortunately the original version of TRW-S does not provide
information about strict arc consistency and does not output a reparametrization. Therefore we used
our own implementation in the experiments. Depending on the type of the pairwise factors (Potts,
truncated ℓ2- or ℓ1-norm) we found our implementation up to an order of magnitude slower than the
freely available code of V. Kolmogorov. This fact suggests that the reported processing times can be
significantly reduced by more efficient future implementations.
In the first round of our experiments we considered problems (i.e. graphical models with the specified unary and pairwise factors) of the Middlebury MRF benchmark, most of which remained unsolved, to the best of our knowledge.
MRF stereo dataset consists of 3 models: tsukuba, venus and teddy. Since the optimal integral solution of tsukuba was recently obtained by LP-solvers [11,13], we used this dataset to show
how our approach performs for clearly non-optimal reparametrizations. For this we run TRW-S for
250 iterations only. The size of the subgraph B grew from 130 to 656 nodes out of more than 100000
nodes of the original problem (see Table 1). On venus we obtained an optimal labeling after 10
iterations of our algorithm. During these iterations the size of the set B grew from 66 to 233 nodes,
which is only 0.14% of the original problem size. The dataset teddy remains unsolved: though
Dataset       E_{G,θ}(x*)   Step (1) LP        Step (3) ILP      MCA        MPLP
                            # it    time,s     # it    time,s    time,s     # LP it   LP time,s   ILP time,s
pfau          24010.44      1000     276        14       14      > 55496     10000     > 15000          ?
palm          12253.75       200      65        17       93         561        700        1579        797
clownfish     14794.18       100      32         8       10         328        350        3701       1601
crops         11853.12       100      32         6        6         355        350         790        697
strawberry    11766.34       100      29         8       31         483        350         181       1114
Table 2: Exemplary Potts model comparison. Datasets taken from the Color segmentation (N8)
set. Column E_{G,θ}(x*) shows the optimal energy value; columns Step (1) LP and Step (3) ILP
contain number of iterations and time spent at the steps (1) and (3) of Algorithm 1, corresponding to
solving the LP relaxation and use of a combinatorial solver respectively. The column MCA stands
for the time of the multiway-cut solver reported in [21]. The MPLP [17] column provides number
of iterations and time of the LP presolve and the time of the tightening cutting plane phase (ILP).
the size of the problem was reduced from the original 168750 to 2062 nodes, this still constituted an
unmanageable task for CPLEX, presumably because of the large number of labels, 60 per node.
MRF photomontage models are difficult for dual solvers like TRW-S because their range of values
in pairwise factors is quite large and varies from 0 to more than 500000 in a factor. Hence we used
10000 iterations of TRW-S at the first step of Algorithm 1. For the family dataset the algorithm
decreased the size of the problem for CPLEX from originally over 400000 nodes to slightly more
than 100 and found a solution of the whole problem. In contrast to family the initial subgraph B
for the panorama dataset is much larger (about 25000 nodes) and CPLEX gave up.
MRF inpainting. Though applying TRW-S to both datasets penguin and house allows us to reduce the problem to about 0.5% of its original size, the resulting subgraphs B of respectively 141
and 856 nodes were too large for CPLEX, presumably because of the big number (256) of labels.
(a) Original image
(b) Kovtun's method
(c) Our approach
(d) Optimal Labeling
Figure 2: Results for the pfau-instance from [15]. Gray pixels in (b) and (c) mark nodes that
need to be labeled by the combinatorial solver. Our approach (c) leads to much smaller combinatorial problem instances than Kovtun's method [29] (b) used in [30]. While Kovtun's method obtains
partial optimality for only 5% of the nodes, our approach requires solving only tiny problems with a
combinatorial solver.
Potts models. Our approach appeared to be especially efficient for Potts models. We tested it on
the following datasets from the benchmark [15]: Color segmentation (N4), Color segmentation
(N8), Color segmentation, Brain and managed to solve all 26 problem instances to optimality.
Solving Potts models to optimality is not a big issue anymore due to the recent work [21], which
related this problems to the multiway-cut problem [31] and adopted a quite efficient solver based on
the cutting plane technique. However, we were able to outperform even this specialized solver on
hard instances, which we collected in Table 2. There is indeed a simple explanation for this phenomenon: the difficult instances are those, for which the optimal labeling contains many small areas
corresponding to different labels, see e.g. Fig. 2. This is not very typical for Potts models, where an
optimal labeling typically consists of a small number of large segments. Since the number of cutting
planes, which have to be processed by the multiway-cut solver, grows with the total length of the
segment borders, the overall performance significantly drops on such instances. Our approach is
able to correctly label most of the borders when solving the LP relaxation. Since the resulting subgraph B, passed to the combinatorial solver, is quite small, the corresponding subproblems appear
easy to solve even for a general-purpose solver like CPLEX. Indeed, we expect an increase in the
overall performance of our method if the multiway-cut solver would be used in place of CPLEX.
For Potts models there exist methods [29,32] providing part of an optimal solution, known as partial
optimality. Often they allow to drastically simplify the problem so that it can be solved to global
optimality on the remaining variables very fast, see [30]. However for hard instances like pfau these
methods can label only a small fraction of graph nodes persistently, hence combinatorial solvers
cannot solve the rest, or require a lot of time. Our method does not provide partially optimal variables: if it cannot solve the whole problem no node can be labelled as optimal at all. On the upside
the subgraph B which is given to a combinatorial solver is typically much smaller, see Fig. 2.
For comparison we tested the MPLP solver [17], which is based on coordinate descent LP iterations and tightens the LP relaxation with the cutting plane approach described in [33].
We used its publicly available code [34]. However, this solver did
not manage to solve any of the considered difficult problems (marked as unsolved in
the OpenGM Benchmark [15]), such as color-seg-n8/pfau, mrf stereo/{venus,
teddy}, mrf photomontage/{family, pano}. For easier instances of the Potts model,
we found our solver an order of magnitude faster than MPLP (see Table 2 for the exemplary comparison), though we tried different numbers of LP presolve iterations to speed up the MPLP.
Summary. Our experiments show that our method used even with quite general and not always the
most efficient solvers like TRW-S and CPLEX allows us to (i) find globally optimal solutions of large
scale problem instances, which were previously unsolvable; (ii) solve hard instances of Potts models
an order of magnitude faster than with a modern specialized combinatorial multiway-cut method;
(iii) overcome the cutting-plane based MPLP method on the tested datasets.
5 Conclusions and future work
The method proposed in this paper provides a novel way of combining convex and combinatorial
algorithms to solve large-scale optimization problems to a global optimum. It efficiently
extracts the subgraph where the LP relaxation is not tight and combinatorial algorithms have
to be applied. Since this subgraph often corresponds to only a tiny fraction of the initial problem, the
combinatorial search becomes feasible. The method is very generic: any linear programming and
combinatorial solvers can be used to carry out the respective steps of Algorithm 1. It is particularly
efficient for sparse graphs and when the LP relaxation is almost tight.
In the future we plan to generalize the method to higher-order models, to employ tighter convex
relaxations for the convex part of our solver, and to apply alternative and specialized solvers for
both the convex and the combinatorial parts of our approach.
Acknowledgement. This work has been supported by the German Research Foundation (DFG) within the
program Spatio-/Temporal Graphical Models and Applications in Image Analysis, grant GRK 1653. Authors
thank A. Shekhovtsov, B. Flach, T. Werner, K. Antoniuk and V. Franc from the Center for Machine Perception
of the Czech Technical University in Prague for fruitful discussions.
References
[1] D. Koller and N. Friedman. Probabilistic Graphical Models: Principles and Techniques. MIT Press, 2009.
[2] T. Werner. A linear programming approach to max-sum problem: A review. IEEE Trans. on PAMI, 29(7),
July 2007.
[3] M. J. Wainwright and M. I. Jordan. Graphical models, exponential families, and variational inference.
Found. Trends Mach. Learn., 1(1-2):1–305, 2008.
[4] M. Schlesinger. Syntactic analysis of two-dimensional visual signals in the presence of noise. Kibernetika,
(4):113–130, 1976.
[5] M. Wainwright, T. Jaakkola, and A. Willsky. MAP estimation via agreement on (hyper)trees: message
passing and linear programming approaches. IEEE Trans. on Inf. Th., 51(11), 2005.
[6] N. Komodakis, N. Paragios, and G. Tziritas. MRF energy minimization and beyond via dual decomposition. IEEE Trans. on PAMI, 33(3):531–552, March 2011.
[7] B. Savchynskyy, J. H. Kappes, S. Schmidt, and C. Schnörr. A study of Nesterov's scheme for Lagrangian
decomposition and MAP labeling. In CVPR 2011, 2011.
[8] S. Schmidt, B. Savchynskyy, J. H. Kappes, and C. Schnörr. Evaluation of a first-order primal-dual algorithm for MRF energy minimization. In EMMCVPR, pages 89–103, 2011.
[9] O. Meshi and A. Globerson. An alternating direction method for dual MAP LP relaxation. In ECML/PKDD (2), pages 470–483, 2011.
[10] A. F. T. Martins, M. A. T. Figueiredo, P. M. Q. Aguiar, N. A. Smith, and E. P. Xing. An augmented
Lagrangian approach to constrained MAP inference. In ICML, 2011.
[11] B. Savchynskyy, S. Schmidt, J. H. Kappes, and C. Schnörr. Efficient MRF energy minimization via
adaptive diminishing smoothing. In UAI-2012, pages 746–755.
[12] D. V. N. Luong, P. Parpas, D. Rueckert, and B. Rustem. Solving MRF minimization by mirror descent.
In Advances in Visual Computing, volume 7431, pages 587?598. Springer Berlin Heidelberg, 2012.
[13] J. H. Kappes, B. Savchynskyy, and C. Schn?orr. A bundle approach to efficient MAP-inference by Lagrangian relaxation. In CVPR 2012, 2012.
[14] B. Savchynskyy and S. Schmidt. Getting feasible variable estimates from infeasible ones: MRF local
polytope study. Technical report, arXiv:1210.4081, 2012.
[15] J. H. Kappes, B. Andres, F. A. Hamprecht, C. Schn?orr, S. Nowozin, D. Batra, S. Kim, B. X. Kausler,
J. Lellmann, N. Komodakis, and C. Rother. A comparative study of modern inference techniques for
discrete energy minimization problems. In CVPR, 2013.
[16] V. Kolmogorov. Convergent tree-reweighted message passing for energy minimization. IEEE Trans. on
PAMI, 28(10):1568?1583, 2006.
[17] A. Globerson and T. Jaakkola. Fixing max-product: Convergent message passing algorithms for MAP
LP-relaxations. In NIPS, 2007.
[18] T. Hazan and A. Shashua. Norm-product belief propagation: Primal-dual message-passing for approximate inference. IEEE Trans. on Inf. Theory,, 56(12):6294 ?6316, 2010.
[19] M. I. Schlesinger and K. V. Antoniuk. Diffusion algorithms and structural recognition optimization problems. Cybernetics and Systems Analysis, 47(2):175?192, 2011.
[20] V. Franc, S. Sonnenburg, and T. Werner. Cutting-Plane Methods in Machine Learning, chapter 7, pages
185?218. The MIT Press, Cambridge,USA, 2012.
[21] J. H. Kappes, M. Speth, B. Andres, G. Reinelt, and C. Schn?orr. Globally optimal image partitioning by
multicuts. In EMMCVPR, 2011.
[22] M. Bergtholdt, J. H. Kappes, S. Schmidt, and C. Schn?orr. A study of parts-based object class detection
using complete graphs. IJCV, 87(1-2):93?117, 2010.
[23] M. Sun, M. Telaprolu, H. Lee, and S. Savarese. Efficient and exact MAP-MRF inference using branch
and bound. In AISTATS-2012.
[24] L. Otten and R. Dechter. Anytime AND/OR depth-first search for combinatorial optimization. In Proceedings of the Annual Symposium on Combinatorial Search (SOCS), 2011.
[25] M. C. Cooper, S. de Givry, M. Sanchez, T. Schiex, M. Zytnicki, and T. Werner. Soft arc consistency
revisited. Artificial Intelligence, 174(7-8):449?478, May 2010.
[26] R. Szeliski, R. Zabih, D. Scharstein, O. Veksler, V. Kolmogorov, A. Agarwala, M. Tappen, and C. Rother.
A comparative study of energy minimization methods for Markov random fields with smoothness-based
priors. IEEE Trans. PAMI., 30:1068?1080, June 2008.
[27] ILOG, Inc. ILOG CPLEX: High-performance software for mathematical programming and optimization.
See http://www.ilog.com/products/cplex/.
[28] B. Andres, T. Beier, and J. H. Kappes. OpenGM: A C++ library for discrete graphical models. ArXiv
e-prints, 2012. Projectpage: http://hci.iwr.uni-heidelberg.de/opengm2/.
[29] I. Kovtun. Partial optimal labeling search for a NP-hard subclass of (max, +) problems. In Proceedings
of the DAGM Symposium, 2003.
[30] J. H. Kappes, M. Speth, G. Reinelt, and C. Schn?orr. Towards efficient and exact MAP-inference for large
scale discrete computer vision problems via combinatorial optimization. In CVPR, 2013.
[31] S. Chopra and M. R. Rao. On the multiway cut polyhedron. Networks, 21(1):51?89, 1991.
[32] P. Swoboda, B. Savchynskyy, J. H. Kappes, and C. Schn?orr. Partial optimality via iterative pruning for
the Potts model. In SSVM, 2013.
[33] D. Sontag, T. Meltzer, A. Globerson, Y. Weiss, and T. Jaakkola. Tightening LP relaxations for MAP using
message-passing. In UAI-2008, pages 503?510.
[34] D. Sontag. C++ code for MAP inference in graphical models.
?dsontag/code/mplp_ver2.tgz.
9
See http://cs.nyu.edu/
| 5159 |@word version:1 manageable:1 stronger:1 norm:2 flach:1 tried:1 grk:1 decomposition:6 inpainting:1 carry:1 n8:3 initial:11 contains:4 efficacy:1 series:1 past:1 existing:1 com:1 givry:1 dechter:1 numerical:1 remove:1 drop:1 greedy:1 selected:1 intelligence:1 plane:9 smith:1 recompute:1 math:1 node:33 provides:3 revisited:1 org:1 mathematical:1 constructed:1 supergraph:1 become:1 symposium:2 consists:4 prove:1 ijcv:1 combine:1 hci:1 theoretically:1 x0:7 pairwise:5 indeed:3 pkdd:1 nor:1 growing:1 brain:1 globally:3 encouraging:2 solver:51 increasing:1 becomes:3 provided:2 underlying:3 notation:3 minimizes:1 finding:1 guarantee:3 temporal:1 rustem:1 subclass:1 growth:1 preferable:1 exactly:1 demonstrates:1 partitioning:1 grant:1 appear:1 branchand:1 local:11 xv:47 limit:1 middlebury:3 mach:1 pami:4 therein:1 eb:7 suggests:1 christoph:1 fastest:1 limited:1 range:1 practical:3 unique:1 globerson:3 speth:2 vu:2 practice:1 area:3 attain:2 significantly:2 word:1 pxv:2 get:2 cannot:3 savchynskyy:7 selection:1 close:2 nb:3 applying:1 restriction:2 equivalent:2 map:12 lagrangian:4 optimize:1 center:1 fruitful:1 straightforward:1 schiex:1 starting:1 independently:2 convex:22 focused:1 splitting:1 shorten:1 subgraphs:7 coordinate:5 play:3 exact:3 programming:12 agreement:2 persistently:1 trend:1 satisfying:1 particularly:1 recognition:1 tappen:1 cut:6 labeled:1 role:3 subproblem:4 solved:2 seg:1 kappes:11 connected:4 sun:1 sonnenburg:1 decrease:2 mentioned:1 nesterov:1 dynamic:2 tight:4 solving:6 segment:2 serve:1 basis:1 chapter:1 kolmogorov:3 reparametrize:1 fast:2 artificial:1 labeling:20 hyper:1 neighborhood:1 quite:8 heuristic:2 larger:3 solve:13 widely:1 cvpr:4 otherwise:1 syntactic:1 transform:1 delivered:1 final:1 exemplary:2 propose:2 product:4 combining:2 subgraph:28 description:2 getting:2 requirement:2 optimum:6 produce:2 comparative:2 object:1 bogdan:2 depending:1 spent:1 fixing:1 progress:1 strong:2 c:1 tziritas:1 differ:1 concentrate:2 direction:1 
closely:1 functionality:1 hull:1 meshi:1 require:1 assign:1 preliminary:1 tighter:1 adjusted:1 strictly:5 hold:7 sufficiently:1 considered:2 presumably:2 purpose:1 estimation:1 applicable:1 combinatorial:46 label:13 successfully:1 tool:1 minimization:14 mit:2 clearly:1 always:3 fulfill:1 collaboratory:1 avoid:2 jaakkola:3 focus:1 june:1 potts:12 polyhedron:1 check:1 mainly:2 contrast:2 kim:1 sense:2 inference:9 unary:4 dagm:1 typically:4 unlikely:1 a0:5 diminishing:1 beier:1 koller:1 labelings:8 germany:2 provably:2 pixel:1 arg:2 dual:20 issue:1 overall:2 agarwala:1 development:1 plan:1 smoothing:1 art:1 special:1 constrained:2 mutual:1 marginal:1 field:3 equal:5 once:1 photomontage:2 extraction:1 initialize:1 broad:1 seventy:1 icml:1 constitutes:1 future:3 report:1 np:2 others:1 simplify:1 employ:1 franc:2 penguin:1 modern:2 dfg:1 phase:1 bergtholdt:1 cplex:12 friedman:1 detection:1 message:5 evaluation:4 analyzed:1 hamprecht:1 primal:3 devoted:1 xb:8 bundle:1 integral:4 edge:4 byproduct:1 savchynskyy1:1 nowadays:1 necessary:2 respective:2 partial:4 unless:1 tree:3 iv:1 plugged:1 savarese:1 overcomplete:1 schlesinger:2 increased:1 instance:11 column:7 soft:1 rao:1 formulates:1 werner:4 applicability:1 reparametrized:6 subset:1 veksler:1 too:1 optimally:2 reported:2 varies:1 probabilistic:1 lee:1 together:1 luong:1 return:3 xuv:6 potential:16 de:4 attaining:1 orr:9 sec:1 includes:1 rueckert:1 inc:1 satisfy:2 view:2 lot:1 hazan:1 red:1 portion:1 xing:1 option:1 reparametrization:11 shashua:1 contribution:2 publicly:1 became:1 efficiently:1 yellow:3 generalize:1 weak:1 shekhovtsov:1 andres:3 cybernetics:1 published:1 definition:3 failure:1 energy:19 intentionally:1 pano:2 associated:3 proof:4 unsolved:4 dataset:9 knowledge:1 color:5 anytime:1 segmentation:4 ea:14 trw:8 higher:2 attained:1 originally:1 specify:1 improved:1 wei:1 formulation:1 though:5 shrink:1 just:2 xa:10 until:3 hand:1 propagation:1 gray:1 grows:1 name:2 usa:1 contain:2 concept:1 managed:2 www:1 
hence:7 assigned:3 equality:1 read:3 alternating:1 eg:26 reweighted:1 round:1 komodakis:2 during:1 covering:1 coincides:1 criterion:1 complete:1 demonstrate:2 performs:1 delivers:1 image:6 variational:1 novel:4 recently:2 common:4 specialized:7 tightens:1 volume:1 otten:1 belong:1 significant:4 refer:1 cambridge:1 smoothness:1 uv:26 grid:2 consistency:13 trivially:1 multiway:6 specification:1 etc:1 own:1 recent:5 optimizes:1 inf:2 discard:1 certain:3 minimum:1 additional:1 relaxed:1 fortunately:1 mca:2 freely:1 converge:1 july:1 ii:6 branch:3 multiple:1 upside:1 signal:1 technical:2 faster:3 va:10 scalable:3 basic:1 variant:1 mrf:12 vision:3 crop:1 emmcvpr:2 arxiv:2 iteration:11 sometimes:1 remarkably:1 separately:1 decreased:1 else:1 rest:1 strict:7 induced:2 undirected:2 trws:1 sanchez:1 inconsistent:1 ea0:2 call:1 prague:1 jordan:1 structural:1 presence:1 chopra:1 split:1 iii:4 enough:1 easy:1 meltzer:1 gave:1 restrict:1 idea:7 regarding:1 venus:4 reparametrizations:3 tgz:1 passed:1 effort:1 stereo:2 sontag:2 passing:5 afford:1 remark:2 ssvm:1 detailed:1 locally:3 zabih:1 processed:1 reduced:1 http:3 outperform:1 exist:2 correctly:1 blue:1 serving:1 write:1 discrete:5 group:1 neither:1 diffusion:2 graph:26 relaxation:38 fraction:3 year:1 sum:1 run:7 master:2 unsolvable:1 place:4 family:5 almost:1 vb:2 bound:8 pay:1 dash:1 convergent:2 tsukuba:3 annual:1 adapted:1 constraint:1 software:1 speed:2 optimality:11 min:11 relatively:1 martin:1 palm:1 according:1 march:1 mismatching:1 smaller:4 slightly:1 lp:33 island:1 n4:1 restricted:1 fulfilling:1 taken:1 equation:1 mutually:1 previously:2 remains:2 german:1 ilp:5 needed:1 multicuts:1 tractable:1 end:1 serf:1 adopted:1 available:2 apply:3 generic:1 appending:2 anymore:1 alternative:2 schmidt:5 slower:1 original:6 denotes:1 top:1 include:1 cf:1 remaining:2 graphical:9 xw:4 especially:2 classical:1 objective:3 print:1 strategy:3 fulfillment:1 unable:1 separate:1 berlin:1 thank:1 concatenation:1 outer:1 parametrized:1 
presolve:5 topic:1 polytope:10 strawberry:1 mplp:6 collected:1 reinelt:2 reason:1 willsky:1 rother:2 code:4 length:1 providing:2 minimizing:1 difficult:4 mostly:1 unfortunately:1 potentially:1 subproblems:1 tightening:3 implementation:3 perform:1 observation:1 ilog:3 markov:3 datasets:10 benchmark:9 finite:1 arc:17 descent:3 teddy:4 ecml:1 truncated:1 grew:2 precise:1 introduced:1 evidenced:1 complement:6 required:2 subvector:1 specified:1 schn:10 pfau:4 narrow:1 czech:1 nip:1 trans:6 able:4 beyond:1 pattern:1 mismatch:1 perception:1 appeared:2 sparsity:1 program:2 x0v:6 green:1 max:5 explanation:1 belief:2 wainwright:2 hot:1 kausler:1 largescale:1 indicator:2 scheme:1 library:1 xg:5 ready:1 review:3 literature:1 acknowledgement:1 prior:1 fully:1 expect:1 vg:31 foundation:1 sufficient:1 consistent:7 principle:2 viewpoint:1 tiny:2 nowozin:1 course:2 summary:1 repeat:2 supported:1 keeping:1 figueiredo:1 infeasible:1 drastically:1 weaker:1 allow:1 szeliski:1 neighbor:1 kibernetika:1 correspondingly:1 sparse:6 boundary:12 overcome:1 depth:1 stand:3 valid:1 author:1 made:1 commonly:1 coincide:4 adaptive:1 cope:1 scharstein:1 approximate:4 pruning:1 uni:3 cutting:9 keep:1 global:5 uai:2 spatio:1 search:10 iterative:1 decade:1 table:5 stimulated:1 learn:1 attainment:3 obtaining:1 heidelberg:7 did:1 aistats:1 main:3 dense:1 constituted:1 border:10 whole:5 paul:1 big:3 noise:1 lellmann:1 repeated:1 xu:20 augmented:1 fig:3 slow:1 cooper:1 shrinking:1 fails:1 paragios:1 exponential:1 lie:1 house:1 opengm:3 theorem:8 remained:1 specific:1 nyu:1 evidence:1 essential:1 iwr:2 mirror:1 magnitude:4 cartesian:1 easier:2 visual:2 lagrange:1 partially:1 applies:1 springer:1 corresponds:3 goal:1 marked:1 exposition:1 towards:1 aguiar:1 labelled:1 price:1 feasible:2 hard:6 infinite:1 typical:1 panorama:1 called:8 total:1 batra:1 duality:1 experimental:2 dsontag:1 select:2 formally:1 mark:1 confines:1 tested:4 phenomenon:1 |
Neural Network Routing for Random Multistage
Interconnection Networks
Mark W. Goudreau
Princeton University
and
NEe Research Institute, Inc.
4 Independence Way
Princeton, NJ 08540
C. Lee Giles
NEC Research Institute, Inc.
4 Independence Way
Princeton, NJ 08540
Abstract
A routing scheme that uses a neural network has been developed that can
aid in establishing point-to-point communication routes through multistage interconnection networks (MINs). The neural network is a network
of the type that was examined by Hopfield (Hopfield, 1984 and 1985).
In this work, the problem of establishing routes through random MINs
(RMINs) in a shared-memory, distributed computing system is addressed.
The performance of the neural network routing scheme is compared to two
more traditional approaches - exhaustive search routing and greedy routing. The results suggest that a neural network router may be competitive
for certain RMINs.
1 INTRODUCTION
A neural network has been developed that can aid in establishing point-to-point communication routes through multistage interconnection networks (MINs)
(Goudreau and Giles, 1991). Such interconnection networks have been widely studied (Huang, 1984; Siegel, 1990). The routing problem is of great interest due to
its broad applicability. Although the neural network routing scheme can accommodate many types of communication systems, this work concentrates on its use in a
shared-memory, distributed computing system.
Neural networks have sometimes been used to solve certain interconnection network
Figure 1: The communication system with a neural network router. The input ports (processors) are on the left, while the output ports (memory modules) are on the right.
problems, such as finding legal routes (Brown, 1989; Hakim and Meadows, 1990)
and increasing the throughput of an interconnection network (Brown and Liu, 1990;
Marrakchi and Troudet, 1989). The neural network router that is the subject of
this work, however, differs significantly from these other routers and is specially
designed to handle parallel processing systems that have MINs with random interstage connections. Such random MINs are called RMINs. RMINs tend to have
greater fault-tolerance than regular MINs.
The problem is to allow a set of processors to access a set of memory modules
through the RMIN. A picture of the communication system with the neural network
router is shown in Figure 1. There are m processors and n memory modules. The
system is assumed to be synchronous. At the beginning of a message cycle, some
set of processors may desire to access some set of memory modules. It is the
job of the router to establish as many of these desired connections as possible in
a non-conflicting manner. Obtaining the optimal solution is not critical. Stymied
processors may attempt communication again during the subsequent message cycle.
It is the combination of speed and the quality of the solution that is important.
The object of this work was to discover if the neural network router could be competitive with other types of routers in terms of quality of solution, speed, and resource
Figure 2: Three random multistage interconnection networks. The blocks that are shown are crossbar switches, for which each input may be connected to each output.
utilization. To this end, the neural network routing scheme was compared to two other schemes for routing in RMINs - namely, exhaustive search routing and greedy routing. So far, the results of this investigation suggest that the neural network router may indeed be a practicable alternative for routing in RMINs that are not too large.

2 EXHAUSTIVE SEARCH ROUTING
The exhaustive search routing method is optimal in terms of the ability of the router
to find the best solution. There are many ways to implement such a router. One
approach is described here.
For a given interconnection network, every route from each input to each output
was stored in a database. (The RMINs that were used as test cases in this paper
always had at least one route from each processor to each memory module.) When
a new message cycle began and a new message set was presented to the router,
the router would search through the database for a combination of routes for the
message set that had no conflicts. A conflict was said to occur if more than one
route in the set of routes used a single bus in the interconnection network. In the
case where every combination of routes for the message set had a conflict, the router
would find a combination of routes that could establish the largest possible number
of desired connections.
If there are k possible routes for each message, this algorithm needs a memory of
size Θ(mnk) and, in the worst case, takes exponential time with respect to the size
of the message set. Consequently, it is an impractical approach for most RMINs,
but it provides a convenient upper bound for the performance of other routers.
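The procedure described above can be sketched in a few lines. This is an illustrative simplification rather than the original implementation: routes are represented here as tuples of bus identifiers, two routes conflict if they share a bus, and the fallback to "the largest possible number of desired connections" is approximated by a simple scan within each combination.

```python
from itertools import product

def exhaustive_route(route_lists):
    """Try every combination of candidate routes (one per message).
    route_lists[i] lists the candidate routes for message i; a route is a
    tuple of bus identifiers, and two routes conflict if they share a bus.
    Returns the combination keeping the most conflict-free routes."""
    best, best_count = None, -1
    for combo in product(*route_lists):
        used, kept = set(), 0
        for route in combo:
            if not used.intersection(route):   # route touches no bus in use
                used.update(route)
                kept += 1
        if kept > best_count:
            best, best_count = combo, kept
    return best, best_count

# Two messages whose first-choice routes collide on bus 'b1':
routes = [[('b1', 'b2'), ('b3', 'b4')],   # candidate routes for message 0
          [('b1', 'b5')]]                 # candidate routes for message 1
combo, count = exhaustive_route(routes)
print(count)   # 2: message 0 falls back to ('b3', 'b4'), freeing 'b1'
```

Because every combination is examined, this finds the route set that message-by-message schemes can miss, at the cost of the exponential search the text describes.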
3 GREEDY ROUTING
When greedy routing is applied, message connections are established one at a time.
Once a route is established in a given message cycle, it may not be removed. Greedy
routing does not always provide the optimal routing solution.
The greedy routing algorithm that was used required the same route database as
the exhaustive search router did. However, it selects a combination of routes in
the following manner. When a new message set is present, the router chooses one
desired message and looks at the first route on that message's list of routes. The
router then establishes that route. Next, the router examines a second message
(assuming a second desired message was requested) and sees if one of the routes
in the second message's route list can be established without conflicting with the
already established first message. If such a route does exist, the router establishes
that route and moves on to the next desired message.
In the worst case, the speed of the greedy router is quadratic with respect to the
size of the message set.
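A minimal sketch of this procedure follows; routes are represented as tuples of bus identifiers, which is an illustrative simplification and not the original implementation.

```python
def greedy_route(route_lists):
    """Establish routes one message at a time; once a route is chosen it is
    never removed.  For each message, take the first candidate route that
    shares no bus with the already-established routes."""
    used_buses, established = set(), []
    for candidates in route_lists:
        for route in candidates:
            if not used_buses.intersection(route):
                used_buses.update(route)
                established.append(route)
                break
        else:
            established.append(None)   # message is stymied this cycle
    return established

routes = [[('b1', 'b2'), ('b3', 'b4')],
          [('b1', 'b5')]]
print(greedy_route(routes))   # [('b1', 'b2'), None]
```

In this example greedy commits to ('b1', 'b2') and stymies the second message, although routing message 0 over ('b3', 'b4') would have allowed both; this is the sense in which greedy routing does not always provide the optimal solution.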
4 NEURAL NETWORK ROUTING
The focal point of the neural network router is a neural network of the type that
was examined by Hopfield (Hopfield, 1984 and 1985). The problem of establishing
a set of non-conflicting routes can be reduced to a constraint satisfaction problem.
The structure of the neural network router is completely determined by the RMIN.
When a new set of routes is desired, only certain bias currents in the network change.
The neural network routing scheme also has certain fault-tolerant properties that
will not be described here.
The neural network calculates the routes by converging to a legal routing array. A
legal routing array is 3-dimensional. Therefore, each element of the routing array
will have three indices. If element a_{i,j,k} is equal to 1, then message i is routed through output port k of stage j. We say a_{i,j,k} and a_{l,m,n} are in the same row if i = l and k = n. They are in the same column if i = l and j = m. Finally, they are in the same rod if j = m and k = n.
A legal routing array will satisfy the following three constraints:
1. one and only one element in each column is equal to 1.
2. the elements in successive columns that are equal to 1 represent output ports
that can be connected in the interconnection network.
3. no more than one element in each rod is equal to 1.
The first restriction ensures that each message will be routed through one and
only one output port at each stage of the interconnection network. The second
restriction ensures that each message will be routed through a legal path in the
interconnection network. The third restriction ensures that any resource contention
in the interconnection network is resolved. In other words, only one message can
use a certain output port at a certain stage in the interconnection network. When
all three of these constraints are met, the routing array will provide a legal route
for each message in the message set.
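These three conditions can be checked mechanically. The sketch below assumes the routing array is given as nested 0/1 lists a[m][s][p] and that the RMIN supplies a connectivity predicate; both the array layout and the `connected` function are illustrative assumptions, not part of the original router.

```python
def is_legal(a, connected):
    """Check the three constraints on a routing array a[m][s][p] (0/1 entries).
    connected(s, p_prev, p) must say whether port p_prev of stage s-1 can be
    wired to port p of stage s in the interconnection network."""
    M, S, P = len(a), len(a[0]), len(a[0][0])
    for m in range(M):                      # 1. exactly one 1 per column
        for s in range(S):
            if sum(a[m][s]) != 1:
                return False
    for m in range(M):                      # 2. successive ports are connected
        for s in range(1, S):
            if not connected(s, a[m][s - 1].index(1), a[m][s].index(1)):
                return False
    for s in range(S):                      # 3. at most one 1 per rod
        for p in range(P):
            if sum(a[m][s][p] for m in range(M)) > 1:
                return False
    return True

fully_connected = lambda s, p_prev, p: True
ok  = [[[1, 0], [0, 1]], [[0, 1], [1, 0]]]   # 2 messages, 2 stages, 2 ports
bad = [[[1, 0], [0, 1]], [[1, 0], [1, 0]]]   # both messages use port 0 at stage 0
print(is_legal(ok, fully_connected), is_legal(bad, fully_connected))   # True False
```

The second array fails the rod constraint: both messages claim port 0 at the first stage, which is exactly the resource contention the third restriction rules out.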
Like the routing array, the neural network router will naturally have a 3-dimensional
structure. Each a_{i,j,k} of a routing array is represented by the output voltage of a neuron, V_{i,j,k}. At the beginning of a message cycle, the neurons have a random
output voltage. If the neural network settles in one of the global minima, the
problem will have been solved.
A continuous time mode network was chosen. It was simulated digitally. The neural network has N neurons. The input to neuron i is u_i, its input bias current is I_i, and its output is V_i. The input u_i is converted to the output V_i by a sigmoid function, g(x). Neuron i influences neuron j by a connection represented by T_{ji}. Similarly, neuron j affects neuron i through connection T_{ij}. In order for the Liapunov function (Equation 5) to be constructed, T_{ij} must equal T_{ji}. We further assume that T_{ii} = 0. For the synchronous updating model, there is also a time constant, denoted by τ.
The equations which describe the output of a neuron i are:
du_i/dt = -u_i/τ + Σ_{j=1}^{N} T_{ij} V_j + I_i    (1)

τ = RC    (2)

V_i = g(u_i)    (3)

g(x) = 1 / (1 + e^{-x})    (4)
The equations above force the neural net into stable states that are the local minima
of this approximate energy equation
E = -(1/2) Σ_{i=1}^{N} Σ_{j=1}^{N} T_{ij} V_i V_j - Σ_{i=1}^{N} V_i I_i    (5)
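Equations 1-5 can be simulated with a simple Euler discretization. The tiny three-neuron network below uses arbitrarily chosen symmetric weights and bias currents (not taken from the paper) to illustrate the characteristic behavior: in this example the state descends toward a minimum of the approximate energy E.

```python
import math

def simulate(T, I, tau=1.0, dt=0.01, steps=2000):
    """Euler integration of du_i/dt = -u_i/tau + sum_j T[i][j]*V_j + I_i,
    with V_i = g(u_i) and g(x) = 1/(1 + exp(-x)).  T is symmetric with a
    zero diagonal, as assumed for the energy of Eq. 5."""
    N = len(I)
    g = lambda x: 1.0 / (1.0 + math.exp(-x))
    u = [0.1 * (i + 1) for i in range(N)]          # arbitrary initial state

    def energy(V):                                 # Eq. 5
        return (-0.5 * sum(T[i][j] * V[i] * V[j] for i in range(N) for j in range(N))
                - sum(V[i] * I[i] for i in range(N)))

    energies = []
    for _ in range(steps):
        V = [g(x) for x in u]
        energies.append(energy(V))
        du = [-u[i] / tau + sum(T[i][j] * V[j] for j in range(N)) + I[i]
              for i in range(N)]
        u = [u[i] + dt * du[i] for i in range(N)]
    return [g(x) for x in u], energies

T = [[0, 2, -1], [2, 0, -1], [-1, -1, 0]]          # neurons 0 and 1 excite each other
I = [0.5, 0.5, -0.5]                               # neuron 2 is biased off
V, energies = simulate(T, I)
print(energies[0] > energies[-1])                  # True: E decreases along this run
```

The mutually excitatory pair saturates toward 1 while the inhibited neuron settles near 0, a small-scale analogue of the network settling into a stable state.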
For the neural network, the weights (T_{ij}'s) are set, as are the bias currents (I_i's). It is the output voltages (V_i's) that vary to minimize E.
Let M be the number of messages in a message set, let S be the number of stages
in the RMIN, and let P be the number of ports per stage (P may be a function
of the stage number). Below are the energy functions that implement the three
constraints discussed above:
E1 = (A/2) Σ_{m=1}^{M} Σ_{s=1}^{S-1} Σ_{p=1}^{P} V_{m,s,p} ( -V_{m,s,p} + Σ_{i=1}^{P} V_{m,s,i} )    (6)

E2 = (B/2) Σ_{s=1}^{S-1} Σ_{p=1}^{P} Σ_{m=1}^{M} V_{m,s,p} ( -V_{m,s,p} + Σ_{i=1}^{M} V_{i,s,p} )    (7)

E3 = (C/2) Σ_{m=1}^{M} Σ_{s=1}^{S-1} Σ_{p=1}^{P} ( -2V_{m,s,p} + V_{m,s,p} ( -V_{m,s,p} + Σ_{i=1}^{P} V_{m,s,i} ) )    (8)

E4 = (D/2) Σ_{m=1}^{M} [ Σ_{s=2}^{S-1} Σ_{p=1}^{P} Σ_{i=1}^{P} d(s, p, i) V_{m,s-1,p} V_{m,s,i} + Σ_{j=1}^{P} ( d(1, α_m, j) V_{m,1,j} + d(S, j, β_m) V_{m,S-1,j} ) ]    (9)
A, B, C, and D are arbitrary positive constants.¹ E1 and E3 handle the first constraint in the routing array. E4 deals with the second constraint. E2 ensures the third. From the equation for E4, the function d(s1, p1, p2) represents the "distance" between output port p1 from stage s1 - 1 and output port p2 from stage s1. If p1 can connect to p2 through stage s1, then this distance may be set to zero. If p1 and p2 are not connected through stage s1, then the distance may be set to one. Also, α_m is the source address of message m, while β_m is the destination address of message m.
The entire energy function is:
E = E1 + E2 + E3 + E4    (10)
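Equations 6-8 can be evaluated directly on a candidate routing array. The sketch below uses small illustrative arrays and the constants from the simulation footnote (B = 6.0; reading A = C = 3.0 is an assumption on my part); it shows that E2 vanishes exactly when no rod is shared, while E3 rewards activating one port per column.

```python
def energy_terms(V, A=3.0, B=6.0, C=3.0):
    """E1, E2, E3 of Eqs. 6-8 for a routing array V[m][s][p] of 0/1 entries.
    E1 and E3 push toward exactly one active port per (message, stage) column;
    E2 penalizes two messages sharing a port (a rod) at the same stage."""
    M, S, P = len(V), len(V[0]), len(V[0][0])
    E1 = A / 2 * sum(V[m][s][p] * (-V[m][s][p] + sum(V[m][s][i] for i in range(P)))
                     for m in range(M) for s in range(S) for p in range(P))
    E2 = B / 2 * sum(V[m][s][p] * (-V[m][s][p] + sum(V[i][s][p] for i in range(M)))
                     for s in range(S) for p in range(P) for m in range(M))
    E3 = C / 2 * sum(-2 * V[m][s][p]
                     + V[m][s][p] * (-V[m][s][p] + sum(V[m][s][i] for i in range(P)))
                     for m in range(M) for s in range(S) for p in range(P))
    return E1, E2, E3

legal    = [[[1, 0]], [[0, 1]]]   # 2 messages, 1 stage, 2 ports: no port shared
clashing = [[[1, 0]], [[1, 0]]]   # both messages claim port 0 of the same stage
print(energy_terms(legal))        # (0.0, 0.0, -6.0): only the E3 reward remains
print(energy_terms(clashing))     # (0.0, 6.0, -6.0): E2 flags the shared rod
```

The clashing array pays a B-sized penalty in E2, so gradient descent on E pushes the network away from routing arrays that share a rod.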
Solving for the connection and bias current values as shown in Equation 5 results
in the following equations:
T_{(m1,s1,p1),(m2,s2,p2)} = -(A + C) δ_{m1,m2} δ_{s1,s2} (1 - δ_{p1,p2}) - B δ_{s1,s2} δ_{p1,p2} (1 - δ_{m1,m2}) - D δ_{m1,m2} [ δ_{s1+1,s2} d(s2, p1, p2) + δ_{s1,s2+1} d(s1, p2, p1) ]    (11)

I_{m,s,p} = C - D [ δ_{s,1} d(1, α_m, p) + δ_{s,S-1} d(S, p, β_m) ]    (12)

δ_{i,j} is a Kronecker delta (δ_{i,j} = 1 when i = j, and 0 otherwise).
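Equations 11 and 12 can be transcribed almost directly into code. In the sketch below, grouping the same-column contributions into a single -(A + C) coefficient is inferred by differentiating Equations 6 and 8, so it should be read as a reconstruction rather than a verbatim transcription; `d0` models a fully connected RMIN where every distance is zero.

```python
def weight(m1, s1, p1, m2, s2, p2, d, A=3.0, B=6.0, C=3.0, D=3.0):
    """Connection strength T between neurons (m1, s1, p1) and (m2, s2, p2),
    per Eq. 11: penalize two ports in one column (A, C), two messages on one
    rod (B), and illegal stage-to-stage transitions (D, via the distance d)."""
    delta = lambda a, b: 1 if a == b else 0
    return (-(A + C) * delta(m1, m2) * delta(s1, s2) * (1 - delta(p1, p2))
            - B * delta(s1, s2) * delta(p1, p2) * (1 - delta(m1, m2))
            - D * delta(m1, m2) * (delta(s1 + 1, s2) * d(s2, p1, p2)
                                   + delta(s1, s2 + 1) * d(s1, p2, p1)))

def bias(m, s, p, S, alpha, beta, d, C=3.0, D=3.0):
    """Input bias current I for neuron (m, s, p), per Eq. 12: a constant drive C,
    reduced at the first and last stages when port p is far from the message's
    source address alpha[m] or destination address beta[m]."""
    delta = lambda a, b: 1 if a == b else 0
    return C - D * (delta(s, 1) * d(1, alpha[m], p)
                    + delta(s, S - 1) * d(S, p, beta[m]))

d0 = lambda stage, p_from, p_to: 0      # fully connected RMIN: every hop is legal
print(weight(0, 1, 0, 0, 1, 1, d0))     # -6.0: one message, two ports in one column
print(weight(0, 1, 0, 1, 1, 0, d0))     # -6.0: two messages contending for one rod
print(bias(0, 1, 0, 3, [0], [1], d0))   # 3.0: constant drive, no distance penalty
```

Because only the bias currents depend on the message set (through α and β), the weights can be wired once per RMIN, which is what lets the router reuse the same network across message cycles.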
Essentially, this approach is promising because the neural network is acting as a
parallel computer. The hope is that the neural network will generate solutions much
faster than conventional approaches for routing in RMINs.
The neural network that is used here has the standard problem - namely, a global
minimum is not always reached. But this is not a serious difficulty. Typically,
when the globally minimal energy is not reached by the neural network, some of
the desired routes will have been calculated while others will not have. Even a
locally minimal solution may partially solve the routing problem. Consequently,
this would seem to be a particularly encouraging type of application for this type
of neural network. For this application, the traditional problem of not reaching
the global minimum may not hurt the system's performance very much, while the
expected speed of the neural network in calculating the solution will be a great
asset.
¹For the simulations, τ = 1.0, A = C = D = 3.0, and B = 6.0. These values for A, B, C, and D were chosen empirically.
Table 1: Routing results for the RMINs shown in Figure 2. The * entries were not calculated due to their computational complexity.

         RMIN1                 RMIN2                 RMIN3
M     E_es  E_gr  E_nn      E_es  E_gr  E_nn      E_es  E_gr  E_nn
1     1.00  1.00  1.00      1.00  1.00  1.00      1.00  1.00  1.00
2     1.86  1.83  1.87      1.97  1.97  1.98      1.99  1.88  1.94
3     2.54  2.48  2.51      2.91  2.91  2.93      2.99  2.71  2.87
4     3.08  2.98  2.98      3.80  3.79  3.80      3.94  3.49  3.72
5     3.53  3.38  3.24      4.65  4.62  4.61      *     4.22  4.54
6     3.89  3.67  3.45      5.44  5.39  5.36      *     4.90  5.23
7     4.16  3.91  3.66      6.17  6.13  6.13      *     5.52  5.80
8     4.33  4.10  3.78      6.86  6.82  6.80      *     6.10  6.06
The neural network router uses a large number of neurons. If there are m input
ports, and m output ports for each stage of the RMIN, an upper bound on the
number of neurons needed is m²S. Often, however, the number of neurons actually
required is much smaller than this upper bound.
It has been shown empirically that neural networks of the type used here can converge to a solution in essentially constant time. For example, this claim is made for
the neural network described in (Takefuji and Lee, 1991), which is a slight variation
of the model used here.
5 SIMULATION RESULTS
Figure 2 shows three RMINs that were examined. The routing results for the three
routing schemes are shown in Table 1. E_es represents the expected number of messages to be routed using exhaustive search routing. E_gr is for greedy routing while E_nn is for neural network routing. These values are functions of the size
of the message set, M. Only message sets that did not have obvious conflicts
were examined. For example, no message set could have two processors trying to
communicate to the same memory module. The table shows that, for at least these
three RMINs, the three routing schemes produce solutions that are of similar virtue.
In some cases, the neural network router appears to outperform the supposedly
optimal exhaustive search router. That is because the E_es and E_gr values were calculated by testing every message set of size M, while E_nn was calculated by
testing 1,000 randomly generated message sets of size M. For the neural network
router to appear to perform best, it must have gotten message sets that were easier
to route than average.
In general, the performance of the neural network router degenerates as the size of
the RMIN increases. It is felt that the neural network router in its present form will
not scale well for large RMINs. This is because other work has shown that large
neural networks of the type used here have difficulty converging to a valid solution
(Hopfield, 1985).
6 CONCLUSIONS
The results show that there is not much difference, in terms of quality of solution, for
the three routing methodologies working on these relatively small sample RMINs.
The exhaustive search approach is clearly not a practical approach since it is too
time consuming. But when considering the asymptotic analyses for these three
methodologies one should keep in mind the performance degradation of the greedy
router and the neural network router as the size of the RMIN increases.
Greedy routing and neural network routing would appear to be valid approaches
for RMINs of moderate size. But since asymptotic analysis has a very limited
significance here, the best way to compare the speeds of these two routing schemes
would be to build actual implementations.
Since the neural network router essentially calculates the routes in parallel, it can
reasonably be hoped that a fast, analog implementation for the neural network
router may find solutions faster than the exhaustive search router and even the
greedy router. Thus, the neural network router may be a viable alternative for
RMINs that are not too large.
References
Brown, T. X., (1989), "Neural networks for switching," IEEE Commun. Mag., Vol.
27, pp. 72-81, Nov. 1989.
Brown, T. X. and Liu, K. H., (1990), "Neural network design of a banyan network
controller," IEEE J. on Selected Areas of Comm., pp. 1428-1438, Oct. 1990.
Goudreau, M. W. and Giles, C. L., (1991), "Neural network routing for multiple
stage interconnection networks," Proc. IJCNN 91, Vol. II, p. A-885, July 1991.
Hakim, N. Z. and Meadows, H. E., (1990), "A neural network approach to the setup
of the Benes switch," in Infocom 90, pp. 397-402.
Hopfield, J. J., (1984), "Neurons with graded response have collective computational
properties like those of two-state neurons," Proc. Natl. Acad. Sci. USA, Vol. 81,
pp. 3088-3092, May 1984.
Hopfield, J. J., (1985), "Neural computation on decisions in optimization problems," Biol. Cybern., Vol. 52, pp. 141-152, 1985.
Huang, K. and Briggs, F. A., (1984), Computer Architecture and Parallel Processing,
McGraw-Hill, New York, 1984.
Marrakchi, A. M. and Troudet, T., (1989), "A neural net arbitrator for large crossbar packet-switches," IEEE Trans. on Circ. and Sys., Vol. 36, pp. 1039-1041, July
1989.
Siegel, H. J., (1990), Interconnection Networks for Large Scale Parallel Processing,
McGraw-Hill, New York, 1990.
Takefuji, Y. and Lee, K. C., (1991), "An artificial hysteresis binary neuron: a model
suppressing the oscillatory behaviors of neural dynamics", Biological Cybernetics,
Vol. 64, pp. 353-356, 1991.
First-Order Decomposition Trees
Nima Taghipour
Jesse Davis
Hendrik Blockeel
Department of Computer Science, KU Leuven
Celestijnenlaan 200A, B-3001 Heverlee, Belgium
Abstract
Lifting attempts to speedup probabilistic inference by exploiting symmetries in the
model. Exact lifted inference methods, like their propositional counterparts, work
by recursively decomposing the model and the problem. In the propositional case,
there exist formal structures, such as decomposition trees (dtrees), that represent
such a decomposition and allow us to determine the complexity of inference a priori. However, there is currently no equivalent structure nor analogous complexity
results for lifted inference. In this paper, we introduce FO-dtrees, which upgrade
propositional dtrees to the first-order level. We show how these trees can characterize a lifted inference solution for a probabilistic logical model (in terms of a
sequence of lifted operations), and make a theoretical analysis of the complexity
of lifted inference in terms of the novel notion of lifted width for the tree.
1 Introduction
Probabilistic logical models (PLMs) combine elements of first-order logic with graphical models to
succinctly model complex, uncertain, structured domains [5]. These domains often involve a large
number of objects, making efficient inference a challenge. To address this, Poole [12] introduced
the concept of lifted probabilistic inference, i.e., inference that exploits the symmetries in the model
to improve efficiency. Various lifted algorithms have been proposed, mainly by lifting propositional
inference algorithms [3, 6, 8, 9, 10, 13, 15, 17, 18, 19, 21, 22]. While the relation between the
propositional algorithms is well studied, we have far less insight into their lifted counterparts.
The performance of propositional inference, such as variable elimination [4, 14] or recursive conditioning [2], is characterized in terms of a corresponding tree decomposition of the model, and their
complexity is measured based on properties of the decomposition, mainly its width. It is known that
standard (propositional) inference has complexity exponential in the treewidth [2, 4]. This allows
us to measure the complexity of various inference algorithms only based on the structure of the
model and its given decomposition. Such analysis is typically done using a secondary structure for
representing the decomposition of graphical models, such as decomposition trees (dtrees) [2].
However, the existing notion of treewidth does not provide a tight upper bound for the complexity of lifted inference, since it ignores the opportunities that lifting exploits to improve efficiency.
Currently, there exists no notion analogous to treewidth for lifted inference to analyze inference
complexity based on the model structure. In this paper, we take a step towards filling these gaps.
Our work centers around a new structure for specifying and analyzing a lifted solution to an inference
problem, and makes the following contributions. First, building on the existing structure of dtrees
for propositional graphical models, we propose the structure of First-Order dtrees (FO-dtrees) for
PLMs. An FO-dtree represents both the decomposition of a PLM and the symmetries that lifting
exploits for performing inference. Second, we show how to determine whether an FO-dtree has
a lifted solution, from its structure alone. Third, we present a method to read a lifted solution (a
sequence of lifted inference operations) from a liftable FO-dtree, just like we can read a propositional
inference solution from a dtree. Fourth, we show how the structure of an FO-dtree determines the
1
complexity of inference using its corresponding solution. We formally analyze the complexity of
lifted inference in terms of the novel, symmetry-aware notion of lifted width for FO-dtrees. As such,
FO-dtrees serve as the first formal tool for finding, evaluating, and choosing among lifted solutions.1
2 Background
We use the term "variable" in both the logical and probabilistic sense. We use logvar for logical
variables and randvar for random variables. We write variables in uppercase and their values in
lowercase. Applying a substitution θ = {s1 → t1, ..., sn → tn} to a structure S means replacing
each occurrence of si in S by the corresponding ti. The result is written Sθ.
2.1 Propositional and first-order graphical models
Probabilistic graphical models such as Bayesian networks, Markov networks and factor graphs compactly represent a joint distribution over a set of randvars V = {V1, ..., Vn} by factorizing the distribution into a set of local distributions. For example, factor graphs represent the distribution as a
product of factors: Pr(V1, ..., Vn) = (1/Z) ∏_i φ_i(V_i), where φ_i is a potential function that maps each
configuration of V_i ⊆ V to a real number and Z is a normalization constant.
Probabilistic logical models use concepts from first-order logic to provide a high-level modeling
language for representing propositional graphical models. While many such languages exist (see [5]
for an overview), we focus on parametric factors (parfactors) [12] that generalize factor graphs.
Parfactors use parametrized randvars (PRVs) to represent entire sets of randvars. For example, the
PRV BloodType(X), where X is a logvar, represents one BloodType randvar for each object in
the domain of X (written D(X)). Formally, a PRV is of the form P(X)|C where C is a constraint
consisting of a conjunction of inequalities Xi ≠ t where t ∈ D(Xi) or t ∈ X. It represents the set
of all randvars P(x) where x ∈ D(X) and x satisfies C; this set is denoted rv(P(X)|C).
A parfactor uses PRVs to compactly encode a set of factors. For example, the parfactor φ(Smoke(X), Friends(X, Y), Smoke(Y)) could encode that friends have similar smoking
habits. It imposes a symmetry in the model by stating that the probability that, among two friends,
both, one or none smoke, is the same for all pairs of friends, in the absence of any other information.
Formally, a parfactor is of the form φ(A)|C, where A = (Ai)_{i=1}^n is a sequence of PRVs, C is a
constraint on the logvars appearing in A, and φ is a potential function. The set of logvars occurring in
A is denoted logvar(A). A grounding substitution maps each logvar to an object from its domain. A
parfactor g represents the set of all factors that can be obtained by applying a grounding substitution
to g that is consistent with C; this set is called the grounding of g, and is denoted gr(g). A parfactor
model is a set G of parfactors. It compactly defines a factor graph gr(G) = {gr(g) | g ∈ G}.
Following the literature, we assume that the model is in a normal form, such that (i) each pair of
logvars have either identical or disjoint domains, and (ii) for each pair of co-domain logvars X, X′
in a parfactor φ(A)|C, (X ≠ X′) ∈ C. Every model can be written into this form in poly time [13].
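To make the grounding semantics concrete, here is a minimal sketch that enumerates gr(g) for the friendship parfactor; the helper name `ground_parfactor` and the tuple encoding of PRVs are my own illustrative assumptions, not constructs from the paper:

```python
from itertools import product

def ground_parfactor(prv_args, logvars, domain, constraint=lambda s: True):
    """Enumerate gr(g): one factor per grounding substitution consistent with C."""
    factors = []
    for values in product(domain, repeat=len(logvars)):
        sub = dict(zip(logvars, values))
        if not constraint(sub):
            continue
        # instantiate each PRV argument, e.g. ("F", "X", "Y") -> ("F", "a", "b")
        factors.append(tuple((name,) + tuple(sub[v] for v in args)
                             for name, *args in prv_args))
    return factors

# phi(F(X,Y), F(Y,X)) | X != Y over D(X) = D(Y) = {a, b, c, d}
g = [("F", "X", "Y"), ("F", "Y", "X")]
grounding = ground_parfactor(g, ["X", "Y"], ["a", "b", "c", "d"],
                             constraint=lambda s: s["X"] != s["Y"])
print(len(grounding))  # 4 * 3 = 12 ground factors
```

Each element of `grounding` is one ground factor, e.g. (("F","a","b"), ("F","b","a")), matching the partial groundings used in Example 1.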
2.2 Inference
A typical inference task is to compute the marginal probability of some variables by summing out the
remaining variables, which can be written as: Pr(V′) = ∑_{V\V′} ∏_i φ_i(V_i). This is an instance of the
general sum-product problem [1]. Abusing notation, we write this sum of products as ∑_{V\V′} M(V).
Inference by recursive decomposition. Inference algorithms exploit the factorization of the model
to recursively decompose the original problem into smaller, independent subproblems. This is
achieved by a decomposition of the sum-product, according to a simple decomposition rule.
Definition 1 (The decomposition rule) Let P be a sum-product computation P : ∑_V M(V), and
let M = {M1(V1), ..., Mk(Vk)} be a partitioning (decomposition) of M(V). Then, the decomposition of P w.r.t. M is an equivalent sum-product formula PM, defined as follows:

PM : ∑_{V′} [ ∑_{V1′} M1(V1) · · · ∑_{Vk′} Mk(Vk) ]

where V′ = ⋃_{i≠j} Vi ∩ Vj, and Vi′ = Vi \ V′.

1 Similarly to existing studies on propositional inference [2, 4], our analysis only considers the model's global structure, and makes no assumptions about its local structure.

Figure 1: (a) a factor graph model; (b) a dtree for the model, with its node clusters shown as
cutset, [context]; (c) the corresponding factorization of the sum-product computations.
Most exact inference algorithms recursively apply this rule and compute the final result using top-down or bottom-up dynamic programming [1, 2, 4]. The complexity is then exponential only in
the size of the largest sub-problem solved. Variable elimination (VE) is a bottom-up algorithm that
computes the nested sum-product by repeatedly solving an innermost problem ∑_V M(V, V′) to
eliminate V from the model. At each step, VE eliminates a randvar V from the model by multiplying
the factors in M(V, V′) into one and summing out V from the resulting factor.
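As a concrete illustration of a single VE elimination step (multiply the relevant factors, then sum the variable out), here is a minimal sketch over binary variables; the `(vars, table)` factor encoding is my own illustrative choice, not an implementation from the paper:

```python
from itertools import product

def multiply(f1, f2):
    """Pointwise product of two factors given as (vars, table) pairs over {0,1}."""
    v1, t1 = f1
    v2, t2 = f2
    vs = v1 + [v for v in v2 if v not in v1]
    table = {}
    for assign in product([0, 1], repeat=len(vs)):
        a = dict(zip(vs, assign))
        table[assign] = (t1[tuple(a[v] for v in v1)] *
                         t2[tuple(a[v] for v in v2)])
    return vs, table

def sum_out(f, var):
    """Eliminate var from factor f by summing it out."""
    vs, table = f
    i = vs.index(var)
    new_vs = vs[:i] + vs[i + 1:]
    new_table = {}
    for assign, val in table.items():
        key = assign[:i] + assign[i + 1:]
        new_table[key] = new_table.get(key, 0.0) + val
    return new_vs, new_table

# eliminate B from phi1(A,B) * phi2(B): first multiply, then sum out
phi1 = (["A", "B"], {(0, 0): 1.0, (0, 1): 2.0, (1, 0): 3.0, (1, 1): 4.0})
phi2 = (["B"], {(0,): 0.5, (1,): 0.5})
result = sum_out(multiply(phi1, phi2), "B")
print(result)  # (['A'], {(0,): 1.5, (1,): 3.5})
```

The resulting factor over A replaces the two input factors in the model, exactly as in one VE elimination step.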
Decomposition trees. A single inference problem typically has multiple solutions, each with a
different complexity. A decomposition tree (dtree) is a structure that represents the decomposition
used by a specific solution and allows us to determine its complexity [2]. Formally, a dtree is a
rooted tree in which each leaf represents a factor in the model.2 Each node in the tree represents a
decomposition of the model into the models under its child subtrees. Properties of the nodes can be
used to determine the complexity of inference. Child(T) refers to T's child nodes; rv(T) refers to
the randvars under T, which are those in its factor if T is a leaf and rv(T) = ⋃_{T′∈Child(T)} rv(T′)
otherwise. Using these, the important properties of cutset, context, and cluster are defined as follows:
- cutset(T) = ⋃_{{T1,T2}⊆child(T)} rv(T1) ∩ rv(T2) \ acutset(T), where acutset(T) is the
union of cutsets associated with ancestors of T.
- context(T) = rv(T) ∩ acutset(T)
- cluster(T) = rv(T), if T is a leaf; otherwise cluster(T) = cutset(T) ∪ context(T)
Figure 1 shows a factor graph model, a dtree for it with its clusters, and the corresponding sum-product factorization. Intuitively, the properties of dtree nodes help us analyze the size of sub-problems solved during inference. In short, the time complexity of inference is O(n exp(w)) where n is
the size (number of nodes) of the tree and w is its width, i.e., its maximal cluster size minus one.
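The definitions above translate directly into a recursive computation of cutset, context, cluster, and width. The dtree encoding and the tiny two-factor example below are my own illustrative assumptions (not the paper's Figure 1):

```python
class Dtree:
    def __init__(self, factor_vars=None, children=()):
        self.children = list(children)
        self.factor_vars = set(factor_vars or [])

    def rv(self):
        """Randvars under this node: its factor's vars, or the union over children."""
        if not self.children:
            return set(self.factor_vars)
        return set().union(*(c.rv() for c in self.children))

def annotate(t, acutset=frozenset()):
    """Compute cutset/context/cluster per the definitions above."""
    if not t.children:
        t.cutset, t.context = set(), t.rv() & acutset
        t.cluster = t.rv()
    else:
        pairs_rv = set()
        kids = t.children
        for i in range(len(kids)):
            for j in range(i + 1, len(kids)):
                pairs_rv |= kids[i].rv() & kids[j].rv()
        t.cutset = pairs_rv - acutset
        t.context = t.rv() & acutset
        t.cluster = t.cutset | t.context
        for c in kids:
            annotate(c, frozenset(acutset | t.cutset))
    return t

def width(t):
    """Maximal cluster size in the tree (subtract one for the dtree width)."""
    return max([len(t.cluster)] + [width(c) for c in t.children])

# dtree for factors f1(A,B), f2(B,C): a single root with two leaves
root = annotate(Dtree(children=[Dtree({"A", "B"}), Dtree({"B", "C"})]))
print(root.cutset, width(root) - 1)  # {'B'} 1
```

Here B is shared by both leaves, so it forms the root's cutset; each leaf cluster has two randvars, giving width 1.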
3 Lifted inference: Exploiting symmetries
The inference approach of Section 2.2 ignores the symmetries imposed by a PLM. Lifted inference
aims at exploiting symmetries among a model?s isomorphic parts. Two constructs are isomorphic if
there is a structure preserving bijection between their components. As PLMs make assertions about
whole groups of objects, they contain many isomorphisms, established by a bijection at the level
of objects. Building on this, symmetries arise between constructs at different levels [11], such as
between: randvars, value assignments to randvars, factors, models, or even sum-product problems.
All exact lifted inference methods use two main tools for exploiting symmetries, i.e., for lifting:
1. Divide the problem into isomorphic subproblems, solve one instance, and aggregate
2. Count the number of isomorphic configurations for a group of interchangeable variables
instead of enumerating all possible configurations.
2 We use a slightly modified definition for dtrees, which were originally defined as full binary rooted trees.
Figure 2: Isomorphic decomposition of a model. Dashed boxes indicate the partitioning into groups.
Below, we show how these tools are used by lifted variable elimination (LVE) [3, 10, 12, 17, 18].
Isomorphic decomposition: exploiting symmetry among subproblems. The first lifting tool
identifies cases where the application of the decomposition rule results in a product of isomorphic
sum-product problems. Since such problems all have isomorphic answers, we can solve one problem and reuse its result for all the others. In LVE, this corresponds to lifted elimination, which uses
the operations of lifted multiplication and lifted sum-out on parfactors to evaluate a single representative problem. Afterwards, LVE also attempts to aggregate the result (compute their product) by
taking advantage of their isomorphism. For instance, when the results are identical, LVE computes
their product simply by exponentiating the result of one problem.
Example 1. Figure 2 shows the model defined by φ(F(X, Y), F(Y, X))|X ≠ Y, with D(X) =
D(Y) = {a, b, c, d}. The model asserts that the friendship relationship (F) is likely to be symmetric.
To sum out the randvars F using the decomposition rule, we partition the ground factors into six
groups of the form {φ(F(x, y), F(y, x)), φ(F(y, x), F(x, y))}, i.e., one group for each 2-subset
{x, y} ⊆ {a, b, c, d}. Since no randvars are shared between the groups, this decomposes the problem
into the product of six isomorphic sums ∑_{F(x,y),F(y,x)} φ(F(x, y), F(y, x)) · φ(F(y, x), F(x, y)).
All six sums have the same result c (a scalar). Thus, LVE computes c only once (lifted elimination)
and computes the final result by exponentiation as c^6 (lifted aggregation).
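Numerically, lifted aggregation in Example 1 amounts to computing one representative sum and raising it to the number of isomorphic groups. In the sketch below the potential values are arbitrary, chosen only for illustration:

```python
from itertools import combinations, product

domain = ["a", "b", "c", "d"]
# arbitrary symmetric potential phi over two binary randvars
phi = {(0, 0): 1.0, (0, 1): 0.2, (1, 0): 0.2, (1, 1): 2.0}

# representative subproblem: sum over F(x,y), F(y,x) for one pair {x, y}
c = sum(phi[(u, v)] * phi[(v, u)] for u, v in product([0, 1], repeat=2))

# lifted aggregation: one sum, exponentiated over the 6 isomorphic pairs
n_pairs = len(list(combinations(domain, 2)))
lifted = c ** n_pairs

# ground computation: explicitly multiply the six identical sums
ground = 1.0
for _ in combinations(domain, 2):
    ground *= sum(phi[(u, v)] * phi[(v, u)]
                  for u, v in product([0, 1], repeat=2))

print(abs(lifted - ground) < 1e-9)  # True
```

The lifted route evaluates one sum instead of six; the saving grows with the domain size, since the number of pairs is quadratic in |D(X)|.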
Counting: exploiting interchangeability among randvars. Whereas isomorphic decomposition
exploits symmetry among problems, counting exploits symmetries within a problem, by identifying interchangeable randvars. A group of (k-tuples of) randvars are interchangeable, if permuting the assignment of values to the group results in an equivalent model. Consider a sum-product subproblem ∑_V M(V, V′) that contains a set of n interchangeable (k-tuples of) randvars
V = {(Vi1, Vi2, ..., Vik)}_{i=1}^n. The interchangeability allows us to rewrite V into a single counting
randvar #[V], whose value is the histogram h = {(v1, n1), ..., (vr, nr)}, where ni is the number
of tuples with joint state vi. This allows us to replace a sum over all possible joint states of V with
a sum over the histograms for #[V]. That is, we compute M(V′) = ∑_{i=1}^m MUL(hi) · M(hi, V′),
where MUL(hi) denotes the number of assignments to V that yield the same histogram hi for #[V].
Since the number of histograms is O(n^exp(k)), when n ≫ k, we gain exponential savings over
enumerating all the possible joint assignments, whose number is O(exp(nk)). This lifting tool is
employed in LVE by counting conversion, which rewrites the model in terms of counting randvars.
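For binary randvars (k = 1), the histogram bookkeeping is just multinomial counting: there are only n + 1 histograms instead of 2^n joint assignments, and MUL(h) is a binomial coefficient. A minimal sketch (standard combinatorics, not code from the paper):

```python
from itertools import combinations_with_replacement
from math import factorial

def histograms(n, values):
    """All histograms for n interchangeable randvars over the given values."""
    for combo in combinations_with_replacement(values, n):
        yield {v: combo.count(v) for v in values}

def mul(h):
    """MUL(h): number of joint assignments yielding histogram h (multinomial)."""
    m = factorial(sum(h.values()))
    for count in h.values():
        m //= factorial(count)
    return m

n = 10
hs = list(histograms(n, [True, False]))
print(len(hs), 2 ** n)  # 11 histograms vs 1024 joint assignments
assert sum(mul(h) for h in hs) == 2 ** n  # MUL values partition all assignments
```

Summing over 11 weighted histograms replaces a sum over 1024 assignments; for larger n the gap is exponential, which is exactly the saving counting conversion buys.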
Example 2. Consider the model defined by the parfactor φ(S(X), S(Y))|X ≠ Y, which is
∏_{i≠j} φ(S(xi), S(xj)). The group of randvars {S(x1), ..., S(xn)} are interchangeable here, since
under any value assignment where nt randvars are true and nf randvars are false, the model evaluates to the same value φ′(nt, nf) = φ(t,t)^{nt·(nt−1)} · φ(t,f)^{nt·nf} · φ(f,t)^{nf·nt} · φ(f,f)^{nf·(nf−1)}.
By counting conversion, LVE rewrites this model into φ′(#X[S(X)]).
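The closed form φ′(nt, nf) can be checked against brute-force enumeration; the potential values in this sketch are arbitrary, for illustration only:

```python
from itertools import product

# arbitrary potential phi(S(x_i), S(x_j)) over pairs of boolean states
phi = {(True, True): 2.0, (True, False): 0.5,
       (False, True): 0.5, (False, False): 1.0}

def model_value(assignment):
    """prod_{i != j} phi(S(x_i), S(x_j)) for one joint assignment."""
    val = 1.0
    for i, si in enumerate(assignment):
        for j, sj in enumerate(assignment):
            if i != j:
                val *= phi[(si, sj)]
    return val

def phi_counted(nt, nf):
    """Closed form phi'(nt, nf) from Example 2."""
    return (phi[(True, True)] ** (nt * (nt - 1)) *
            phi[(True, False)] ** (nt * nf) *
            phi[(False, True)] ** (nf * nt) *
            phi[(False, False)] ** (nf * (nf - 1)))

# verify: every joint assignment with the same counts gives the same value
for assignment in product([True, False], repeat=4):
    nt = sum(assignment)
    nf = 4 - nt
    assert abs(model_value(assignment) - phi_counted(nt, nf)) < 1e-9
print("closed form matches brute force")
```

Since the value depends only on (nt, nf), the model can indeed be rewritten over the counting randvar #X[S(X)].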
4 First-order decomposition trees
In this section, we propose the structure of FO-dtrees, which compactly represent a recursive decomposition for a PLM and the symmetries therein.
4.1 Structure
An FO-dtree provides a compact representation of a propositional dtree, just like a PLM is a compact
representation of a propositional model. It does so by explicitly capturing isomorphic decomposition, which in a dtree corresponds to a node with isomorphic children. Using a novel node type,
called a decomposition into partial groundings (DPG) node, an FO-dtree represents the entire set of
Figure 3: (a) dtree (left) and FO-dtree (right) of Example 3; (b) FO-dtree of Example 1
isomorphic child subtrees with a single representative subtree. To formally introduce the structure,
we first show how a PLM can be decomposed into isomorphic parts by DPG.
DPG of a parfactor model. The DPG of a parfactor g is defined w.r.t. a k-subset X =
{X1 , . . . , Xk } of its logvars that all have the same domain DX . For example, the decomposition
used in Example 1, and shown in Figure 2, is the DPG of φ(F(X, Y), F(Y, X))|X ≠ Y w.r.t. logvars {X, Y}. Formally, DPG(g, X) partitions the model defined by g into (|DX| choose k) parts: one part Gx
for each k-subset x = {x1, ..., xk} of the objects in DX. Each Gx in turn contains all k! (partial)
groundings of g that can result from replacing (X1, ..., Xk) with a permutation of (x1, ..., xk).
The key intuition behind DPG is that for any x, x′ ⊆k DX, Gx is isomorphic to Gx′, since any
bijection from x to x′ yields a bijection from Gx to Gx′.
DPG can be applied to a whole model G = {gi}_{i=1}^m, if G's logvars are (re-)named such that (i)
only co-domain logvars share the same name, and (ii) logvars X appear in all parfactors.
Example 3. Consider G = {φ1(P(X)), φ2(A, P(X))}. DPG(G, {X}) = {Gi}_{i=1}^n, where each
group Gi = {φ1(P(xi)), φ2(A, P(xi))} is a grounding of G (w.r.t. X).
FO-dtrees simply add to dtrees special nodes for representing DPGs in parfactor models.
Definition 2 (DPG node) A DPG node TX is a triplet (X, x, C), where X = {X1, ..., Xk} is a set
of logvars with the same domain DX, x = {x1, ..., xk} is a set of representative objects, and C is
a constraint, such that for all i ≠ j: (xi ≠ xj) ∈ C. We denote this node as ∀x : C in the tree.
A representative object is simply a placeholder for a domain object.3 The idea behind our FO-dtrees
is to use TX to graphically indicate a DPG(G, X). For this, each TX has a single child distinguished
as Tx, under which the model is a representative instance of the isomorphic models Gx in the DPG.
Definition 3 (FO-dtree) An FO-dtree is a rooted tree in which
1. non-leaf nodes may be DPG nodes
2. each leaf contains a factor (possibly with representative objects)
3. each leaf with a representative object x is the descendent of exactly one DPG node TX = (X, x, C), such that x ∈ x
4. each leaf that is a descendent of TX has all the representative objects x, and
5. for each TX with X = {X1, ..., Xk}, Tx has k! children {Ti}_{i=1}^{k!}, which are isomorphic
up to a permutation of the representative objects x.
Semantics. Each FO-dtree defines a dtree, which can be constructed by recursively grounding
its DPG nodes. Grounding a DPG node TX yields a (regular) node TX′ with (|DX| choose k) children
{Tx→x′ | x′ ⊆k DX}, where Tx→x′ is the result of replacing x with objects x′ in Tx.
Example 4. Figure 3 (a) shows the dtree of Example 3 and its corresponding FO-dtree, which only
has one instance Tx of all isomorphic subtrees Txi . Figure 3 (b) shows the FO-dtree for Example 1.
3 As such, it plays the same role as a logvar. However, we use both to distinguish between a whole group of
randvars (a PRV P(X)), and a representative of this group (a representative randvar P(x)).
4.2 Properties
Darwiche [2] showed that important properties of a recursive decomposition are captured in the
properties of dtree nodes. In this section, we define these properties for FO-dtrees. Adapting the definitions of the dtree properties, such as cutset, context, and cluster, for FO-dtrees requires accounting
for the semantics of an FO-dtree, which uses DPG nodes and representative objects. More specifically, this requires making the following two modifications: (i) use a function Child*(T), instead
of Child(T), to take into account the semantics of DPG nodes, and (ii) use a function ∩* that finds
the intersection of two sets of representative randvars. First, for a DPG node TX = (X, x, C), we
define: Child*(TX) = {Tx→x′ | x′ ⊆k DX}. Second, for two sets A = {ai}_{i=1}^n and B = {bi}_{i=1}^n
of (representative) randvars we define: A ∩* B = {ai | ∃θ ∈ Θ : ai θ ∈ B}, with Θ the set of
grounding substitutions to their representative objects. Naturally, this provides a basis to define a
\* operator as: A \* B = A \ (A ∩* B).
All the properties of an FO-dtree are defined based on their corresponding definitions for dtrees, by
replacing Child, ∩, \ with Child*, ∩*, \*. Interestingly, all the properties can be computed without
grounding the model, e.g., for a DPG node TX, we can compute rv(TX) simply as rv(Tx)θX^{-1}, with
θX^{-1} = {x → X}.4 Figure 4 shows examples of FO-dtrees with their node clusters.
Figure 4: Three FO-dtrees with their clusters (shown as cutset, [context]).
Counted FO-dtrees. FO-dtrees capture the first lifting tool, isomorphic decomposition, explicitly
in DPG nodes. The second tool, counting, can be simply captured by rewriting interchangeable
randvars in clusters of the tree nodes with counting randvars. This can be done in FO-dtrees similarly
to the operation of counting conversion on logvars in LVE. We call such a tree a counted FO-dtree.
Figure 5(a) shows an FO-dtree (left) and its counted version (right).
Figure 5: (a) an FO-dtree (left) and its counted version (right); (b) lifted operations of each node.
5 Liftable FO-dtrees
When inference can be performed using the lifted operations (i.e., without grounding the model),
it runs in polynomial time in the domain size of logvars. Formally, this is called a domain-lifted
inference solution [19]. Not all FO-dtrees have a lifted solution, which is easy to see since not
all models are liftable [7], though each model has at least one FO-dtree.5 Fortunately, we can
structurally identify the FO-dtrees for which we know a lifted solution.
4 The only non-trivial property is cutset of DPG nodes. We can show that cutset(TX) excludes from
rv(TX) \ acutset(TX) only those PRVs for which X is a root binding class of logvars [8, 19].
What models can the lifting tools handle? Lifted inference identifies isomorphic problems and
solves only one instance of those. Similar to propositional inference, for a lifted method the difficulty
of each sub-problem increases with the number of variables in the problem: those that appear in
the clusters of FO-dtree nodes. When each problem has a bounded (domain-independent) number
of those, the complexity of inference is clearly independent of the domain size. However, a sub-problem can involve a large group of randvars, when there is a PRV in the cluster. While traditional
inference is then intractable, lifting may be able to exploit the interchangeability among the randvars
and reduce the complexity by counting. Thus, whether a problem has a lifted solution boils down
to whether we can rewrite it such that it only contains a bounded (domain-independent) number
of counting randvars and ground randvars. This requires the problem to have enough symmetries
in it such that all the randvars V = V1 , . . . Vn in each cluster can be divided into k groups of
interchangeable (tuples of) randvars V1 , V2 , . . . , Vk , where k is independent of the domain size.
Theorem 1 A (non-counted) FO-dtree has a lifted inference solution if its clusters only consist of
(representative) randvars and 1-logvar PRVs. We call such an FO-dtree a liftable tree.6
Proof sketch. Such a tree has a corresponding LVE solution: (i) each sub-problem that we need to
solve in such a tree can be formulated as a (sum-out) problem on a model consisting of a parfactor with 1-logvar PRVs, and (ii) we can count-convert all the logvars in a parfactor with 1-logvar
PRVs [10, 16], to rewrite all the PRVs into a (bounded) number of counting randvars.7
6 Lifted inference based on FO-dtrees
A dtree can prescribe the operations performed by propositional inference, such as VE [2]. In this
section, we show how a liftable FO-dtree can prescribe an LVE solution for the model, thus providing
the first formal method for symbolic operation selection in lifted inference.
In VE, each inference procedure can be characterized based on its elimination order. Darwiche [2]
shows how we can read a (partial) elimination order from a dtree (by assigning elimination of each
randvar to some tree node). We build on this result to read an LVE solution from a (non-counted)
FO-dtree. For this, we assign to each node a set of lifted operations, including lifted elimination of
PRVs (using multiplication and sum-out), and counting conversion and aggregation of logvars:
- ∑V : A PRV V is eliminated at a node T, if V ∈ cluster(T) \ context(T).
- AGG(X): A logvar X is aggregated at a DPG node TX = (X, x, C), if (i) X ∈ X, and
(ii) X ∉ logvar(cluster(TX)).
- #X : A logvar X is counted at TX, if (i) X ∈ X, and (ii) X ∈ logvar(cluster(TX)).
A lifted solution can be characterized by a sequence of these operations. For this we simply need to
order the operations according to two rules:
1. If node T2 is a descendent of T1, and OPi is performed at Ti, then OP2 ≺ OP1.
2. For operations at the same node, aggregation and counting precede elimination.
Example 5. From the FO-dtree shown in Figure 5(a) we can read the following order of operations:
∑F(X,Y) ≺ #Y ≺ ∑S(X) ≺ AGG(X) ≺ ∑#Y[D(Y)]; see Figure 5(b).
7 Complexity of lifted inference
In this section, we show how to compute the complexity of lifted inference based on an FO-dtree.
Just as the complexity of ground inference for a dtree is parametrized in terms of the tree's width,
we define a lifted width for FO-dtrees and use it to parametrize the complexity of lifted inference.
5 A basic algorithm for constructing an FO-dtree for a PLM is presented in the supplementary material.
6 Note that this only restricts the number of logvars in PRVs appearing in an FO-dtree's clusters, not PRVs
in the PLM. For instance, all the liftable trees in this paper correspond to PLMs containing 2-logvar PRVs.
7 For a more detailed proof, see the supplementary material.
To analyze the complexity, it suffices to compute the complexity of the operations performed at each
node. Similar to standard inference, this depends on the randvars involved in the node's cluster: for
each lifted operation at a node T , LVE manipulates a factor involving the randvars in cluster(T ),
and thus has complexity proportional to O(|range(cluster(T ))|), where range denotes the set of
possible (joint) values that the randvars can take on. However, unlike in standard inference, this
complexity need not be exponential in |rv(cluster(T ))|, since the clusters can contain counting
randvars that allow us to handle interchangeable randvars more efficiently. To accommodate this
in our analysis, we define two widths for a cluster: a ground width wg , which is the number of
ground randvars in the cluster, and a counting width, w# , which is the number of counting randvars
in it. The cornerstone of our analysis is that the complexity of an operation performed at node T is
exponential only in wg , and polynomial in the domain size with degree w# . We can thus compute
the complexity of the entire inference process, by considering the hardest of these operations, and
the number of operations performed. We do so by defining a lifted width for the tree.
Definition 4 (Lifted width) The lifted width of an FO-dtree T is a pair (wg, w#), where wg is the
largest ground width among the clusters of T and w# is the largest counting width among them.
Theorem 2 The complexity of lifted variable elimination for a counted liftable FO-dtree T is:

O(nT · log n · exp(wg) · n#^(w# · r#)),

where nT is the number of nodes in T, (wg, w#) is its lifted width, n (resp., n#) is the largest
domain size among its logvars (resp., counted logvars), and r# is the largest range size among its
tuples of counted randvars.
Proof sketch. We can prove the theorem by showing that (i) the largest range size among clusters,
and thus the largest factor constructed by LVE, is O(exp(wg) · n^(w# · r#)), (ii) in case of aggregation
or counting conversion, each entry of the factor is exponentiated, with complexity O(log n), and
(iii) there are at most nT operations. (For a more detailed proof, see the supplementary material.)
Comparison to ground inference. To understand the savings achieved by lifting, it is useful to
compare the above complexity to that of standard VE on the corresponding dtree, i.e., using the
same decomposition. The complexity of ground VE is: O(nG · exp(wg) · exp(n# · w#)), where nG
is the size of the corresponding propositional dtree. Two important observations are:
1. The number of ground operations is linear in the dtree's size nG, instead of the FO-dtree's
size nT (which is polynomially smaller than nG due to DPG nodes). Roughly speaking,
lifting allows us to perform nT/nG of the ground operations by isomorphic decomposition.
2. Ground VE has a factor exp(n# · w#) in its complexity, instead of n#^(w#) for lifted inference.
The latter is typically exponentially smaller. These speedups, achieved by counting, are the
most significant for lifted inference, and what allows it to tackle high treewidth models.
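To see the scale of the gap between the exponential and polynomial counting factors, here is a quick numerical sketch (the parameter values are illustrative only):

```python
import math

def ground_factor(n_sharp, w_sharp):
    # exp(n# * w#): exponential in the domain size
    return math.exp(n_sharp * w_sharp)

def lifted_factor(n_sharp, w_sharp):
    # n# ** w#: polynomial in the domain size
    return n_sharp ** w_sharp

for n in (5, 20, 100):
    g, l = ground_factor(n, 2), lifted_factor(n, 2)
    print(f"n#={n}: ground ~ {g:.2e}, lifted ~ {l:.2e}")
```

Already at n# = 100 with w# = 2, the ground factor is astronomically larger than the lifted one, which is why counting lets lifted inference handle models whose treewidth grows with the domain size.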
8 Conclusion
We proposed FO-dtrees, a tool for representing a recursive decomposition of PLMs. An FO-dtree
explicitly shows the symmetry between its isomorphic parts, and can thus show a form of decomposition that lifted inference methods employ. We showed how to decide whether an FO-dtree is
liftable (has a corresponding lifted solution), and how to derive the sequence of lifted operations
and the complexity of LVE based on such a tree. While we focused on LVE, our analysis is also
applicable to lifted search-based methods, such as lifted recursive conditioning [13], weighted first-order model counting [21], and probabilistic theorem proving [6]. This allows us to derive an order
of operations and complexity results for these methods, when operating based on an FO-dtree. Further, we can show the close connection between LVE and search-based methods, by analyzing their
performance based on the same FO-dtree. FO-dtrees are also useful to approximate lifted inference
algorithms, such as lifted blocked Gibbs sampling [22] and RCR [20], that attempt to improve their
inference accuracy by identifying liftable subproblems and handling them by exact inference.
Acknowledgements
This research is supported by the Research Fund K.U.Leuven (GOA 08/008, CREA/11/015 and
OT/11/051), and FWO-Vlaanderen (G.0356.12).
References
[1] F. Bacchus, S. Dalmao, and T. Pitassi. Solving #-SAT and Bayesian inference with backtracking search. Journal of Artificial Intelligence Research, 34(2):391, 2009.
[2] Adnan Darwiche. Recursive conditioning. Artif. Intell., 126(1-2):5-41, 2001.
[3] Rodrigo de Salvo Braz, Eyal Amir, and Dan Roth. Lifted first-order probabilistic inference. In Proceedings of the 19th International Joint Conference on Artificial Intelligence (IJCAI), pages 1319-1325, 2005.
[4] Rina Dechter. Bucket elimination: A unifying framework for reasoning. Artif. Intell., 113(1-2):41-85, 1999.
[5] Lise Getoor and Ben Taskar, editors. An Introduction to Statistical Relational Learning. MIT Press, 2007.
[6] Vibhav Gogate and Pedro Domingos. Probabilistic theorem proving. In Proceedings of the 27th Conference on Uncertainty in Artificial Intelligence (UAI), pages 256-265, 2011.
[7] Manfred Jaeger and Guy Van den Broeck. Liftability of probabilistic inference: Upper and lower bounds. In Proceedings of the 2nd International Workshop on Statistical Relational AI (StaRAI), 2012.
[8] Abhay Jha, Vibhav Gogate, Alexandra Meliou, and Dan Suciu. Lifted inference seen from the other side: The tractable features. In Proceedings of the 23rd Annual Conference on Neural Information Processing Systems (NIPS), pages 973-981, 2010.
[9] Kristian Kersting, Babak Ahmadi, and Sriraam Natarajan. Counting belief propagation. In Proceedings of the 25th Conference on Uncertainty in Artificial Intelligence (UAI), pages 277-284, 2009.
[10] Brian Milch, Luke S. Zettlemoyer, Kristian Kersting, Michael Haimes, and Leslie Pack Kaelbling. Lifted probabilistic inference with counting formulas. In Proceedings of the 23rd AAAI Conference on Artificial Intelligence (AAAI), pages 1062-1068, 2008.
[11] Mathias Niepert. Markov chains on orbits of permutation groups. In Proceedings of the 28th Conference on Uncertainty in Artificial Intelligence (UAI), pages 624-633, 2012.
[12] David Poole. First-order probabilistic inference. In Proceedings of the 18th International Joint Conference on Artificial Intelligence (IJCAI), pages 985-991, 2003.
[13] David Poole, Fahiem Bacchus, and Jacek Kisynski. Towards completely lifted search-based probabilistic inference. CoRR, abs/1107.4035, 2011.
[14] David Poole and Nevin Lianwen Zhang. Exploiting contextual independence in probabilistic inference. J. Artif. Intell. Res. (JAIR), 18:263-313, 2003.
[15] Parag Singla and Pedro Domingos. Lifted first-order belief propagation. In Proceedings of the 23rd AAAI Conference on Artificial Intelligence (AAAI), pages 1094-1099, 2008.
[16] Nima Taghipour and Jesse Davis. Generalized counting for lifted variable elimination. In Proceedings of the 2nd International Workshop on Statistical Relational AI (StaRAI), 2012.
[17] Nima Taghipour, Daan Fierens, Jesse Davis, and Hendrik Blockeel. Lifted variable elimination with arbitrary constraints. In Proceedings of the 15th International Conference on Artificial Intelligence and Statistics (AISTATS), pages 1194-1202, 2012.
[18] Nima Taghipour, Daan Fierens, Guy Van den Broeck, Jesse Davis, and Hendrik Blockeel. Completeness results for lifted variable elimination. In Proceedings of the 16th International Conference on Artificial Intelligence and Statistics (AISTATS), 2013.
[19] Guy Van den Broeck. On the completeness of first-order knowledge compilation for lifted probabilistic inference. In Proceedings of the 24th Annual Conference on Advances in Neural Information Processing Systems (NIPS), pages 1386-1394, 2011.
[20] Guy Van den Broeck, Arthur Choi, and Adnan Darwiche. Lifted relax, compensate and then recover: From approximate to exact lifted probabilistic inference. In Proceedings of the 28th Conference on Uncertainty in Artificial Intelligence (UAI), pages 131-141, 2012.
[21] Guy Van den Broeck, Nima Taghipour, Wannes Meert, Jesse Davis, and Luc De Raedt. Lifted probabilistic inference by first-order knowledge compilation. In Proceedings of the 22nd International Joint Conference on Artificial Intelligence (IJCAI), pages 2178-2185, 2011.
[22] Deepak Venugopal and Vibhav Gogate. On lifting the Gibbs sampling algorithm. In Proceedings of the 26th Annual Conference on Advances in Neural Information Processing Systems (NIPS), pages 1-6, 2012.
Binary to Bushy: Bayesian Hierarchical Clustering
with the Beta Coalescent
Yuening Hu1, Jordan Boyd-Graber2, Hal Daumé III3, Z. Irene Ying4
1, 3: Computer Science, 2: iSchool and UMIACS, 4: Agricultural Research Service
1, 2, 3: University of Maryland, 4: Department of Agriculture
[email protected], {jbg,hal}@umiacs.umd.edu, [email protected]
Abstract
Discovering hierarchical regularities in data is a key problem in interacting with large datasets, modeling cognition, and encoding knowledge. A previous Bayesian solution, Kingman's coalescent, provides a probabilistic model for data represented as a binary tree. Unfortunately, this is inappropriate for data better described by bushier trees. We generalize an existing belief propagation framework of Kingman's coalescent to the beta coalescent, which models a wider range of tree structures. Because of the complex combinatorial search over possible structures, we develop new sampling schemes using sequential Monte Carlo and Dirichlet process mixture models, which render inference efficient and tractable. We present results on synthetic and real data that show the beta coalescent outperforms Kingman's coalescent and is qualitatively better at capturing data in bushy hierarchies.
1 The Need For Bushy Hierarchical Clustering
Hierarchical clustering is a fundamental data analysis problem: given observations, what hierarchical
grouping of those observations effectively encodes the similarities between observations? This is a
critical task for understanding and describing observations in many domains [1, 2], including natural
language processing [3], computer vision [4], and network analysis [5]. In all of these cases, natural
and intuitive hierarchies are not binary but are instead bushy, with more than two children per parent
node. Our goal is to provide efficient algorithms to discover bushy hierarchies.
We review existing nonparametric probabilistic clustering algorithms in Section 2, with particular focus on Kingman's coalescent [6] and its generalization, the beta coalescent [7, 8]. While Kingman's coalescent has attractive properties (it is probabilistic and has edge "lengths" that encode how similar clusters are), it only produces binary trees. The beta coalescent (Section 3) does not have this restriction. However, naïve inference is impractical, because bushy trees are more complex: we need to consider all possible subsets of nodes to construct each internal node in the hierarchy.
Our first contribution is a generalization of the belief propagation framework [9] for the beta coalescent to compute the joint probability of observations and trees (Section 3). After describing sequential Monte Carlo posterior inference for the beta coalescent, we develop efficient inference strategies in Section 4, where we use proposal distributions that draw on the connection between Dirichlet processes (a ubiquitous Bayesian nonparametric tool for non-hierarchical clustering) and hierarchical coalescents to make inference tractable. We present results on both synthetic and real data that show the beta coalescent captures bushy hierarchies and outperforms Kingman's coalescent (Section 5).
2 Bayesian Clustering Approaches
Recent hierarchical clustering techniques have been incorporated inside statistical models; this requires formulating clustering as a statistical (often Bayesian) problem. Heller et al. [10] build binary trees based on the marginal likelihoods, extended by Blundell et al. [11] to trees with arbitrary
branching structure. Ryan et al. [12] propose a tree-structured stick-breaking process to generate trees
with unbounded width and depth, which supports data observations at leaves and internal nodes.1
However, these models do not distinguish edge lengths, an important property in distinguishing how "tight" the clustering is at particular nodes.
Hierarchical models can be divided into complementary "fragmentation" and "coagulation" frameworks [7]. Both produce hierarchical partitions of a dataset. Fragmentation models start with a single partition and divide it into ever more specific partitions until only singleton partitions remain. Coagulation frameworks repeatedly merge singleton partitions until only one partition remains. Pitman-Yor diffusion trees [13], a generalization of Dirichlet diffusion trees [14], are an example of a bushy fragmentation model, and they model edge lengths and build non-binary trees.
Instead, our focus is on bottom-up coalescent models [8], one of the coagulation models and
complementary to diffusion trees, which can also discover hierarchies and edge lengths. In this
model, n nodes are observed (we use both observed to emphasize that nodes are known and leaves to
emphasize topology). These observed nodes are generated through some unknown tree with latent
edges and unobserved internal nodes. Each node (both observed and latent) has a single parent. The
convention in such models is to assume our observed nodes come at time $t = 0$, and at time $t = -\infty$ all nodes share a common ur-parent through some sequence of intermediate parents.
Consider a set of $n$ individuals observed at the present (time $t = 0$). All individuals start in one of $n$ singleton sets. After time $t_i$, a set of these nodes coalesces into a new node. Once a set merges, its parent replaces the original nodes. This is called a coalescent event. This process repeats until there is only one node left, and a complete tree structure $\pi$ (Figure 1) is obtained.
Different coalescents are defined by different probabilities of merging a set of nodes. This is called the coalescent rate, defined by a general family of coalescents: the lambda coalescent [7, 15]. We represent the rate via the symbol $\lambda_n^k$, the rate at which $k$ out of $n$ nodes merge into a parent node. From a collection of $n$ nodes, $k \le n$ can coalesce at some coalescent event ($k$ can be different for different coalescent events). The rate of a fraction $x$ of the nodes coalescing is given by $x^{-2}\Lambda(dx)$, where $\Lambda(dx)$ is a finite measure on $[0, 1]$. So $k$ nodes merge at rate
$$\lambda_n^k = \int_0^1 x^{k-2} (1 - x)^{n-k}\, \Lambda(dx) \qquad (2 \le k \le n). \tag{1}$$
Choosing different measures yields different coalescents. A degenerate Dirac delta measure at 0 results in Kingman's coalescent [6], where $\lambda_n^k$ is 1 when $k = 2$ and zero otherwise. Because this gives zero probability to non-binary coalescent events, this only creates binary trees.
Alternatively, using a beta distribution $\mathrm{BETA}(2 - \alpha, \alpha)$ as the measure $\Lambda$ yields the beta coalescent. When $\alpha$ is closer to 1, the tree is bushier; as $\alpha$ approaches 2, it becomes Kingman's coalescent. If we have $n_{i-1}$ nodes at time $t_{i-1}$ in a beta coalescent, the rate $\lambda^{k_i}_{n_{i-1}}$ for a children set of $k_i$ nodes at time $t_i$ and the total rate $\lambda_{n_{i-1}}$ of any children set merging (summing over all possible mergers) is
$$\lambda^{k_i}_{n_{i-1}} = \frac{\Gamma(k_i - \alpha)\,\Gamma(n_{i-1} - k_i + \alpha)}{\Gamma(2 - \alpha)\,\Gamma(\alpha)\,\Gamma(n_{i-1})} \quad\text{and}\quad \lambda_{n_{i-1}} = \sum_{k_i = 2}^{n_{i-1}} \binom{n_{i-1}}{k_i}\, \lambda^{k_i}_{n_{i-1}}. \tag{2}$$
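To make Equation 2 concrete, the rates can be computed in log space with the log-Gamma function. The sketch below is illustrative (not the authors' code, and the function names are our own); it also reports the share of the total rate due to pairwise mergers, which approaches 1 as α approaches 2 (Kingman's limit) and shrinks as α approaches 1 (bushier trees).

```python
from math import lgamma, exp, comb

def log_rate(k, n, alpha):
    # log lambda_n^k from Equation 2:
    # Gamma(k - alpha) Gamma(n - k + alpha) / (Gamma(2 - alpha) Gamma(alpha) Gamma(n))
    return (lgamma(k - alpha) + lgamma(n - k + alpha)
            - lgamma(2 - alpha) - lgamma(alpha) - lgamma(n))

def total_rate(n, alpha):
    # lambda_n: sum over merger sizes k of C(n, k) * lambda_n^k
    return sum(comb(n, k) * exp(log_rate(k, n, alpha)) for k in range(2, n + 1))

def pairwise_share(n, alpha):
    # fraction of the total rate contributed by binary (k = 2) mergers
    return comb(n, 2) * exp(log_rate(2, n, alpha)) / total_rate(n, alpha)
```

Working in log space avoids overflow in the Gamma terms for larger $n$.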
Each coalescent event also has an edge length, a duration $\delta_i$. The duration of an event comes from an exponential distribution, $\delta_i \sim \mathrm{Exp}(\lambda_{n_{i-1}})$, and the parent node forms at time $t_i = t_{i-1} - \delta_i$. Shorter durations mean that the children more closely resemble their parent (the mathematical basis for similarity is specified by a transition kernel, Section 3).
Analogous to Kingman's coalescent, the prior probability of a complete tree $\pi$ is the product over all of its constituent coalescent events $i = 1, \ldots, m$, merging $k_i$ children after duration $\delta_i$:
$$p(\pi) = \prod_{i=1}^{m} \underbrace{p(k_i \mid n_{i-1})}_{\text{merge } k_i \text{ nodes}} \cdot \underbrace{p(\delta_i \mid k_i, n_{i-1})}_{\text{after duration } \delta_i} = \prod_{i=1}^{m} \lambda^{k_i}_{n_{i-1}} \cdot \exp(-\lambda_{n_{i-1}} \delta_i). \tag{3}$$
1 This is appropriate where the entirety of a population is known, both ancestors and descendants. We focus on the case where only the descendants are known. For a concrete example, see Section 5.2.
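The generative process behind Equation 3 can be sketched directly: repeatedly draw a duration from the total-rate exponential, a merger size with probability proportional to $\binom{m}{k}\lambda_m^k$, and a uniformly chosen children set of that size. This is an illustrative sketch of sampling the prior, not the authors' implementation.

```python
import random
from math import lgamma, exp, comb

def sample_beta_coalescent(n, alpha, seed=0):
    """Sample (children_ids, parent_id, duration) events from the prior (Equation 3)."""
    def lam(k, m):  # lambda_m^k, Equation 2
        return exp(lgamma(k - alpha) + lgamma(m - k + alpha)
                   - lgamma(2 - alpha) - lgamma(alpha) - lgamma(m))
    rng = random.Random(seed)
    active, next_id, events = list(range(n)), n, []
    while len(active) > 1:
        m = len(active)
        weights = [comb(m, k) * lam(k, m) for k in range(2, m + 1)]
        delta = rng.expovariate(sum(weights))          # duration ~ Exp(lambda_m)
        k = rng.choices(range(2, m + 1), weights)[0]   # merger size ~ C(m,k) lambda_m^k
        children = rng.sample(active, k)               # which k nodes merge: uniform
        active = [a for a in active if a not in children] + [next_id]
        events.append((children, next_id, delta))
        next_id += 1
    return events
```

Each event reduces the number of active lineages by $k - 1$, so the process always terminates with a single root.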
Algorithm 1 MCMC inference for generating a tree
1: for particle s = 1, 2, . . . , S do
2:   Initialize n^s = n, i = 0, t_0^s = 0, w_0^s = 1.
3:   Initialize the node set V^s = {ρ_0, ρ_1, . . . , ρ_n}.
4: while ∃s ∈ {1 . . . S} where n^s > 1 do
5:   Update i = i + 1.
6:   for particle s = 1, 2, . . . , S do
7:     if n^s == 1 then
8:       Continue.
9:     Propose a duration δ_i^s by Equation 10.
10:    Set coalescent time t_i^s = t_{i-1}^s − δ_i^s.
11:    Sample partitions ψ_i^s from the DPMM.
12:    Propose a set ρ⃗_{c_i}^s according to Equation 11.
13:    Update weight w_i^s by Equation 13.
14:    Update n^s = n^s − |ρ⃗_{c_i}^s| + 1.
15:    Remove ρ⃗_{c_i}^s from V^s; add ρ_i^s to V^s.
16:  Compute effective sample size ESS [16].
17:  if ESS < S/2 then
18:    Resample particles [17].

Figure 1: (a) Kingman's coalescent; (b) the beta coalescent. The beta coalescent can merge four similar nodes at once, while Kingman's coalescent only merges two each time.
3 Beta Coalescent Belief Propagation
The beta coalescent prior only depends on the topology of the tree. In real clustering applications, we also care about a node's children and features. In this section, we define the nodes and their features, and then review how we use message passing to compute the probabilities of trees.
An internal node $\rho_i$ is defined as the merger of other nodes. The children set of node $\rho_i$, $\vec{\rho}_{c_i}$, coalesces into a new node $\rho_i \equiv \bigcup_{b \in \vec{\rho}_{c_i}} \rho_b$. This encodes the identity of the nodes that participate in specific coalescent events; Equation 3, in contrast, only considers the number of nodes involved in an event. In addition, each node is associated with a multidimensional feature vector $y_i$.
Two terms specify the relationship between nodes' features: an initial distribution $p_0(y_i)$ and a transition kernel $\kappa_{t_i t_b}(y_i, y_b)$. The initial distribution can be viewed as a prior or regularizer for feature representations. The transition kernel encourages a child's feature $y_b$ (at time $t_b$) to resemble feature $y_i$ (formed at $t_i$); shorter durations $t_b - t_i$ increase the resemblance.
Intuitively, the transition kernel can be thought of as a similarity score; the more similar the features are, the more likely nodes are. For Brownian diffusion (discussed in Section 4.3), the transition kernel follows a Gaussian distribution centered at a feature. The covariance matrix $\Lambda$ is decided by the mutation rate $\mu$ [18, 9], the probability of a mutation in an individual. Different kernels (e.g., multinomial, tree kernels) can be applied depending on the modeling assumptions of the feature representations.
To compute the probability of the beta coalescent tree $\pi$ and observed data $x$, we generalize the belief propagation framework used by Teh et al. [9] for Kingman's coalescent; this is a more scalable alternative to other approaches for computing the probability of a beta coalescent tree [19]. We define a subtree structure $\pi_i = \{\pi_{i-1}, \delta_i, \vec{\rho}_{c_i}\}$; thus the tree $\pi_m$ after the final coalescent event $m$ is a complete tree $\pi$. The message for node $\rho_i$ marginalizes over the features of the nodes in its children set.2 The total message for a parent node $\rho_i$ is
$$M_{\rho_i}(y_i) = Z_{\rho_i}(x \mid \pi_i)^{-1} \prod_{b \in \vec{\rho}_{c_i}} \int \kappa_{t_i t_b}(y_i, y_b)\, M_{\rho_b}(y_b)\, dy_b, \tag{4}$$
where $Z_{\rho_i}(x \mid \pi_i)$ is the local normalizer, which can be computed as the combination of the initial distribution and the messages from a set of children,
$$Z_{\rho_i}(x \mid \pi_i) = \int p_0(y_i) \prod_{b \in \vec{\rho}_{c_i}} \int \kappa_{t_i t_b}(y_i, y_b)\, M_{\rho_b}(y_b)\, dy_b\, dy_i. \tag{5}$$
2 When $\rho_b$ is a leaf, the message $M_{\rho_b}(y_b)$ is a delta function centered on the observation.
Recursively performing this marginalization through message passing provides the joint probability of a complete tree $\pi$ and the observations $x$. At the root,
$$Z_{-\infty}(x \mid \pi_m) = \int\!\!\int p_0(y_{-\infty})\, \kappa_{-\infty, t_m}(y_{-\infty}, y_m)\, M_{\rho_m}(y_m)\, dy_m\, dy_{-\infty}, \tag{6}$$
where $p_0(y_{-\infty})$ is the initial feature distribution and $m$ is the number of coalescent events. This gives the marginal probability of the whole tree,
$$p(x \mid \pi) = Z_{-\infty}(x \mid \pi_m) \prod_{i=1}^{m} Z_{\rho_i}(x \mid \pi_i). \tag{7}$$
The joint probability of a tree $\pi$ combines the prior (Equation 3) and likelihood (Equation 7),
$$p(x, \pi) = Z_{-\infty}(x \mid \pi_m) \prod_{i=1}^{m} \lambda^{k_i}_{n_{i-1}} \exp(-\lambda_{n_{i-1}} \delta_i) \cdot Z_{\rho_i}(x \mid \pi_i). \tag{8}$$
3.1 Sequential Monte Carlo Inference
Sequential Monte Carlo (SMC), often called particle filtering, estimates a structured sequence of hidden variables based on observations [20]. For coalescent models, this estimates the posterior distribution over tree structures given observations $x$. Initially ($i = 0$) each observation is in a singleton cluster;3 in subsequent particles ($i > 0$), points coalesce into more complicated tree structures $\pi_i^s$, where $s$ is the particle index and we add the superscript $s$ to all related notation to distinguish between particles. We use sequential importance resampling [21, SIR] to weight each particle $s$ at time $t_i$, denoted $w_i^s$.
The weights from SIR approximate the posterior. Computing the weights requires a conditional distribution of data given a latent state $p(x \mid \pi_i^s)$, a transition distribution between latent states $p(\pi_i^s \mid \pi_{i-1}^s)$, and a proposal distribution $f(\pi_i^s \mid \pi_{i-1}^s, x)$. Together, these distributions define the weights
$$w_i^s = w_{i-1}^s\, \frac{p(x \mid \pi_i^s)\, p(\pi_i^s \mid \pi_{i-1}^s)}{f(\pi_i^s \mid \pi_{i-1}^s, x)}. \tag{9}$$
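A minimal sketch of the weight bookkeeping around Equation 9 and the resampling test of Algorithm 1: normalize log-weights stably, compute the effective sample size, and draw resampling indices systematically. The function names are illustrative, not from the paper.

```python
import numpy as np

def normalize_and_ess(log_weights):
    # Normalize SIR log-weights (Equation 9) and compute the effective
    # sample size ESS = 1 / sum_s w_s^2 used for the resampling test.
    lw = np.asarray(log_weights, dtype=float)
    w = np.exp(lw - lw.max())      # subtract the max for numerical stability
    w /= w.sum()
    return w, 1.0 / np.sum(w ** 2)

def systematic_resample(w, rng=None):
    # Draw len(w) particle indices with probabilities w from one uniform draw.
    rng = rng or np.random.default_rng(0)
    S = len(w)
    positions = (rng.random() + np.arange(S)) / S
    idx = np.searchsorted(np.cumsum(w), positions)
    return np.minimum(idx, S - 1)  # guard against floating-point round-off
```

When all particles carry equal weight, the ESS equals the number of particles; degenerate weight sets drive it toward 1, triggering resampling.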
Then we can approximate the posterior distribution of the hidden structure using the normalized weights, which become more accurate with more particles.
To apply SIR inference to belief propagation with the beta coalescent prior, we first define the particle space structure. The $s$th particle represents a subtree $\pi^s_{i-1}$ at time $t^s_{i-1}$, and a transition to a new subtree $\pi^s_i$ takes a set of nodes $\vec{\rho}^{\,s}_{c_i}$ from $\pi^s_{i-1}$ and merges them at $t^s_i$, where $t^s_i = t^s_{i-1} - \delta^s_i$ and $\pi^s_i = \{\pi^s_{i-1}, \delta^s_i, \vec{\rho}^{\,s}_{c_i}\}$. Our proposal distribution must provide the duration $\delta^s_i$ and the children set $\vec{\rho}^{\,s}_{c_i}$ to merge, based on the previous subtree $\pi^s_{i-1}$.
We propose the duration $\delta^s_i$ from the prior exponential distribution and propose a children set from the posterior distribution based on the local normalizers.4 This is the "priorpost" method in Teh et al. [9]. However, this approach is intractable. Given $n_{i-1}$ nodes at time $t_i$, we must consider all possible children sets, $\binom{n_{i-1}}{2} + \binom{n_{i-1}}{3} + \cdots + \binom{n_{i-1}}{n_{i-1}}$ of them. The computational complexity grows from $O(n_{i-1}^2)$ (Kingman's coalescent) to $O(2^{n_{i-1}})$ (beta coalescent).
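The count of candidate children sets can be checked directly; summing the binomial coefficients gives $2^n - n - 1$, which is what makes naïve enumeration exponential. A one-line check (our own illustration):

```python
from math import comb

def num_children_sets(n):
    # All subsets of size >= 2 among n nodes: sum_{k=2}^{n} C(n, k) = 2^n - n - 1.
    return sum(comb(n, k) for k in range(2, n + 1))
```

Already at $n = 30$ this exceeds a billion candidate sets, while Kingman's coalescent considers only $\binom{n}{2}$ pairs.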
4 Efficiently Finding Children Sets with DPMM
We need a more efficient way to consider possible children sets. Even for Kingman's coalescent, which only considers pairs of nodes, Gorur et al. [22] do not exhaustively consider all pairs. Instead, they use data structures from computational geometry to select the R closest pairs as their restriction set, reducing inference to O(n log n). While finding closest pairs is a traditional problem in computational geometry, discovering arbitrary-sized sets is less studied.
3 The relationship between time and particles is non-intuitive. Time t goes backward with subsequent particles. When we use time-specific adjectives for particles, this is with respect to inference.
4 This is a special case of Section 4.2's algorithm, where the restriction set $\Omega_i$ is all possible subsets.
In this section, we describe how we use a Dirichlet process mixture model [23, DPMM] to discover a restriction set $\Omega$, integrating DPMMs into the SMC proposal. We first briefly review what DPMMs are, describe why they are attractive, and then describe how we incorporate DPMMs in SMC inference.
The DPMM is defined by a concentration $\theta$ and a base distribution $G_0$. A distribution over mixtures is drawn from a Dirichlet process (DP): $G \sim \mathrm{DP}(\theta, G_0)$. Each observation $x_i$ is assigned to a mixture component $\phi_i$ drawn from $G$. Because the Dirichlet process is a discrete distribution, observations $i$ and $j$ can have the same mixture component ($\phi_i = \phi_j$). When this happens, points are said to be in the same partition. Posterior inference can discover a distribution over partitions. A full derivation of these sampling equations appears in the supplemental material.
4.1 Attractive Properties of DPMMs
DPMMs and Coalescents Berestycki et al. [8] showed that the distribution over partitions in a Dirichlet process is equivalent to the distribution over a coalescent's allelic partitions (the sets of members that have the same feature representation) when the mutation rate $\mu$ of the associated kernel is half of the Dirichlet concentration $\theta$ (Section 3). For Brownian diffusion, we can connect the DPMM with coalescents by setting the kernel covariance $\Lambda = \mu I$ with $\mu = \theta/2$.
The base distribution $G_0$ is also related to nodes' features. The base distribution $G_0$ of a Dirichlet process generates the probability measure $G$ for each block, which generates the nodes in a block. As a result, we can select a base distribution that fits the distribution of the samples in the coalescent process. For example, if we use a Gaussian distribution for the transition kernel and prior, a Gaussian is also appropriate as the DPMM base distribution.
Effectiveness as a Proposal The necessary condition for a valid proposal [24] is that it should have
support on a superset of the true posterior. In our case, the distribution over partitions provided by
the DPMM considers all possible children sets that could be merged in the coalescent. Thus the new
proposal with DPMM satisfies this requirement, and it is a valid proposal.
In addition, Chen [25] gives a set of desirable criteria for a good proposal distribution: it accounts for outliers, considers the likelihood, and lies close to the true posterior. The DPMM fulfills these criteria. First, the DPMM provides a distribution over all partitions. Varying the concentration parameter $\theta$ can control the length of the tail of the distribution over partitions. Second, choosing the base distribution of the DPMM appropriately models the feature likelihood, i.e., it ensures the DPMM places similar nodes together in a partition with high probability. Third, the DPMM qualitatively provides reasonable children sets when compared with exhaustively considering all children sets (Figure 2(c)).
4.2 Incorporating DPMM in SMC Proposals
To address the inference intractability in Section 3.1, we use the DPMM to obtain a distribution over partitions of nodes. Each partition contains clusters of nodes, and we take a union over all partitions to create a restriction set $\Omega_i = \{\omega_{i1}, \omega_{i2}, \cdots\}$, where each $\omega_{ij}$ is a subset of the $n_{i-1}$ nodes. A standard Gibbs sampler provides these partitions (see supplemental).
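The construction of the restriction set can be sketched by drawing partitions from the DPMM's prior over partitions, the Chinese restaurant process, and unioning their non-singleton clusters (the likelihood terms of the full Gibbs sampler are omitted in this sketch; names and the number of draws are illustrative).

```python
import random

def crp_partition(n, theta, rng):
    # One draw from the Chinese restaurant process with concentration theta:
    # the DPMM's prior over partitions (likelihood terms omitted here).
    tables = []
    for i in range(n):
        r = rng.uniform(0, i + theta)  # total mass: i customers + theta for a new table
        acc = 0.0
        for t in tables:
            acc += len(t)
            if r <= acc:
                t.append(i)
                break
        else:
            tables.append([i])         # open a new table
    return tables

def restriction_set(n, theta, num_draws=10, seed=0):
    # Union the non-singleton clusters of several sampled partitions into a
    # candidate restriction set (singleton clusters cannot be mergers).
    rng = random.Random(seed)
    omega = set()
    for _ in range(num_draws):
        for t in crp_partition(n, theta, rng):
            if len(t) >= 2:
                omega.add(frozenset(t))
    return omega
```

The resulting set of candidate clusters is tiny compared to the $2^n - n - 1$ possible children sets, which is what makes the coarse-to-fine proposal tractable.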
With this restriction set $\Omega_i$, we propose the duration $\delta^s_i$ from the exponential distribution and propose a children set $\vec{\rho}^{\,s}_{c_i}$ based on the local normalizers:
$$f_i(\delta^s_i) = \lambda^s_{n_{i-1}} \exp(-\lambda^s_{n_{i-1}} \delta^s_i) \tag{10}$$
$$f_i(\vec{\rho}^{\,s}_{c_i} \mid \delta^s_i, \pi^s_{i-1}) = \frac{Z_{\rho_i}(x \mid \pi^s_{i-1}, \delta^s_i, \vec{\rho}^{\,s}_{c_i})}{Z_0} \cdot \mathbb{I}\!\left[\vec{\rho}^{\,s}_{c_i} \in \Omega^s_i\right], \tag{11}$$
where $\Omega^s_i$ restricts the candidate children sets, $\mathbb{I}$ is the indicator, and we replace $Z_{\rho_i}(x \mid \pi^s_i)$ with $Z_{\rho_i}(x \mid \pi^s_{i-1}, \delta^s_i, \vec{\rho}^{\,s}_{c_i})$ since they are equivalent here. The normalizer is
$$Z_0 = \sum_{\vec{\rho}^{\,\prime}_{c}} Z_{\rho_i}(x \mid \pi^s_{i-1}, \delta^s_i, \vec{\rho}^{\,\prime}_{c}) \cdot \mathbb{I}\!\left[\vec{\rho}^{\,\prime}_{c} \in \Omega^s_i\right] = \sum_{\vec{\rho}^{\,\prime}_{c} \in \Omega^s_i} Z_{\rho_i}(x \mid \pi^s_{i-1}, \delta^s_i, \vec{\rho}^{\,\prime}_{c}). \tag{12}$$
Applying the true distribution (the $i$th multiplicand from Equation 8) and the proposal distribution (Equation 10 and Equation 11) to the SIR weight update (Equation 9),
$$w^s_i = w^s_{i-1}\, \frac{\lambda^{|\vec{\rho}^{\,s}_{c_i}|}_{n_{i-1}} \cdot \sum_{\vec{\rho}^{\,\prime}_{c} \in \Omega^s_i} Z_{\rho_i}(x \mid \pi^s_{i-1}, \delta^s_i, \vec{\rho}^{\,\prime}_{c})}{\lambda^s_{n_{i-1}}}, \tag{13}$$
where $|\vec{\rho}^{\,s}_{c_i}|$ is the size of the children set $\vec{\rho}^{\,s}_{c_i}$; the parameter $\lambda^{|\vec{\rho}^{\,s}_{c_i}|}_{n_{i-1}}$ is the rate of the children set $\vec{\rho}^{\,s}_{c_i}$ (Equation 2); and $\lambda_{n_{i-1}}$ is the rate of all possible sets given a total number of nodes $n_{i-1}$ (Equation 2).
We can view this new proposal as a coarse-to-fine process: the DPMM proposes candidate children sets, and SMC selects a children set from the DPMM proposals to coalesce. Since the coarse step is faster and filters "bad" children sets, the slower fine step considers fewer children sets, saving computation time (Algorithm 1). If $\Omega_i$ contains all children sets, this recovers exhaustive SMC. We estimate the effective sample size [16] and resample [17] when needed. For smaller sets, the DPMM is sometimes impractical (and only provides singleton clusters); in such cases it is simpler to enumerate all children sets.
4.3 Example Transition Kernel: Brownian Diffusion
This section uses Brownian diffusion as an example of the message passing framework. The initial distribution $p_0(y)$ of each node is $N(0, \Lambda)$; the transition kernel $\kappa_{t_i t_b}(y, \cdot)$ is a Gaussian centered at $y$ with variance $(t_i - t_b)\Lambda$, where $\Lambda = \mu I$, $\mu = \theta/2$, and $\theta$ is the concentration parameter of the DPMM. Then the local normalizer $Z_{\rho_i}(x \mid \pi_i)$ is
$$Z_{\rho_i}(x \mid \pi_i) = \int N(y_i; 0, \Lambda) \prod_{b \in \vec{\rho}_{c_i}} N\!\left(y_i;\, \hat{y}_b,\, \Lambda(\hat{v}_b + t_b - t_i)\right) dy_i, \tag{14}$$
and the node message $M_{\rho_i}(y_i)$ is normally distributed, $M_{\rho_i}(y_i) \sim N(y_i; \hat{y}_{\rho_i}, \Lambda \hat{v}_{\rho_i})$, where
$$\hat{v}_{\rho_i} = \Big(\sum_{b \in \vec{\rho}_{c_i}} (\hat{v}_b + t_b - t_i)^{-1}\Big)^{-1}, \qquad \hat{y}_{\rho_i} = \Big(\sum_{b \in \vec{\rho}_{c_i}} \frac{\hat{y}_b}{\hat{v}_b + t_b - t_i}\Big)\, \hat{v}_{\rho_i}.$$
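The message updates below Equation 14 are a precision-weighted average of the children's messages. A small illustrative implementation for one-dimensional features (our own sketch, with hypothetical names):

```python
def node_message(children, t_i):
    # Combine child messages into the parent's Gaussian message at time t_i,
    # following the precision-weighted updates below Equation 14.
    # `children` is a list of (y_hat_b, v_hat_b, t_b) tuples (1-D features).
    inv = [1.0 / (v_b + t_b - t_i) for (_, v_b, t_b) in children]
    v_i = 1.0 / sum(inv)
    y_i = v_i * sum(y_b * w for (y_b, _, _), w in zip(children, inv))
    return y_i, v_i
```

For leaves, the delta-function message corresponds to $\hat{v}_b = 0$, so each child contributes a precision of $1/(t_b - t_i)$, and children connected by shorter edges pull the parent's mean more strongly.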
5 Experiments: Finding Bushy Trees
In this section, we compare trees built by the beta coalescent (beta) against those built by Kingman's coalescent (kingman) and hierarchical agglomerative clustering [26, hac] on both synthetic and real data. We show beta performs best and can capture data in more interpretable, bushier trees.
Setup The parameter $\alpha$ for the beta coalescent is between 1 and 2. The closer $\alpha$ is to 1, the bushier the tree is; we set $\alpha = 1.2$.5 We set the mutation rate to 1; thus the DPMM parameter is initialized as $\theta = 2$ and updated using slice sampling [27]. All experiments use 100 initial iterations of DPMM inference, with 30 more iterations after each coalescent event (forming a new particle).
Metrics We use three metrics to evaluate the quality of the trees discovered by our algorithm: purity, subtree, and path length. The dendrogram purity score [28, 10] measures how well the leaves in a subtree belong to the same class. For any two leaf nodes, we find the least common subsumer node s and, for the subtree rooted at s, measure the fraction of leaves with the same class label. The subtree score [9] is the ratio between the number of internal nodes with all children in the same class and the total number of internal nodes. The path length score is the average difference, over all pairs, of the lowest common subsumer distance between the true tree and the generated tree, where the lowest common subsumer distance is the distance between the root and the lowest common subsumer of two nodes. For purity and subtree, higher is better; for length, lower is better. Scores are in expectation over particles and averaged across chains.
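As a concrete check, dendrogram purity can be computed exactly on small trees: for every same-class leaf pair, find the least common subsumer and take the fraction of its leaves sharing that class, then average over pairs. A sketch using a nested-list tree encoding of our own (leaves are (name, label) tuples):

```python
from itertools import combinations

def leaves(t):
    """All (name, label) leaves of a tree; internal nodes are lists of subtrees."""
    return [t] if isinstance(t, tuple) else [l for c in t for l in leaves(c)]

def least_common_subsumer(t, a, b):
    """Smallest subtree of t containing both leaves a and b."""
    if isinstance(t, list):
        for c in t:
            ls = leaves(c)
            if a in ls and b in ls:
                return least_common_subsumer(c, a, b)
    return t

def dendrogram_purity(tree):
    """Average purity of the LCS subtree over all same-class leaf pairs."""
    pairs = [(a, b) for a, b in combinations(leaves(tree), 2) if a[1] == b[1]]
    total = 0.0
    for a, b in pairs:
        sub = leaves(least_common_subsumer(tree, a, b))
        total += sum(1 for l in sub if l[1] == a[1]) / len(sub)
    return total / len(pairs)
```

A tree that cleanly separates two classes scores 1.0; splitting each class evenly across both subtrees halves the score, since every same-class pair's LCS becomes the impure root.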
5.1 Synthetic Hierarchies
To test our inference method, we generated synthetic data with edge lengths (full details are available in the supplemental material); we also assume each child of the root has a unique label and that all descendants share their parent node's label (except the root node).
We compared beta against kingman and hac by varying the number of observations (Figure 2(a)) and the feature dimension (Figure 2(b)). In both cases, beta is comparable to kingman and hac (which has no edge lengths). While increasing the feature dimension improves both scores, more observations do not: for synthetic data, a small number of observations suffices to construct a good tree.
⁵ With DPMM proposals, α has a negligible effect, so we elide further analysis for different α values.
[Figure 2 graphics: panels (a) Increasing observations, (b) Increasing dimension, and (c) beta vs. enum, plotting purity and path length scores for beta, hac, kingman, and enum as the number of observations and the feature dimension vary.]
Figure 2: Figures 2(a) and 2(b) show the effect of changing the underlying data size and the number of dimensions. Figure 2(c) shows that our DPMM proposal for children sets is comparable to an exhaustive enumeration of all possible children sets (enum).
To evaluate the effectiveness of using our DPMM as a proposal distribution, we compare against exhaustively enumerating all children set candidates (enum), keeping the SMC otherwise unchanged; this experiment uses ten data points (enum is completely intractable on larger data). Beta uses the DPMM and achieves similar accuracy (Figure 2(c)) while being far more efficient.
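The blowup that makes enum intractable is easy to quantify under the natural reading that a coalescent event may merge any subset of at least two active nodes: n nodes admit 2^n − n − 1 candidate children sets, so the proposal space per event grows exponentially. A back-of-envelope sketch (our own accounting, not the paper's code):

```python
def num_children_sets(n):
    """Number of subsets of n active nodes with size at least 2,
    i.e. candidate children sets for a single coalescent event:
    all 2**n subsets minus the n singletons and the empty set."""
    return 2 ** n - n - 1

small = num_children_sets(10)  # the ten-point enum experiment: about a thousand candidates
large = num_children_sets(20)  # already over a million candidates per event
```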
5.2 Human Tissue Development
Our first real dataset is based on the developmental biology of human tissues. As a human develops, tissues specialize, starting from three embryonic germ layers: the endoderm, ectoderm, and mesoderm. These eventually form all human tissues. For example, one developmental pathway is ectoderm → neural crest → cranial neural crest → optic vesicle → cornea. Because each germ layer specializes into many different types of cells at specific times, it is inappropriate to model this development as a binary tree, or with clustering models lacking path lengths.
Historically, uncovering these specialization pathways has been a painstaking process, requiring inspection of embryos at many stages of development; however, massively parallel sequencing data make it possible to efficiently form developmental hypotheses based on similar patterns of gene expression. To investigate this question we use the transcriptome of 27 tissues with known, unambiguous, time-specific lineages [29]. We reduce the original 182727 dimensions via principal component analysis [30, PCA]. We use five chains with five particles per chain.
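The PCA step is the standard centered-SVD projection; a generic numpy sketch (the array sizes here are illustrative, not the paper's 182727-dimensional data):

```python
import numpy as np

def pca_reduce(X, n_components):
    """Project the rows of X onto the top n_components principal directions."""
    Xc = X - X.mean(axis=0)                        # center each feature
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T                # scores on the leading components

rng = np.random.default_rng(0)
X = rng.normal(size=(27, 200))  # e.g. 27 tissues with high-dimensional expression profiles
Z = pca_reduce(X, 10)
```

Because singular values are returned in decreasing order, the variance captured by each successive component of Z is non-increasing.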
Using reference developmental trees, beta performs better on all three scores (Table 1) because beta
builds up a bushy hierarchy more similar to the true tree. The tree recovered by beta (Figure 3)
reflects human development. The first major differentiation is the division of embryonic cells into
three layers of tissue: endoderm, mesoderm, and ectoderm. These go on to form almost all adult
organs and cells. The placenta (magenta), however, forms from a fourth cell type, the trophoblast;
this is placed in its own cluster at the root of the tree. It also successfully captures ectodermal tissue
lineage. However, mesodermic and endodermic tissues, which are highly diverse, do not cluster as
well. Tissues known to secrete endocrine hormones (dashed borders) cluster together.
5.3 Clustering 20-newsgroups Data
Following Heller et al. [10], we also compare the three models on 20-newsgroups,⁶ a multilevel hierarchy that first divides into general areas (rec, space, and religion) before specializing into areas such as baseball or hockey.⁷ This true hierarchy is inset in the bottom right of Figure 4, and we assume each edge has the same length. We apply latent Dirichlet allocation [31] with 50 topics to this corpus, and use the topic distribution of each document as the document feature. We use five chains with eighty particles per chain.
⁶ http://qwone.com/~jason/20Newsgroups/
⁷ We use the "rec.autos", "rec.sport.baseball", "rec.sport.hockey", and "sci.space" newsgroups but also, in contrast to Heller et al. [10], added "soc.religion.christian".
[Figure 3 graphics: a tissue hierarchy with leaves including Placenta, Stomach, Pancreas, Bone Marrow, Thyroid, Colon, Kidney, Heart, Peripheral Blood Lymphocytes, several brain regions, Spinal Cord, Trachea, Prostate, Uterus, Lung, Spleen, Thymus, Small Intestine, Retina, Monocytes, Mammary Gland, and Bladder, colored by germ layer (ectoderm, mesoderm, endoderm, placenta). Figure 4 graphics: document squares colored by class label (rec.sport.baseball, rec.autos, rec.sport.hockey, sci.space, soc.religion.christian).]
Figure 3: One sample hierarchy of human tissue from beta. Color indicates the germ layer origin of the tissue; a dashed border indicates secretory function. While neural tissues from the ectoderm were clustered correctly, some mesoderm and endoderm tissues were commingled. The clustering also preferred placing secretory tissues together and higher in the tree.
Figure 4: One sample hierarchy of the 20-newsgroups data from beta. Each small square is a document colored by its class label; the true hierarchy is inset at the bottom right. Large rectangles represent a subtree with all the enclosed documents as leaf nodes. Most of the documents from the same group are clustered together; the three "rec" groups are merged together first, and then merged with the religion and space groups.

                  Biological Data                          20-newsgroups Data
            hac     kingman         beta            hac     kingman         beta
purity  ↑   0.453   0.474 ± 0.029   0.492 ± 0.028   0.465   0.510 ± 0.047   0.565 ± 0.081
subtree ↑   0.240   0.302 ± 0.033   0.331 ± 0.050   0.571   0.651 ± 0.013   0.720 ± 0.013
length  ↓   n/a     0.654 ± 0.041   0.586 ± 0.051   n/a     0.477 ± 0.027   0.333 ± 0.047

Table 1: Comparing the three models: beta performs best on all three scores. (hac produces no edge lengths, so its path length score is not defined.)
As with the biological data, beta performs best on all scores for 20-newsgroups. Figure 4 shows a bushy tree built by beta, which mostly recovers the true hierarchy. Documents within a newsgroup merge first, then the three "rec" groups, followed by the "space" and "religion" groups. We only use the topic distribution as features, so better results may be possible with more comprehensive features.
6 Conclusion
This paper generalizes Bayesian hierarchical clustering, moving from Kingman's coalescent to the beta coalescent. Our novel inference scheme based on SMC and the DPMM makes this generalization practical and efficient. This new model provides a bushier tree, often a more realistic view of data.
While we only consider real-valued vectors, which we model through the ubiquitous Gaussian, other likelihoods might be better suited to other applications. For example, for discrete data such as in natural language processing, a multinomial likelihood may be more appropriate. This is a straightforward extension of our model via other transition kernels and DPMM base distributions.
Recent work uses the coalescent as a means of producing a clustering in tandem with a downstream task such as classification [32]. Hierarchies are often taken a priori in natural language processing. Particularly for linguistic tasks, a fully statistical model like the beta coalescent that jointly learns the hierarchy and a downstream task could improve performance in dependency parsing [33] (clustering parts of speech), multilingual sentiment [34] (finding sentiment-correlated words across languages), or topic modeling [35] (finding coherent words that should co-occur in a topic).
Acknowledgments
We would like to thank the anonymous reviewers for their helpful comments, and Héctor Corrada Bravo for pointing us to the human tissue data. This research was supported by NSF grant #1018625. Any opinions, findings, conclusions, or recommendations expressed here are those of the authors and do not necessarily reflect the view of the sponsor.
References
[1] Kaufman, L., P. Rousseeuw. Finding Groups in Data: An Introduction to Cluster Analysis. John Wiley, 1990.
[2] Jain, A. K. Data clustering: 50 years beyond k-means. Pattern Recognition Letters, 31(8):651–666, 2010.
[3] Brown, P. F., V. J. D. Pietra, P. V. deSouza, et al. Class-based n-gram models of natural language. Computational Linguistics, 18:18–4, 1990.
[4] Bergen, J., P. Anandan, K. Hanna, et al. Hierarchical model-based motion estimation. In ECCV. 1992.
[5] Girvan, M., M. E. J. Newman. Community structure in social and biological networks. PNAS, 99:7821–7826, 2002.
[6] Kingman, J. F. C. On the genealogy of large populations. Journal of Applied Probability, 19:27–43, 1982.
[7] Pitman, J. Coalescents with multiple collisions. The Annals of Probability, 27:1870–1902, 1999.
[8] Berestycki, N. Recent progress in coalescent theory. In Ensaios Matematicos, vol. 16. 2009.
[9] Teh, Y. W., H. Daumé III, D. M. Roy. Bayesian agglomerative clustering with coalescents. In NIPS. 2008.
[10] Heller, K. A., Z. Ghahramani. Bayesian hierarchical clustering. In ICML. 2005.
[11] Blundell, C., Y. W. Teh, K. A. Heller. Bayesian rose trees. In UAI. 2010.
[12] Adams, R., Z. Ghahramani, M. Jordan. Tree-structured stick breaking for hierarchical data. In NIPS. 2010.
[13] Knowles, D., Z. Ghahramani. Pitman-Yor diffusion trees. In UAI. 2011.
[14] Neal, R. M. Density modeling and clustering using Dirichlet diffusion trees. Bayesian Statistics, 7:619–629, 2003.
[15] Sagitov, S. The general coalescent with asynchronous mergers of ancestral lines. Journal of Applied Probability, 36:1116–1125, 1999.
[16] Neal, R. M. Annealed importance sampling. Technical report 9805, University of Toronto, 1998.
[17] Fearnhead, P. Sequential Monte Carlo method in filter theory. PhD thesis, University of Oxford, 1998.
[18] Felsenstein, J. Maximum-likelihood estimation of evolutionary trees from continuous characters. Am J Hum Genet, 25(5):471–492, 1973.
[19] Birkner, M., J. Blath, M. Steinrücken. Importance sampling for lambda-coalescents in the infinitely many sites model. Theoretical Population Biology, 79(4):155–73, 2011.
[20] Doucet, A., N. De Freitas, N. Gordon, eds. Sequential Monte Carlo Methods in Practice. 2001.
[21] Gordon, N., D. Salmond, A. Smith. Novel approach to nonlinear/non-Gaussian Bayesian state estimation. IEE Proceedings F, Radar and Signal Processing, 140(2):107–113, 1993.
[22] Görür, D., L. Boyles, M. Welling. Scalable inference on Kingman's coalescent using pair similarity. JMLR, 22:440–448, 2012.
[23] Antoniak, C. E. Mixtures of Dirichlet processes with applications to Bayesian nonparametric problems. The Annals of Statistics, 2(6):1152–1174, 1974.
[24] Cappé, O., S. Godsill, E. Moulines. An overview of existing methods and recent advances in sequential Monte Carlo. Proceedings of the IEEE, 95(5):899, 2007.
[25] Chen, Z. Bayesian filtering: From Kalman filters to particle filters, and beyond. McMaster, [Online], 2003.
[26] Eads, D. Hierarchical clustering (scipy.cluster.hierarchy). SciPy, 2007.
[27] Neal, R. M. Slice sampling. Annals of Statistics, 31:705–767, 2003.
[28] Powers, D. M. W. Unsupervised learning of linguistic structure: an empirical evaluation. International Journal of Corpus Linguistics, 2:91–131, 1997.
[29] Jongeneel, C., M. Delorenzi, C. Iseli, et al. An atlas of human gene expression from massively parallel signature sequencing (MPSS). Genome Res, 15:1007–1014, 2005.
[30] Shlens, J. A tutorial on principal component analysis. Systems Neurobiology Laboratory, Salk Institute for Biological Studies, 2005.
[31] Blei, D. M., A. Ng, M. Jordan. Latent Dirichlet allocation. JMLR, 2003.
[32] Rai, P., H. Daumé III. The infinite hierarchical factor regression model. In NIPS. 2008.
[33] Koo, T., X. Carreras, M. Collins. Simple semi-supervised dependency parsing. In ACL. 2008.
[34] Boyd-Graber, J., P. Resnik. Holistic sentiment analysis across languages: Multilingual supervised latent Dirichlet allocation. In EMNLP. 2010.
[35] Andrzejewski, D., X. Zhu, M. Craven. Incorporating domain knowledge into topic modeling via Dirichlet forest priors. In ICML. 2009.